Re: [PATCH v4 4/5] i386/pc: relocate 4g start to 1T where applicable


From: Joao Martins
Subject: Re: [PATCH v4 4/5] i386/pc: relocate 4g start to 1T where applicable
Date: Fri, 13 May 2022 19:28:57 +0100

On 5/13/22 16:06, Joao Martins wrote:
> On 5/13/22 13:33, Michael S. Tsirkin wrote:
>> On Wed, Apr 20, 2022 at 09:11:37PM +0100, Joao Martins wrote:
>>> It is assumed that the whole GPA space is available to be DMA
>>> addressable, within a given address space limit, except for a
>>> tiny region below 4G. Since Linux v5.4, VFIO validates
>>> whether the selected GPA is indeed valid, i.e. not reserved by
>>> the IOMMU on behalf of some specific devices or platform-defined
>>> restrictions, failing the ioctl(VFIO_DMA_MAP) with -EINVAL otherwise.
>>>
>>> AMD systems with an IOMMU are examples of such platforms, and in
>>> particular may only have these ranges as allowed:
>>>
>>>     0000000000000000 - 00000000fedfffff (0      .. 3.982G)
>>>     00000000fef00000 - 000000fcffffffff (3.983G .. 1011.9G)
>>>     0000010000000000 - ffffffffffffffff (1Tb    .. 16Pb[*])
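For reference (not part of the patch), a minimal sketch of what that
validation looks like from userspace: asking VFIO to map an IOVA inside the
reserved HT hole is expected to fail with -EINVAL on such hosts running
Linux >= 5.4. The group number below is a placeholder, and all error handling
except the final ioctl is omitted:

#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/42", O_RDWR);        /* placeholder group */
    size_t sz = 2 * 1024 * 1024;
    void *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)buf,
        .iova  = 0xfd00000000ULL,                     /* inside the HT hole */
        .size  = sz,
    };

    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map) < 0) {
        printf("VFIO_IOMMU_MAP_DMA: %s\n", strerror(errno)); /* EINVAL expected */
    }
    return 0;
}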
>>>
>>> We already account for the 4G hole, but if the guest is big enough
>>> (>1010G) we will fail to create it, due to the ~12G hole at the 1Tb
>>> boundary that is reserved for HyperTransport (HT).
>>>
>>> [*] there is another reserved region unrelated to HT that exists at
>>> the 256T boundary in Fam 17h according to Errata #1286, documented
>>> also in "Open-Source Register Reference for AMD Family
>>> 17h Processors (PUB)"
>>>
>>> When creating the region above 4G, take into account that on AMD
>>> platforms the HyperTransport range is reserved and hence cannot be
>>> used as GPAs either. In those cases, rather than establishing the
>>> start of ram-above-4g at 4G, relocate it to 1Tb instead. See AMD
>>> IOMMU spec, section 2.1.2 "IOMMU Logical Topology", for more
>>> information on the underlying restriction of IOVAs.
>>>
>>> After accounting for the 1Tb hole on AMD hosts, mtree should
>>> look like:
>>>
>>> 0000000000000000-000000007fffffff (prio 0, i/o):
>>>      alias ram-below-4g @pc.ram 0000000000000000-000000007fffffff
>>> 0000010000000000-000001ff7fffffff (prio 0, i/o):
>>>     alias ram-above-4g @pc.ram 0000000080000000-000000ffffffffff
>>>
>>> If the relocation is done, we also add the HT range to the e820 map
>>> as reserved.
>>>
>>> Default phys-bits on QEMU is TCG_PHYS_BITS (40), which is enough
>>> to address 1Tb (0xff ffff ffff). On AMD platforms, if the
>>> ram-above-4g relocation is desired but the CPU wasn't configured
>>> with big enough phys-bits, print an error message to the user and
>>> do not relocate the above-4g region.
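As a rough sketch of that check (again, not part of the patch): with the
default 40 phys-bits the last addressable GPA is 0xff ffff ffff, i.e. 1Tb - 1,
so any RAM placed at AMD_ABOVE_1TB_START (0x100 0000 0000) is already out of
reach. Assuming, say, 128G of RAM above 4G and ignoring device memory, SGX EPC
and the 64-bit PCI hole:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define AMD_HT_END          0xffffffffffULL
#define AMD_ABOVE_1TB_START (AMD_HT_END + 1)

int main(void)
{
    uint64_t ram_above_4g = 128ULL << 30;    /* example amount relocated to 1Tb */

    for (unsigned phys_bits = 40; phys_bits <= 42; phys_bits++) {
        uint64_t maxphysaddr = (1ULL << phys_bits) - 1;
        uint64_t maxusedaddr = AMD_ABOVE_1TB_START + ram_above_4g - 1;

        printf("phys-bits=%u: limit=0x%" PRIx64 " needed=0x%" PRIx64 " -> %s\n",
               phys_bits, maxphysaddr, maxusedaddr,
               maxphysaddr < maxusedaddr ? "too low" : "ok");
    }
    return 0;
}

With phys-bits=40 the check already fails, so the relocation needs at least
41 bits plus room for whatever else sits above the relocated RAM.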
>>>
>>> Suggested-by: Igor Mammedov <imammedo@redhat.com>
>>> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
>>> ---
>>>  hw/i386/pc.c | 111 +++++++++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 111 insertions(+)
>>>
>>> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
>>> index 8eaa32ee2106..aac32ba0bd02 100644
>>> --- a/hw/i386/pc.c
>>> +++ b/hw/i386/pc.c
>>> @@ -803,6 +803,110 @@ void xen_load_linux(PCMachineState *pcms)
>>>  #define PC_ROM_ALIGN       0x800
>>>  #define PC_ROM_SIZE        (PC_ROM_MAX - PC_ROM_MIN_VGA)
>>>  
>>> +/*
>>> + * AMD systems with an IOMMU have an additional hole close to the
>>> + * 1Tb boundary, which is a range of special GPAs that cannot be DMA mapped.
>>> + * Depending on kernel version, VFIO may or may not let you DMA map those ranges.
>>> + * Starting with Linux v5.4 we validate it, and can't create guests on AMD machines
>>> + * with certain memory sizes. It's also wrong to use those IOVA ranges,
>>> + * as doing so can lead to IOMMU INVALID_DEVICE_REQUEST or worse.
>>> + * The ranges reserved for Hyper-Transport are:
>>> + *
>>> + * FD_0000_0000h - FF_FFFF_FFFFh
>>> + *
>>> + * The ranges represent the following:
>>> + *
>>> + * Base Address   Top Address  Use
>>> + *
>>> + * FD_0000_0000h FD_F7FF_FFFFh Reserved interrupt address space
>>> + * FD_F800_0000h FD_F8FF_FFFFh Interrupt/EOI IntCtl
>>> + * FD_F900_0000h FD_F90F_FFFFh Legacy PIC IACK
>>> + * FD_F910_0000h FD_F91F_FFFFh System Management
>>> + * FD_F920_0000h FD_FAFF_FFFFh Reserved Page Tables
>>> + * FD_FB00_0000h FD_FBFF_FFFFh Address Translation
>>> + * FD_FC00_0000h FD_FDFF_FFFFh I/O Space
>>> + * FD_FE00_0000h FD_FFFF_FFFFh Configuration
>>> + * FE_0000_0000h FE_1FFF_FFFFh Extended Configuration/Device Messages
>>> + * FE_2000_0000h FF_FFFF_FFFFh Reserved
>>> + *
>>> + * See AMD IOMMU spec, section 2.1.2 "IOMMU Logical Topology",
>>> + * Table 3: Special Address Controls (GPA) for more information.
>>> + */
>>> +#define AMD_HT_START         0xfd00000000UL
>>> +#define AMD_HT_END           0xffffffffffUL
>>> +#define AMD_ABOVE_1TB_START  (AMD_HT_END + 1)
>>> +#define AMD_HT_SIZE          (AMD_ABOVE_1TB_START - AMD_HT_START)
>>> +
>>> +static hwaddr x86_max_phys_addr(PCMachineState *pcms,
>>> +                                hwaddr above_4g_mem_start,
>>> +                                uint64_t pci_hole64_size)
>>> +{
>>> +    PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
>>> +    X86MachineState *x86ms = X86_MACHINE(pcms);
>>> +    MachineState *machine = MACHINE(pcms);
>>> +    ram_addr_t device_mem_size = 0;
>>> +    hwaddr base;
>>> +
>>> +    if (!x86ms->above_4g_mem_size) {
>>> +       /*
>>> +        * 32-bit pci hole goes from
>>> +        * end-of-low-ram (@below_4g_mem_size) to IOAPIC.
>>> +        */
>>> +        return IO_APIC_DEFAULT_ADDRESS - 1;
>>> +    }
>>> +
>>> +    if (pcmc->has_reserved_memory &&
>>> +       (machine->ram_size < machine->maxram_size)) {
>>> +        device_mem_size = machine->maxram_size - machine->ram_size;
>>> +    }
>>> +
>>> +    base = ROUND_UP(above_4g_mem_start + x86ms->above_4g_mem_size +
>>> +                    pcms->sgx_epc.size, 1 * GiB);
>>> +
>>> +    return base + device_mem_size + pci_hole64_size;
>>> +}
>>> +
>>> +static void x86_update_above_4g_mem_start(PCMachineState *pcms,
>>> +                                          uint64_t pci_hole64_size)
>>> +{
>>> +    X86MachineState *x86ms = X86_MACHINE(pcms);
>>> +    CPUX86State *env = &X86_CPU(first_cpu)->env;
>>> +    hwaddr start = x86ms->above_4g_mem_start;
>>> +    hwaddr maxphysaddr, maxusedaddr;
>>> +
>>> +    /*
>>> +     * The HyperTransport range close to the 1T boundary is unique to AMD
>>> +     * hosts with IOMMUs enabled. Restrict the ram-above-4g relocation
>>> +     * to above 1T to AMD vCPUs only.
>>> +     */
>>> +    if (!IS_AMD_CPU(env)) {
>>> +        return;
>>> +    }
>>> +
>>> +    /* Bail out if max possible address does not cross HT range */
>>> +    if (x86_max_phys_addr(pcms, start, pci_hole64_size) < AMD_HT_START) {
>>> +        return;
>>> +    }
>>> +
>>> +    /*
>>> +     * Relocating ram-above-4G requires more than TCG_PHYS_BITS (40).

I've eaten a word here; it should be TCG_PHYS_ADDR_BITS and not TCG_PHYS_BITS.

>>> +     * So require phys-bits to be appropriately sized in order to
>>> +     * proceed with the above-4g-region relocation and thus boot.
>>> +     */
>>> +    start = AMD_ABOVE_1TB_START;
>>> +    maxphysaddr = ((hwaddr)1 << X86_CPU(first_cpu)->phys_bits) - 1;
>>> +    maxusedaddr = x86_max_phys_addr(pcms, start, pci_hole64_size);
>>> +    if (maxphysaddr < maxusedaddr) {
>>> +        error_report("Address space limit 0x%"PRIx64" < 0x%"PRIx64
>>> +                     " phys-bits too low (%u) cannot avoid AMD HT range",
>>> +                     maxphysaddr, maxusedaddr, X86_CPU(first_cpu)->phys_bits);
>>> +        exit(EXIT_FAILURE);
>>> +    }
>>> +
>>> +
>>> +    x86ms->above_4g_mem_start = start;
>>> +}
>>> +
>>>  void pc_memory_init(PCMachineState *pcms,
>>>                      MemoryRegion *system_memory,
>>>                      MemoryRegion *rom_memory,
>>> @@ -823,6 +927,8 @@ void pc_memory_init(PCMachineState *pcms,
>>>  
>>>      linux_boot = (machine->kernel_filename != NULL);
>>>  
>>> +    x86_update_above_4g_mem_start(pcms, pci_hole64_size);
>>> +
>>>      /*
>>>      * Split single memory region and use aliases to address portions of it,
>>>       * done for backwards compatibility with older qemus.
>>> @@ -833,6 +939,11 @@ void pc_memory_init(PCMachineState *pcms,
>>>                               0, x86ms->below_4g_mem_size);
>>>      memory_region_add_subregion(system_memory, 0, ram_below_4g);
>>>      e820_add_entry(0, x86ms->below_4g_mem_size, E820_RAM);
>>> +
>>> +    if (x86ms->above_4g_mem_start == AMD_ABOVE_1TB_START) {
>>> +        e820_add_entry(AMD_HT_START, AMD_HT_SIZE, E820_RESERVED);
>>> +    }
>>
>>
>> Causes a warning (and so a build failure) on 32 bit mingw:
>>
>> ../qemu/hw/i386/pc.c: In function 'pc_memory_init':
>> ../qemu/hw/i386/pc.c:939:35: error: comparison is always false due to limited range of data type [-Werror=type-limits]
>>   939 |     if (x86ms->above_4g_mem_start == AMD_ABOVE_1TB_START) {
>>       |                                   ^~
>> cc1: all warnings being treated as errors
>>
>>
>> Looking at the code, how is it supposed to work on 32 bit?
>> It's ok if it does not work but I'd like a graceful failure
>> not a silent corruption.
>>
> It didn't work on 32-bit qemu binaries -- my apologies for the oversight.
> 
> This diff below fixes the 2 warnings you got.
> The rest of the added code uses hwaddr which is defined
> as a uint64_t already (...)
> 

Towards your second question: I had fixed a couple of issues in v2->v3 with
respect to 32-bit boundary calculations, but I am not sure 32-bit binaries
are supposed to work with more than 4G (or even less). First, we are limited
by TARGET_PHYS_ADDR_SPACE_BITS being 36, and the address space check in the
code above bails out if phys-bits aren't big enough before it attempts to
change anything related to the AMD above-4g start, but that is target-defined
and comes much later. Even before that we get 'ram size too large' after
casting the memory size to a ram_addr_t, at least when trying an x86_64
target compiled as a 32-bit binary. The functions above end up not changing
above_4g_mem_start at all, and hence the test -- after fixing the warnings --
will always be false.
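For what it's worth, that class of warning is easy to reproduce in isolation.
Assuming the left-hand side of the comparison ends up 32 bits wide on that
host (which is what the diagnostic implies), something like the following
trips the diagnostic when built with gcc -Wtype-limits (or -Wextra), while a
uint64_t left-hand side does not. This is an illustration only, not the fix
used in the series:

#include <stdint.h>
#include <stdio.h>

#define ABOVE_1TB_START 0x10000000000ULL

int main(void)
{
    uint32_t narrow_start = 0;  /* stands in for whatever 32-bit type is involved */
    uint64_t wide_start = 0;    /* hwaddr is always uint64_t */

    if (narrow_start == ABOVE_1TB_START) {  /* "comparison is always false ..." */
        puts("unreachable");
    }
    if (wide_start == ABOVE_1TB_START) {    /* no warning */
        puts("also not reached here, but the compiler is happy");
    }
    return 0;
}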


