From: Eric Auger
Subject: Re: [PATCH 2/3] hw/arm/virt: Honor highmem setting when computing highest_gpa
Date: Wed, 8 Sep 2021 09:16:28 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.10.1

Hi
On 9/7/21 2:58 PM, Peter Maydell wrote:
> On Sun, 22 Aug 2021 at 15:45, Marc Zyngier <maz@kernel.org> wrote:
>> Even when the VM is configured with highmem=off, the highest_gpa
>> field includes devices that are above the 4GiB limit, which is
>> what highmem=off is supposed to enforce. This leads to failures
>> in virt_kvm_type() on systems that have a crippled IPA range,
>> as the reported IPA space is larger than what it should be.
>>
>> Instead, honor the user-specified limit to only use the devices
>> at the lowest end of the spectrum.
>>
>> Note that this doesn't affect memory, which is still allowed to
>> go beyond 4GiB with highmem=on configurations.
>>
>> Cc: Andrew Jones <drjones@redhat.com>
>> Cc: Eric Auger <eric.auger@redhat.com>
>> Cc: Peter Maydell <peter.maydell@linaro.org>
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>  hw/arm/virt.c | 10 +++++++---
>>  1 file changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
>> index 81eda46b0b..bc189e30b8 100644
>> --- a/hw/arm/virt.c
>> +++ b/hw/arm/virt.c
>> @@ -1598,7 +1598,7 @@ static uint64_t virt_cpu_mp_affinity(VirtMachineState *vms, int idx)
>>  static void virt_set_memmap(VirtMachineState *vms)
>>  {
>>      MachineState *ms = MACHINE(vms);
>> -    hwaddr base, device_memory_base, device_memory_size;
>> +    hwaddr base, device_memory_base, device_memory_size, ceiling;
>>      int i;
>>
>>      vms->memmap = extended_memmap;
>> @@ -1625,7 +1625,7 @@ static void virt_set_memmap(VirtMachineState *vms)
>>      device_memory_size = ms->maxram_size - ms->ram_size + ms->ram_slots * GiB;
>>
>>      /* Base address of the high IO region */
>> -    base = device_memory_base + ROUND_UP(device_memory_size, GiB);
>> +    ceiling = base = device_memory_base + ROUND_UP(device_memory_size, GiB);
>>      if (base < device_memory_base) {
>>          error_report("maxmem/slots too huge");
>>          exit(EXIT_FAILURE);
>> @@ -1642,7 +1642,11 @@ static void virt_set_memmap(VirtMachineState *vms)
>>          vms->memmap[i].size = size;
>>          base += size;
>>      }
>> -    vms->highest_gpa = base - 1;
>> +    if (vms->highmem) {
>> +           /* If we have highmem, move the IPA limit to the top */
>> +           ceiling = base;
>> +    }
>> +    vms->highest_gpa = ceiling - 1;
> This doesn't look right to me. If highmem is false and the
> high IO region would be above the 4GB mark then we should not
> create the high IO region at all, surely? This code looks like
> it goes ahead and puts the high IO region above 4GB and then
> lies in the highest_gpa value about what the highest used GPA is.
>
> -- PMM
>
Doesn't the problem rather come from the "if maxram_size is < 255GiB we keep
the legacy memory map" rule, which sets base = vms->memmap[VIRT_MEM].base +
LEGACY_RAMLIMIT_BYTES and so lays out the high IO regions above that limit
anyway? Shouldn't we instead make that bump conditional on highmem=on?
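Something along those lines, say (completely untested sketch, reusing the
existing identifiers from virt_set_memmap()):

    /* Untested sketch: only bump the base of the high IO region up to the
     * legacy 256GiB limit when the high memory map is actually in use. */
    if (vms->highmem &&
        base < vms->memmap[VIRT_MEM].base + LEGACY_RAMLIMIT_BYTES) {
        base = vms->memmap[VIRT_MEM].base + LEGACY_RAMLIMIT_BYTES;
    }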

But, by the way, do we need to add the extended_memmap IO regions at all if
highmem=off?
If I am not wrong, VIRT_HIGH_PCIE_ECAM and VIRT_HIGH_PCIE_MMIO are only used
if highmem=on. In create_pcie(), base_mmio_high/size_mmio_high are only used
if vms->highmem, and we have ecam_id = VIRT_ECAM_ID(vms->highmem_ecam), with
vms->highmem_ecam &= vms->highmem && (!firmware_loaded || aarch64);

So, unless I am missing something, maybe we could skip the allocation of the
extended_memmap IO regions altogether if highmem=off?
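In code that could look roughly like this (untested, just to illustrate the
idea, keeping the existing loop from virt_set_memmap()):

    /* Untested sketch: skip laying out the extended_memmap (high IO)
     * regions when highmem=off, so they do not contribute to highest_gpa. */
    if (vms->highmem) {
        for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
            hwaddr size = extended_memmap[i].size;

            vms->memmap[i].base = base;
            vms->memmap[i].size = size;
            base += size;
        }
    }
    vms->highest_gpa = base - 1;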

And wouldn't it also be reasonable to limit the number of vcpus if
highmem=off, given that the second redistributor region
(VIRT_HIGH_GIC_REDIST2) lives in the extended map too?
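I am thinking of something like this in machvirt_init() (untested sketch,
based on the existing virt_max_cpus computation for GICv3):

    /* Untested sketch: do not count the second (high) redistributor region
     * towards the vcpu limit when highmem=off. */
    virt_max_cpus = vms->memmap[VIRT_GIC_REDIST].size / GICV3_REDIST_SIZE;
    if (vms->highmem) {
        virt_max_cpus += vms->memmap[VIRT_HIGH_GIC_REDIST2].size /
                         GICV3_REDIST_SIZE;
    }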

Thanks

Eric
