qemu-riscv

Re: [PATCH] hw/riscv: split RAM into low and high memory


From: Eric Auger
Subject: Re: [PATCH] hw/riscv: split RAM into low and high memory
Date: Thu, 7 Sep 2023 15:49:16 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.13.0

Hi,

On 9/7/23 12:04, Wu, Fei wrote:
> On 9/7/2023 5:10 PM, Eric Auger wrote:
>> Hi,
>>
>> On 9/7/23 09:16, Philippe Mathieu-Daudé wrote:
>>> Widening Cc to ARM/VFIO.
>>>
>>> On 4/8/23 11:15, Wu, Fei wrote:
>>>> On 8/3/2023 11:07 PM, Andrew Jones wrote:
>>>>> On Mon, Jul 31, 2023 at 09:53:17AM +0800, Fei Wu wrote:
>>>>>> riscv virt platform's memory started at 0x80000000 and
>>>>>> straddled the 4GiB boundary. Curiously enough, this choice
>>>>>> of memory layout prevents launching a VM with a bit more
>>>>>> than 2000MiB and PCIe pass-thru on an x86 host, due
>>>>>> to identity mapping requirements for the MSI doorbell on x86,
>>>>>> and these (APIC/IOAPIC) live right below 4GiB.
>>>>>>
>>>>>> So just split the RAM range into two portions:
>>>>>> - 1 GiB range from 0x80000000 to 0xc0000000.
>>>>>> - The remainder at 0x100000000
>>>>>>
>>>>>> ...leaving a hole between the ranges.
>>>>> Can you elaborate on the use case? Maybe provide details of the host
>>>>> system and the QEMU command line? I'm wondering why we didn't have
>>>>> any problems with the arm virt machine type. Has nobody tried this
>>>>> use case with that? Is the use case something valid for riscv, but
>>>>> not arm?
>>>>>
>>>> First, we have to enable PCIe pass-thru on the host, find the device
>>>> groups (e.g. the VGA card), and add their PCI IDs to the host kernel cmdline:
>>>>     vfio-pci.ids=10de:0f02,10de:0e08
>>>>
>>>> then start vm through qemu as follows:
>>>> $Q -machine virt -m 4G -smp 4 -nographic \
>>>>    -bios /usr/lib/riscv64-linux-gnu/opensbi/generic/fw_jump.elf \
>>>>    -kernel ./vmlinuz -initrd initrd.img -append "root=/dev/vda1 rw" \
>>>>    -drive file=ubuntu-22.04.1-preinstalled-server-riscv64+unmatched.img,if=virtio,format=raw \
>>>>    -device vfio-pci,host=01:00.0 -device vfio-pci,host=01:00.1 \
>>>>    -netdev user,id=vnet,hostfwd=:127.0.0.1:2223-:22 \
>>>>    -device virtio-net-pci,netdev=vnet
>>>>
>>>> Without this patch, qemu exits immediately instead of booting up.
>>>>
>>>> I just tried PCIe pass-thru on arm; it cannot handle 4G of memory either.
>>>> $Q -m 4G -smp 4 -cpu max -M virt -nographic \
>>>>    -pflash /usr/share/AAVMF/AAVMF_CODE.fd -pflash flash1.img \
>>>>    -drive if=none,file=ubuntu-22.04-server-cloudimg-arm64.img,id=hd0 \
>>>>    -device virtio-blk-device,drive=hd0 \
>>>>    -device vfio-pci,host=01:00.0 -device vfio-pci,host=01:00.1
>>>>
>>>> qemu-system-aarch64: -device vfio-pci,host=01:00.0: VFIO_MAP_DMA failed:
>>>> Invalid argument
>>>> qemu-system-aarch64: -device vfio-pci,host=01:00.0: vfio 0000:01:00.0:
>>>> failed to setup container for group 11: memory listener initialization
>>>> failed: Region mach-virt.ram: vfio_dma_map(0x55de3c2a97f0, 0x40000000,
>>>> 0x100000000, 0x7f8fcbe00000) = -22 (Invalid argument)
>> The collision between the x86 host MSI reserved region [0xfee00000,
>> 0xfeefffff] and the ARM guest RAM starting at 1GB has also always
>> existed. But now this collision is properly detected instead of being
>> silenced. People have not really complained about this so far. Since
>> the existing guest RAM layout cannot be changed, I am afraid there is
>> not much we can do.
>>
> Just as this patch does for riscv, an arm guest on x86 could adjust
> the guest RAM layout in the same way if necessary? This looks like a
> nice-to-have feature for arm, if there is still a requirement to run
> arm guests on x86.

Well, Peter was opposed to any change in the legacy RAM layout. At some
point on ARM VIRT we added some extra RAM, but we never changed the
existing layout. In practice, though, nobody has ever complained about
the lack of support for this use case (a TCG arm guest with a
host-assigned device on an x86 host).

Eric
>
> Thanks,
> Fei.
>
>> Eric
>>>> Thanks,
>>>> Fei.
>>>>
>>>>> Thanks,
>>>>> drew
>>>>



