Re: [PATCH] hw/riscv: split RAM into low and high memory


From: Anup Patel
Subject: Re: [PATCH] hw/riscv: split RAM into low and high memory
Date: Thu, 7 Sep 2023 21:27:10 +0530

On Thu, Sep 7, 2023 at 8:48 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Thu, Aug 3, 2023 at 10:47 AM Wu, Fei <fei2.wu@intel.com> wrote:
> >
> > On 8/1/2023 6:46 AM, Daniel Henrique Barboza wrote:
> > >
> > >
> > > On 7/30/23 22:53, Fei Wu wrote:
> > >> The riscv virt platform's memory started at 0x80000000 and
> > >> straddled the 4GiB boundary. Curiously enough, this choice of
> > >> memory layout will prevent launching a VM with a bit more than
> > >> 2000MiB of RAM and PCIe pass-thru on an x86 host, due to the
> > >> identity mapping requirements for the MSI doorbell on x86, and
> > >> these (APIC/IOAPIC) live right below 4GiB.
> > >>
> > >> So just split the RAM range into two portions:
> > >> - 1 GiB range from 0x80000000 to 0xc0000000.
> > >> - The remainder at 0x100000000
> > >>
> > >> ...leaving a hole between the ranges.
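(Purely for illustration, not the actual hunk from the patch: a split like
the one described above would roughly correspond to two memory map entries
in hw/riscv/virt.c. The VIRT_DRAM_HIGH name is the one used in the diff
further down in this thread; the exact values are assumptions.)

    /* sketch only: split DRAM map, assuming the layout described above */
    static const MemMapEntry virt_memmap[] = {
        /* ... other entries ... */
        [VIRT_DRAM]      = { 0x80000000,  0x40000000 }, /* 1 GiB low RAM */
        [VIRT_DRAM_HIGH] = { 0x100000000,        0x0 }, /* remainder, sized from -m at runtime */
    };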
> > >
> > > I am afraid this breaks some existing distro setups, like Ubuntu.
> > > After this patch, the following setup stopped working:
> > >
> > > ~/work/qemu/build/qemu-system-riscv64 \
> > >     -machine virt -nographic -m 8G -smp 8 \
> > >     -kernel ./uboot-ubuntu/usr/lib/u-boot/qemu-riscv64_smode/uboot.elf \
> > >     -drive file=snapshot.img,format=qcow2,if=virtio \
> > >     -netdev bridge,id=bridge1,br=virbr0 -device virtio-net-pci,netdev=bridge1
> > >
> > >
> > > This is basically a guest created via the official Canonical tutorial:
> > >
> > > https://wiki.ubuntu.com/RISC-V/QEMU
> > >
> > > The error being thrown:
> > >
> > > =================
> > >
> > > Boot HART ID              : 4
> > > Boot HART Domain          : root
> > > Boot HART Priv Version    : v1.12
> > > Boot HART Base ISA        : rv64imafdch
> > > Boot HART ISA Extensions  : time,sstc
> > > Boot HART PMP Count       : 16
> > > Boot HART PMP Granularity : 4
> > > Boot HART PMP Address Bits: 54
> > > Boot HART MHPM Count      : 16
> > > Boot HART MIDELEG         : 0x0000000000001666
> > > Boot HART MEDELEG         : 0x0000000000f0b509
> > >
> > >
> > > U-Boot 2022.07+dfsg-1ubuntu4.2 (Nov 24 2022 - 18:47:41 +0000)
> > >
> > > CPU:
> > > rv64imafdch_zicbom_zicboz_zicsr_zifencei_zihintpause_zawrs_zfa_zca_zcd_zba_zbb_zbc_zbs_sstc_svadu
> > > Model: riscv-virtio,qemu
> > > DRAM:  Unhandled exception: Store/AMO access fault
> > > EPC: 00000000802018b8 RA: 00000000802126a0 TVAL: 00000000ff733f90
> > >
> > > Code: b823 06b2 bc23 06b2 b023 08b2 b423 08b2 (b823 08b2)
> > >
> > >
> > > resetting ...
> > > System reset not supported on this platform
> > > ### ERROR ### Please RESET the board ###
> > > QEMU 8.0.90 monitor - type 'help' for more infor
> > > =================
> > >
> > >
> > > Based on the change made I can make an educated guess at what is going
> > > wrong. We have another board with a memory topology similar to the one
> > > you're creating here: the Microchip PolarFire (microchip_pfsoc.c). We had
> > > some problems with that board while trying to consolidate the boot
> > > process of all boards in hw/riscv/boot.c, because of its non-contiguous
> > > RAM banks. The full story can be read in the commit message of
> > > 4b402886ac89 ("hw/riscv: change riscv_compute_fdt_addr() semantics"),
> > > but the short version can be seen in riscv_compute_fdt_addr() from
> > > boot.c:
> > > - if ram_start is less than 3072 MiB, the FDT will be put at the lower
> > >   of 3072 MiB and the end of that RAM bank;
> > >
> > > - if ram_start is higher than 3072 MiB, the FDT will be put at the end
> > >   of that RAM bank (a rough sketch of this rule follows below).
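A rough sketch of that rule (paraphrasing riscv_compute_fdt_addr() from
hw/riscv/boot.c, not quoting it verbatim; variable names are mine):

    /* sketch: FDT placement rule used by riscv_compute_fdt_addr() */
    hwaddr dram_end = dram_base + dram_size;
    hwaddr limit = (dram_base < 3072 * MiB) ? MIN(dram_end, 3072 * MiB)
                                            : dram_end;
    /* the FDT goes just below 'limit', aligned down */
    fdt_addr = QEMU_ALIGN_DOWN(limit - fdt_size, 2 * MiB);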
> > >
> > > So, after this patch, since riscv_compute_fdt_addr() is now being used
> > > with the lower RAM bank, the FDT is being put at LOW_MEM - fdt_size for
> > > any setup that has more than 1 GiB of RAM, and this breaks assumptions
> > > made by U-Boot, Ubuntu and possibly others that end up trying to
> > > retrieve the FDT from the gap you created between low and high memory
> > > in this patch.
> > >
> > > In fact, this same Ubuntu guest I mentioned above will boot if I give
> > > it only 1 GiB of RAM (-m 1Gb). If I try -m 1.1Gb I reproduce this
> > > error. This supports the guess I'm making here: Ubuntu is trying to
> > > fetch something (probably the FDT) from the gap between the memory
> > > areas.
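For what it's worth, under the split described in the patch (low RAM ending
at 0xc0000000, high RAM starting at 0x100000000), the faulting address in
the U-Boot crash above, TVAL 0x00000000ff733f90, falls inside exactly that
0xc0000000-0x100000000 hole, which seems consistent with this guess.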
> > >
> > > This change on top of this patch doesn't work either:
> > >
> > > $ git diff
> > > diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> > > index 8fbdc7220c..dfff48d849 100644
> > > --- a/hw/riscv/virt.c
> > > +++ b/hw/riscv/virt.c
> > > @@ -1335,9 +1335,16 @@ static void virt_machine_done(Notifier *notifier, void *data)
> > >                                           kernel_start_addr, true, NULL);
> > >      }
> > >
> > > -    fdt_load_addr = riscv_compute_fdt_addr(memmap[VIRT_DRAM].base,
> > > +    if (machine->ram_size < memmap[VIRT_DRAM].size) {
> > > +        fdt_load_addr = riscv_compute_fdt_addr(memmap[VIRT_DRAM].base,
> > >                                             memmap[VIRT_DRAM].size,
> > >                                             machine);
> > > +    } else {
> > > +        fdt_load_addr = riscv_compute_fdt_addr(memmap[VIRT_DRAM_HIGH].base,
> > > +                                           memmap[VIRT_DRAM_HIGH].size,
> > > +                                           machine);
> > > +    }
> > > +
> > >
> > > This would put the FDT at the end of the high RAM for guests with more
> > > than 1 GiB of RAM. This change in fact makes the situation even worse,
> > > breaking setups that were still working with this patch applied.
> > >
> > > There's a chance that reducing the gap between the RAM banks could make
> > > Ubuntu work reliably again, but now I'm getting cold feet about this
> > > change.
> > >
> > >
> > > I think we'll need some kernel/OpenSBI folks to weigh in here to see if
> > > there's a failsafe memory setup that won't break the distros out there
> > > and will still allow your pass-through to work.
> > >
> > Hi Daniel,
> >
> > Do you know who we should talk to? I think it's not uncommon to have
> > separate multi-range memory; the stack should support this configuration.
>
> @Palmer Dabbelt @Anup Patel any ideas?
>

Sparse memory and multiple memory banks are a reality and seen
on many platforms.

Based on my experience:
* Linux RISC-V already supports SPARSEMEM and multiple
  RAM banks.
* OpenSBI also handles multiple RAM banks nicely.
* U-Boot requires some Kconfig options to be set, but other than
  that even U-Boot handles it fine.

Regards,
Anup


