qemu-devel
Re: virtio-pci in qemu-system-arm is broken in 8.2


From: Peter Maydell
Subject: Re: virtio-pci in qemu-system-arm is broken in 8.2
Date: Thu, 4 Jan 2024 14:31:40 +0000

On Thu, 4 Jan 2024 at 14:09, Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Thu, 21 Dec 2023 at 22:00, Alex Bennée <alex.bennee@linaro.org> wrote:
> > modified   tests/avocado/tuxrun_baselines.py
> > @@ -168,7 +168,7 @@ def run_tuxtest_tests(self, haltmsg):
> >      def common_tuxrun(self,
> >                        csums=None,
> >                        dt=None,
> > -                      drive="virtio-blk-device",
> > +                      drive="virtio-blk-pci",
> >                        haltmsg="reboot: System halted",
> >                        console_index=0):
> >          """
> >
> > And then we get:
> >
> >  (1/3) ./tests/avocado/tuxrun_baselines.py:TuxRunBaselineTest.test_armv5: 
> > PASS (5.64 s)
> >  (2/3) ./tests/avocado/tuxrun_baselines.py:TuxRunBaselineTest.test_armv7: 
> > FAIL: Failure message found in console: "Kernel panic - not syncing". 
> > Expected: "Welcome to TuxTest" (1.21 s)
> >  (3/3) ./tests/avocado/tuxrun_baselines.py:TuxRunBaselineTest.test_armv7be: 
> > FAIL: Failure message found in console: "Kernel panic - not syncing". 
> > Expected: "Welcome to TuxTest" (1.24 s)
> > RESULTS    : PASS 1 | ERROR 0 | FAIL 2 | SKIP 0 | WARN 0 | INTERRUPT 0 | 
> > CANCEL 0
> > JOB TIME   : 8.50 s
> >
> > So I guess this somehow hits ARMv7 only. Maybe something about I/O
> > access?
> >
> >   2023-12-21 18:21:29,424 __init__         L0153 DEBUG| pl061_gpio 
> > 9030000.pl061: PL061 GPIO chip registered
> >   2023-12-21 18:21:29,427 __init__         L0153 DEBUG| pci-host-generic 
> > 4010000000.pcie: host bridge /pcie@10000000 ranges:
> >   2023-12-21 18:21:29,428 __init__         L0153 DEBUG| pci-host-generic 
> > 4010000000.pcie:       IO 0x003eff0000..0x003effffff -> 0x0000000000
> >   2023-12-21 18:21:29,428 __init__         L0153 DEBUG| pci-host-generic 
> > 4010000000.pcie:      MEM 0x0010000000..0x003efeffff -> 0x0010000000
> >   2023-12-21 18:21:29,428 __init__         L0153 DEBUG| pci-host-generic 
> > 4010000000.pcie:      MEM 0x8000000000..0xffffffffff -> 0x8000000000
> >   2023-12-21 18:21:29,429 __init__         L0153 DEBUG| pci-host-generic 
> > 4010000000.pcie: can't claim ECAM area [mem 0x10000000-0x1fffffff]: address 
> > conflict with pcie@10000000 [mem 0x10000000-0x3efeffff]
> >   2023-12-21 18:21:29,429 __init__         L0153 DEBUG| pci-host-generic: 
> > probe of 4010000000.pcie failed with error -16
>
> I suspect that this is not the same issue.
> You still see this failure even with commit 4446a22b96d1be
> reverted; and if you run QEMU with "-machine virt,highmem=off"
> which disables the high memory regions on QEMU's end, the test
> proceeds to a login prompt.
>
> Either the kernel incorrectly thinks the regions overlap
> because it's misreading the dtb, or else we had a regression
> in the virt board with how we set the base address for the
> upper PCI windows.

Looks like it's the kernel getting this wrong. In the DTB
we say:
reg = <0x40 0x10000000 0x00 0x10000000>;
meaning the ECAM region is at 0x40_1000_0000, size 0x1000_0000.
But the kernel thinks:
ECAM area [mem 0x10000000-0x1fffffff]
so it has clearly truncated the address at some point.
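
(Illustration only, not the kernel's actual code: a small C sketch
of that arithmetic, assuming the usual two-cell encoding of a 64-bit
address in the DT. Combining the cells gives 0x40_1000_0000; if the
value passes through a 32-bit type anywhere, the high cell is lost
and you get exactly the range the kernel printed.)

  #include <inttypes.h>
  #include <stdio.h>

  int main(void)
  {
      /* The two 32-bit address cells from
         reg = <0x40 0x10000000 0x00 0x10000000>; */
      uint32_t hi = 0x40, lo = 0x10000000;
      uint64_t ecam_base = ((uint64_t)hi << 32) | lo;  /* 0x40_1000_0000 */
      uint64_t ecam_size = 0x10000000;

      /* Truncating to 32 bits drops the high cell and reproduces
         the "ECAM area [mem 0x10000000-0x1fffffff]" the kernel sees. */
      uint32_t truncated = (uint32_t)ecam_base;

      printf("full base : 0x%" PRIx64 "\n", ecam_base);
      printf("truncated : [mem 0x%08" PRIx32 "-0x%" PRIx64 "]\n",
             truncated, (uint64_t)truncated + ecam_size - 1);
      return 0;
  }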

There was a similar bug in non-LPAE kernels ages ago, but it was
already fixed in this kernel version, and in any case this kernel
seems to have LPAE enabled. So it must be a separate bug.
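
(Again only a sketch, with a made-up EXAMPLE_LPAE switch rather than
the real CONFIG_ARM_LPAE machinery: the point is that LPAE gives the
32-bit ARM kernel a 64-bit physical address type, so an address above
4GB fits; without it a 32-bit type would truncate exactly as above.)

  #include <inttypes.h>
  #include <stdio.h>

  /* Hypothetical stand-in for the kernel's phys_addr_t width. */
  #define EXAMPLE_LPAE 1

  #if EXAMPLE_LPAE
  typedef uint64_t example_phys_addr_t;   /* LPAE: 64-bit phys addresses */
  #else
  typedef uint32_t example_phys_addr_t;   /* no LPAE: 32-bit, truncates */
  #endif

  int main(void)
  {
      example_phys_addr_t base = (example_phys_addr_t)0x4010000000ULL;
      printf("stored ECAM base: 0x%" PRIx64 "\n", (uint64_t)base);
      return 0;
  }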

thanks
-- PMM


