qemu-devel
Re: [PULL 1/2] amd_iommu: Fix pte_override_page_mask()


From: Peter Maydell
Subject: Re: [PULL 1/2] amd_iommu: Fix pte_override_page_mask()
Date: Fri, 23 Apr 2021 17:11:33 +0100

On Fri, 23 Apr 2021 at 14:35, Jean-Philippe Brucker
<jean-philippe@linaro.org> wrote:
>
> On Fri, Apr 23, 2021 at 02:01:19PM +0100, Peter Maydell wrote:
> > On Thu, 22 Apr 2021 at 23:24, Michael S. Tsirkin <mst@redhat.com> wrote:
> > >
> > > From: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > >
> > > AMD IOMMU PTEs have a special mode allowing to specify an arbitrary page
> > > size. Quoting the AMD IOMMU specification: "When the Next Level bits [of
> > > a pte] are 7h, the size of the page is determined by the first zero bit
> > > in the page address, starting from bit 12."
> > >
> > > So if the lowest bit of the page address is 0, the page is 8kB. If the
> > > lowest bits are 011, the page is 32kB. Currently pte_override_page_mask()
> > > doesn't compute the right value for this page size and amdvi_translate()
> > > can return the wrong guest-physical address. With a Linux guest, DMA
> > > from SATA devices accesses the wrong memory and causes probe failure:
> > >
> > > qemu-system-x86_64 ... -device amd-iommu \
> > >                 -drive id=hd1,file=foo.bin,if=none \
> > >                 -device ahci,id=ahci -device ide-hd,drive=hd1,bus=ahci.0
> > > [    6.613093] ata1.00: qc timeout (cmd 0xec)
> > > [    6.615062] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
> > >
> > > Fix the page mask.
> > >
> > > Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> > > Message-Id: <20210421084007.1190546-1-jean-philippe@linaro.org>
> > > Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >
> > Jean-Philippe, do you know if this is a regression since 5.2?
>
> I don't think so, I can reproduce it with v5.2.0.

OK, thanks; I think I favour not putting this into rc5, then.

-- PMM


