qemu-devel

Re: [PATCH v3 5/8] acpi/gpex: Append pxb devs in ascending order


From: Igor Mammedov
Subject: Re: [PATCH v3 5/8] acpi/gpex: Append pxb devs in ascending order
Date: Tue, 5 Jan 2021 01:21:36 +0100

On Wed, 30 Dec 2020 16:17:14 -0500
"Michael S. Tsirkin" <mst@redhat.com> wrote:

> On Tue, Dec 29, 2020 at 02:47:35PM +0100, Igor Mammedov wrote:
> > On Wed, 23 Dec 2020 17:08:33 +0800
> > Jiahui Cen <cenjiahui@huawei.com> wrote:
> >   
> > > The overlap check of an IO resource window fails when the Linux kernel
> > > registers an IO resource [b, c) earlier than another resource [a, b).
> > > Though this incorrect check could be fixed by [1], it would be better to
> > > append pxb devs to the DSDT table in ascending order.
> > > 
> > > [1]: https://lore.kernel.org/lkml/20201218062335.5320-1-cenjiahui@huawei.com/
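
(For illustration of the kernel bug being described: a sketch, not the
actual kernel check. With half-open windows stored as [start, end), an
inclusive endpoint comparison reports a false conflict for adjacent
windows.)

    /* Sketch of the failure mode, not the kernel's real check;
     * windows are half-open: [start, end).
     */
    static bool naive_overlap(uint64_t s1, uint64_t e1,
                              uint64_t s2, uint64_t e2)
    {
        return s1 <= e2 && s2 <= e1;   /* treats ends as inclusive */
    }

    /* naive_overlap(b, c, a, b) returns true although [b, c) and
     * [a, b) are disjoint, so registering [b, c) first makes the
     * later [a, b) registration fail.
     */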
> > 
> > considering there is an acceptable fix for the kernel, I'd rather avoid
> > workarounds on the QEMU side. So I suggest dropping this patch.  
> 
> Well, there's something to be said for a defined order of things.
> And the patch is from Dec 2020; it will take ages for guests to be fixed,
> and changing PCI core on stable kernels is risky and needs
> a ton of testing, not done easily ...
> Which guests are affected by the bug?
it's a workaround for a trivial bug in a niche configuration,
for a new QEMU feature that never worked on the arm/virt machine.
Downstreams that think it is important enough to support
can backport and test the patch, thus helping stable trees merge
it sooner.


> There are also some issues with the patch see below.
> 
> > it should also reduce the noise in [8/8] that masks other changes.
> >   
> > > Signed-off-by: Jiahui Cen <cenjiahui@huawei.com>
> > > ---
> > >  hw/pci-host/gpex-acpi.c | 11 +++++++++--
> > >  1 file changed, 9 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/hw/pci-host/gpex-acpi.c b/hw/pci-host/gpex-acpi.c
> > > index 4bf1e94309..95a7a0f12b 100644
> > > --- a/hw/pci-host/gpex-acpi.c
> > > +++ b/hw/pci-host/gpex-acpi.c
> > > @@ -141,7 +141,7 @@ static void acpi_dsdt_add_pci_osc(Aml *dev)
> > >  void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig *cfg)
> > >  {
> > >      int nr_pcie_buses = cfg->ecam.size / PCIE_MMCFG_SIZE_MIN;
> > > -    Aml *method, *crs, *dev, *rbuf;
> > > +    Aml *method, *crs, *dev, *rbuf, *pxb_devs[nr_pcie_buses];  
> 
> a dynamically sized array on the stack poses security issues
> 
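
A minimal sketch of the heap-allocated alternative, assuming QEMU's usual
GLib helpers g_new0()/g_free(); variable names follow the patch:

    Aml **pxb_devs = g_new0(Aml *, nr_pcie_buses);  /* zero-initialized */

    /* ... record each pxb dev at pxb_devs[bus_num] while walking
     * the child buses, as the patch does ...
     */

    for (i = 0; i < nr_pcie_buses; i++) {
        if (pxb_devs[i]) {
            aml_append(scope, pxb_devs[i]);
        }
    }
    g_free(pxb_devs);

(g_new0() also makes the explicit memset() unnecessary.)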
> > >      PCIBus *bus = cfg->bus;
> > >      CrsRangeSet crs_range_set;
> > >      CrsRangeEntry *entry;
> > > @@ -149,6 +149,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig *cfg)
> > >  
> > >      /* start to construct the tables for pxb */
> > >      crs_range_set_init(&crs_range_set);
> > > +    memset(pxb_devs, 0, sizeof(pxb_devs));
> > >      if (bus) {
> > >          QLIST_FOREACH(bus, &bus->child, sibling) {
> > >              uint8_t bus_num = pci_bus_num(bus);
> > > @@ -190,7 +191,7 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig *cfg)
> > >  
> > >              acpi_dsdt_add_pci_osc(dev);
> > >  
> > > -            aml_append(scope, dev);
> > > +            pxb_devs[bus_num] = dev;  
> 
> If bus numbers intersect, this will overwrite the old one.
> I'd rather not worry about it; just have an array
> of structs with bus numbers and sort it.
> 
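
A rough sketch of the array-of-structs approach suggested above; the
struct and helper names are illustrative, not from the patch:

    typedef struct PXBDevEntry {
        uint8_t bus_num;   /* bus number of the pxb root */
        Aml *dev;          /* DSDT device node built for it */
    } PXBDevEntry;

    static int pxb_entry_cmp(const void *a, const void *b)
    {
        const PXBDevEntry *ea = a, *eb = b;

        return ea->bus_num - eb->bus_num;   /* ascending bus number */
    }

    /* collect one entry per pxb root while walking the child buses,
     * then emit them in a defined order:
     */
    qsort(entries, n_entries, sizeof(entries[0]), pxb_entry_cmp);
    for (i = 0; i < n_entries; i++) {
        aml_append(scope, entries[i].dev);
    }

With separate entries, intersecting bus numbers no longer overwrite each
other; both entries survive and the sort keeps a defined order.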
> 
> > >          }
> > >      }
> > >  
> > > @@ -278,5 +279,11 @@ void acpi_dsdt_add_gpex(Aml *scope, struct GPEXConfig *cfg)
> > >      aml_append(dev, dev_res0);
> > >      aml_append(scope, dev);
> > >  
> > > +    for (i = 0; i < ARRAY_SIZE(pxb_devs); i++) {
> > > +        if (pxb_devs[i]) {
> > > +            aml_append(scope, pxb_devs[i]);
> > > +        }
> > > +    }  
> 
> 
> So this sorts them by bus number, not by IO address.
> It probably happens to help since the BIOS numbers them in the same order ...
> Is there a way to address this more robustly in case
> the BIOS changes? E.g. I see the bug is only in PIO, so maybe sort by that address?
> 
> Also please add a code comment explaining why we are doing all this,
> with a link to the patch, which guests are affected, etc.
> 
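
If sorting by the PIO window turned out to be the more robust key, only
the comparator would change. A hypothetical variant, assuming each entry
also records the base of its root's IO window (pio_base is an invented
field, not in the patch):

    static int pxb_entry_cmp_pio(const void *a, const void *b)
    {
        const PXBDevEntry *ea = a, *eb = b;

        /* order by the base of each root's PIO window */
        return ea->pio_base < eb->pio_base ? -1
             : ea->pio_base > eb->pio_base;
    }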
> > > +
> > >      crs_range_set_free(&crs_range_set);
> > >  }  
> 
> 



