From: Jonathan Cameron
Subject: Re: [PATCH v2 1/3] qom: new object to associate device to numa node
Date: Thu, 12 Oct 2023 09:59:54 +0100

On Wed, 11 Oct 2023 17:37:11 +0000
Vikram Sethi <vsethi@nvidia.com> wrote:

> Hi Jonathan,
> 
> > -----Original Message-----
> > From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
> > Sent: Monday, October 9, 2023 7:27 AM
> > To: Ankit Agrawal <ankita@nvidia.com>
> > Cc: Jason Gunthorpe <jgg@nvidia.com>; alex.williamson@redhat.com;
> > clg@redhat.com; shannon.zhaosl@gmail.com; peter.maydell@linaro.org;
> > ani@anisinha.ca; berrange@redhat.com; eduardo@habkost.net;
> > imammedo@redhat.com; mst@redhat.com; eblake@redhat.com;
> > armbru@redhat.com; david@redhat.com; gshan@redhat.com; Aniket
> > Agashe <aniketa@nvidia.com>; Neo Jia <cjia@nvidia.com>; Kirti Wankhede
> > <kwankhede@nvidia.com>; Tarun Gupta (SW-GPU) <targupta@nvidia.com>;
> > Vikram Sethi <vsethi@nvidia.com>; Andy Currid <acurrid@nvidia.com>;
> > Dheeraj Nigam <dnigam@nvidia.com>; Uday Dhoke <udhoke@nvidia.com>;
> > qemu-arm@nongnu.org; qemu-devel@nongnu.org; Dave Jiang
> > <dave.jiang@intel.com>
> > Subject: Re: [PATCH v2 1/3] qom: new object to associate device to numa
> > node
> > 
> > 
> > On Sun, 8 Oct 2023 01:47:38 +0530
> > <ankita@nvidia.com> wrote:
> >   
> > > From: Ankit Agrawal <ankita@nvidia.com>
> > >
> > > The CPU cache coherent device memory can be added as NUMA nodes
> > > distinct from the system memory nodes. These nodes are associated with
> > > the device, and QEMU needs a way to maintain this link.
> > 
> > Hi Ankit,
> > 
> > I'm not sure I'm convinced of the approach to creating nodes for memory
> > usage (or whether that works in Linux on all NUMA ACPI archs), but I am
> > keen to see Generic Initiator support in QEMU. I'd also like to see it
> > done in a way that naturally extends to Generic Ports, which are very
> > similar (but don't hang memory off them! :) Dave Jiang posted a PoC a
> > while back for generic ports.
> > https://lore.kernel.org/qemu-devel/168185633821.899932.322047053764766056.stgit@djiang5-mobl3/
> > 
> > My concern with this approach is that it relies on a side effect of a
> > Linux implementation detail: that the infrastructure to bring up coherent
> > memory is all present even for a GI-only node (if it is, which I can't
> > recall). I'm also fairly sure we never tidied up the detail of going from
> > the GI to the device in Linux (because it's harder than a _PXM entry for
> > the device). It requires stashing a better description than the BDF
> > before potentially doing re-enumeration, so that we can rebuild the
> > association afterwards.
> >   
> 
> I'm not sure I understood the concern. Are you suggesting that the ACPI
> specification somehow prohibits memory being associated with a GI node in
> the same PXM? IIRC the spec doesn't mandate whether a GI is memory-less or
> has memory. It certainly seems perfectly normal for an accelerator with
> memory to have a GI, and for that memory to be associated with the same
> PXM.

Indeed it's reasonable that a GI would have associated memory, but if it's
"normal memory" (i.e. coherent, not device-private memory accessed via a
PCI BAR, etc.) then the expectation is that the memory appears in SRAT as a
memory entry. Which brings us back to the original question of whether
0-sized memory nodes are fine.
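
For concreteness, a sketch of the kind of invocation this series seems to
be aiming at, associating a VFIO device with a memory-less node (the object
and property names here, acpi-generic-initiator / pci-dev / node, are
illustrative assumptions and may not match this series exactly):

  qemu-system-aarch64 -machine virt -smp 4 -m 4G \
    -object memory-backend-ram,id=m0,size=4G \
    -numa node,nodeid=0,cpus=0-3,memdev=m0 \
    -numa node,nodeid=1 \
    -device vfio-pci,host=0009:01:00.0,id=dev0 \
    -object acpi-generic-initiator,id=gi0,pci-dev=dev0,node=1

Node 1 there is exactly the "0 sized memory node" in question: it would get
a GI affinity entry in SRAT pointing at dev0, but no memory affinity entry
until the coherent memory is brought up.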

> So what about this patch is using a Linux implementation detail? Even if
> Linux didn't currently support that use case, it is something that would
> have been reasonable to add IMO. What am I missing?

Linux is careful to only bring up the infrastructure for specific types of
proximity node. It works its way through SRAT and sets the appropriate
bitmap bits recording which combination of PXM entry types (CPU, Memory,
GI, etc.) a given node has.
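
Schematically, that walk looks something like the following (a simplified,
self-contained sketch; the real logic lives in drivers/acpi/numa/srat.c and
the names below are invented for illustration):

#include <stdio.h>

#define MAX_PXM 64

enum srat_entry_type { SRAT_CPU, SRAT_MEMORY, SRAT_GENERIC_INITIATOR };

struct srat_entry {
    enum srat_entry_type type;
    unsigned int pxm;           /* proximity domain this entry belongs to */
};

/* One bit per entry type, accumulated per proximity domain during the walk. */
enum { HAS_CPU = 1 << 0, HAS_MEMORY = 1 << 1, HAS_GI = 1 << 2 };
static unsigned int pxm_flags[MAX_PXM];

static void classify(const struct srat_entry *e)
{
    switch (e->type) {
    case SRAT_CPU:               pxm_flags[e->pxm] |= HAS_CPU;    break;
    case SRAT_MEMORY:            pxm_flags[e->pxm] |= HAS_MEMORY; break;
    case SRAT_GENERIC_INITIATOR: pxm_flags[e->pxm] |= HAS_GI;     break;
    }
}

int main(void)
{
    /* A normal CPU+Memory node (pxm 0) next to a GI-only node (pxm 1). */
    const struct srat_entry table[] = {
        { SRAT_CPU, 0 }, { SRAT_MEMORY, 0 }, { SRAT_GENERIC_INITIATOR, 1 },
    };

    for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        classify(&table[i]);

    for (unsigned int pxm = 0; pxm < 2; pxm++)
        printf("pxm %u: cpu=%d mem=%d gi=%d\n", pxm,
               !!(pxm_flags[pxm] & HAS_CPU),
               !!(pxm_flags[pxm] & HAS_MEMORY),
               !!(pxm_flags[pxm] & HAS_GI));
    return 0;
}

The infrastructure brought up afterwards keys off these per-node flags,
which is why a GI-only node may be missing pieces that a CPU or Memory
node gets.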

After that walk is done, it brings up various infrastructure. What I can't
remember (you'll need to experiment) is whether anything you would need is
left un-initialized for a non-Memory node. It might be fine today, but that
doesn't mean it will remain fine. Maybe we just need to make sure the
documentation / comments in Linux cover this use case. You are on your own
for what other OSes decided is valid here, as the specification does not
mention this AFAIK. If it does, then add a reference.
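
One quick way to run that experiment in a booted guest, using the standard
sysfs NUMA layout (whether the GI-only node shows up sensibly here is
exactly the open question):

  cat /sys/devices/system/node/possible
  cat /sys/devices/system/node/online
  cat /sys/devices/system/node/has_memory
  cat /sys/devices/system/node/has_cpu
  ls /sys/devices/system/node/node1/   # compare against a normal node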

There is a non-trivial (potential) cost to enabling facilities on NUMA
nodes that will never make use of them: a bunch of longer searches etc.
when looking for memory. For GIs we enable pretty much everything a CPU
node uses. That was controversial, though only well after support was
already in; the controversy being that it added costs to paths that didn't
care about GIs.

Basically it boils down to this: relying on unexpected corners of
specifications may prove fragile.

For one thing, I'm doubtful that the NUMA description the kernel exposes
(derived from a subset of HMAT) will deal with this case. I've not tried it
though, so you may be lucky.

Jonathan