qemu-devel

From: Alex Williamson
Subject: Re: [PATCH v2 3/3] qom: Link multiple numa nodes to device using a new object
Date: Tue, 17 Oct 2023 09:21:16 -0600

On Tue, 17 Oct 2023 14:00:54 +0000
Ankit Agrawal <ankita@nvidia.com> wrote:

> >>         -device 
> >>vfio-pci-nohotplug,host=0009:01:00.0,bus=pcie.0,addr=04.0,rombar=0,id=dev0 \
> >>         -object 
> >>nvidia-acpi-generic-initiator,id=gi0,device=dev0,numa-node-start=2,numa-node-count=8
> >>  
> >
> > Why didn't we just implement start and count in the base object (or a
> > list)? It seems like this gives the nvidia-acpi-generic-initiator two
> > different ways to set gi->node, either node= of the parent or
> > numa-node-start= here.  Once we expose the implicit node count in the
> > base object, I'm not sure of the purpose of this object.  I would have
> > thought it was for keying the build of the NVIDIA-specific _DSD, but
> > that's not implemented in this version.  
> 
> Agreed, allowing a list of nodes to be provided to the acpi-generic-initiator
> will remove the need for the nvidia-acpi-generic-initiator object. 
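
(For illustration, a list-based form of the earlier invocation might look
something like the below -- hypothetical syntax, since no nodelist=
property exists on acpi-generic-initiator today:

         -object acpi-generic-initiator,id=gi0,device=dev0,nodelist=2-9

i.e. the generic object would take the node set directly, rather than a
start/count pair on an NVIDIA-specific subclass.)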

And what happened to the _DSD?  Is it no longer needed?  Why?

> > I also don't see any programmatic means for management tools to know how
> > many nodes to create.  For example what happens if there's a MIGv2 that
> > supports 16 partitions by default and makes use of the same vfio-pci
> > variant driver?  Thanks,  
> 
> It is supposed to stay at 8 for all the G+H devices. Maybe this can be managed
> through proper documentation in the user manual?

I thought the intention here was that a management tool would
automatically configure the VM with these nodes and GI object in
support of the device.  Planning only for Grace-Hopper isn't looking
very far into the future, and software can't easily consult a user
manual.  This leads to a higher maintenance burden where the management
tool needs to recognize not only the driver, but also the device bound
to the driver, and must be updated as new devices are released.
The management tool will never automatically support new devices without
making an assumption about the node configuration.
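
For example, if the variant driver exposed the partition count in sysfs,
a management tool could read it rather than hard-coding 8.  Purely
hypothetical attribute name below; no such interface exists today:

        $ cat /sys/bus/pci/devices/0009:01:00.0/numa_node_count
        8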

Do we therefore need some programmatic means for the kernel driver to
expose the node configuration to userspace?  What interfaces would
libvirt like to see here?  Is there an opportunity here to begin
defining flavors or profiles for variant devices, like the types we
have for mdev devices, where the node configuration would be
encompassed in a device profile?  Thanks,

Alex
