Re: [PATCH v1 15/22] Add iommufd configure option


From: Daniel P. Berrangé
Subject: Re: [PATCH v1 15/22] Add iommufd configure option
Date: Wed, 20 Sep 2023 19:15:53 +0100
User-agent: Mutt/2.2.9 (2022-11-12)

On Wed, Sep 20, 2023 at 12:01:42PM -0600, Alex Williamson wrote:
> On Wed, 20 Sep 2023 03:42:20 +0000
> "Duan, Zhenzhong" <zhenzhong.duan@intel.com> wrote:
> 
> > >-----Original Message-----
> > >From: Cédric Le Goater <clg@redhat.com>
> > >Sent: Wednesday, September 20, 2023 1:08 AM
> > >Subject: Re: [PATCH v1 15/22] Add iommufd configure option
> > >
> > >On 8/30/23 12:37, Zhenzhong Duan wrote:  
> > >> This adds "--enable-iommufd/--disable-iommufd" to enable or disable
> > >> iommufd support, enabled by default.  
> > >
> > >Why would someone want to disable support at compile time ? It might  
> > 
> > For those users who only want to support legacy container feature?
> > Let me know if you still prefer to drop this patch, I'm fine with that.
> > 
> > >have been useful for dev but now QEMU should self-adjust at runtime
> > >depending only on the host capabilities AFAIUI. Am I missing something ?  
> > 
> > IOMMUFD doesn't support all features of legacy container, so QEMU
> > doesn't self-adjust at runtime by checking if host supports IOMMUFD.
> > We need to specify it explicitly to use IOMMUFD as below:
> > 
> >     -object iommufd,id=iommufd0
> >     -device vfio-pci,host=0000:02:00.0,iommufd=iommufd0
> 
> There's an important point here that maybe we've let slip for too long.
> Laine had asked in an internal forum whether the switch to IOMMUFD was
> visible to the guest.  I replied that it wasn't, but this note about
> IOMMUFD vs container features jogged my memory that I think we still
> lack p2p support with IOMMUFD, ie. IOMMU mapping of device MMIO.  It
> seemed like there was something else too, but I don't recall without
> some research.
> 
> Ideally we'd have feature parity and libvirt could simply use the
> native IOMMUFD interface whenever both the kernel and QEMU support it.
> 
> Without that parity, when does libvirt decide to use IOMMUFD?
> 
> How would libvirt know if some future IOMMUFD does have parity?
> 
> Does the XML direct this through some new interpretation of the driver
> field? ex. "vfio-container" vs "vfio-iommufd" where "vfio" becomes an
> alias or priority preference.  Thanks,

Right now a host device would have


  <hostdev mode='subsystem' type='mdev' model='vfio-pci'>
   ...
  </hostdev>

where model could also accept 'vfio-ccw' / 'vfio-ap' on s390x IIUC.

If the use of IOMMUFD has guest ABI feature differences, then we
would need to treat this as a new device model in libvirt, ie add
a vfio-iommu-pci model. Does this iommufd work with vfio-ccw /
vfio-ap too? If so we'd need new models for those too in libvirt.
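To illustrate (a sketch only - vfio-iommu-pci is just the placeholder
name used above, and the final naming would be a libvirt decision),
the new model would surface in the XML along these lines:

  <hostdev mode='subsystem' type='mdev' model='vfio-iommu-pci'>
   ...
  </hostdev>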

The downside of this is that it means no application is going to
use iommufd mode without explicit coding done to make it aware of
the new model in libvirt.

If we /want/ apps to move over to the iommufd approach in a finite,
short timeframe, then IMHO achieving feature parity is critical, as
feature parity would let libvirt switch over to it automatically and
avoid the pain of updating any apps. This would be my preference, as
exposing the iommufd concept to apps feels wrong - ideally this is
an internal implementation detail. Again, we must have feature
parity for this to work though.
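
On the automatic switch-over: a plausible probe (a sketch only,
assuming the user-creatable object type keeps the 'iommufd' name used
in this series) would be for libvirt to check the host for /dev/iommu
and ask QEMU whether the backend object exists, e.g. via the existing
qom-list-types QMP command:

  # host side: IOMMUFD support shows up as a char device
  test -c /dev/iommu

  # QEMU side: does this binary know the iommufd backend object?
  { "execute": "qom-list-types",
    "arguments": { "implements": "user-creatable" } }
  # -> look for { "name": "iommufd", ... } in the returned list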


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



