qemu-devel

From: Alex Williamson
Subject: Re: [PATCH v2 10/20] vfio/common: Record DMA mapped IOVA ranges
Date: Fri, 3 Mar 2023 12:40:25 -0700

On Fri, 3 Mar 2023 19:14:50 +0000
Joao Martins <joao.m.martins@oracle.com> wrote:

> On 03/03/2023 17:05, Alex Williamson wrote:
> > On Fri, 3 Mar 2023 16:58:55 +0000
> > Joao Martins <joao.m.martins@oracle.com> wrote:
> >   
> >> On 03/03/2023 00:19, Joao Martins wrote:  
> >>> On 02/03/2023 18:42, Alex Williamson wrote:    
> >>>> On Thu, 2 Mar 2023 00:07:35 +0000
> >>>> Joao Martins <joao.m.martins@oracle.com> wrote:    
> >>>>> @@ -426,6 +427,11 @@ void vfio_unblock_multiple_devices_migration(void)
> >>>>>      multiple_devices_migration_blocker = NULL;
> >>>>>  }
> >>>>>
> >>>>> +static bool vfio_have_giommu(VFIOContainer *container)
> >>>>> +{
> >>>>> +    return !QLIST_EMPTY(&container->giommu_list);
> >>>>> +}    
> >>>>
> >>>> I think it's the case, but can you confirm we build the giommu_list
> >>>> regardless of whether the vIOMMU is actually enabled?
> >>>>    
> >>> I think that is only non-empty once we have the first IOVA mappings;
> >>> e.g. in IOMMU passthrough mode *I think* it's empty. Let me confirm.
> >>>     
> >> Yeap, it's empty.
> >>  
> >>> Otherwise I'll have to find a TYPE_IOMMU_MEMORY_REGION object to
> >>> determine if the VM was configured with a vIOMMU or not. That is to
> >>> create the LM blocker.
> >>>     
> >> I am trying this way, with something like this, but neither
> >> x86_iommu_get_default() nor the below is really working out yet. I'm a
> >> little afraid of having to add the live migration blocker in each
> >> machine_init_done hook, unless there's a more obvious way. vfio_realize
> >> should run at a much later stage, so I am surprised that an IOMMU
> >> object doesn't exist at that time.
> > 
> > Can we just test whether the container address space is system_memory?  
> 
> IIUC, it doesn't work (see below snippet).
> 
> The problem is that you start as a regular VFIO guest, and it is only when
> the guest boots that new mappings get established/invalidated and
> propagated into listeners (vfio_listener_region_add), where they morph
> into having a giommu. And that's when you can figure out in higher layers
> that 'you have a vIOMMU', as that's when the address space gets changed?
> That is without being specific to a particular IOMMU model. Maybe
> region_add is where to add it, but then it depends on the guest.

This doesn't seem right to me; look for instance at
pci_device_iommu_address_space(), which returns address_space_memory when
there is no vIOMMU.  If devices share an address space, they can share a
container.  When a vIOMMU is present (even if not enabled), each device
gets its own container because it is in its own address space (modulo
devices within the same address space due to aliasing).  Thanks,

Alex
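
As a rough illustration of the approach discussed above (this is not the
actual patch), a minimal sketch could look like the following. The helper
name vfio_viommu_preset() is made up for this example; it assumes the QEMU
layout where a VFIOContainer points at its VFIOAddressSpace
(container->space->as) and relies on pci_device_iommu_address_space()
returning &address_space_memory when no vIOMMU translates for the device.

/*
 * Sketch only -- assumes "hw/vfio/vfio-common.h" for VFIOContainer and
 * "exec/address-spaces.h" for address_space_memory.
 */
#include "qemu/osdep.h"
#include "exec/address-spaces.h"
#include "hw/vfio/vfio-common.h"

static bool vfio_viommu_preset(VFIOContainer *container)
{
    /*
     * Devices without a vIOMMU live in the global system memory address
     * space; anything else means a vIOMMU is configured for the device,
     * even if the guest has not enabled it.
     */
    return container->space->as != &address_space_memory;
}

A check along these lines could then drive the live migration blocker from
vfio_realize (or vfio_connect_container) without tying it to any particular
vIOMMU model.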