From: Alex Williamson
Subject: Re: [RFC PATCH v2 3/3] vfio: Avoid disabling and enabling vectors repeatedly in VFIO migration
Date: Wed, 27 Jan 2021 07:21:31 -0700

On Wed, 27 Jan 2021 19:27:35 +0800
Shenming Lu <lushenming@huawei.com> wrote:

> On 2021/1/27 5:36, Alex Williamson wrote:
> > On Wed, 9 Dec 2020 16:09:19 +0800
> > Shenming Lu <lushenming@huawei.com> wrote:
> >   
> >> Different from the normal situation when the guest starts, we can
> >> know the max unmasked vector (at the beginning) after msix_load()
> >> in VFIO migration. So in order to avoid ineffectively disabling and  
> > 
> > s/ineffectively/inefficiently/?  It's "effective" either way I think.  
> 
> Yeah, I should say "inefficiently". :-)
> 
> >   
> >> enabling vectors repeatedly, let's allocate all needed vectors first
> >> and then enable these unmasked vectors one by one without disabling.
> >>
> >> Signed-off-by: Shenming Lu <lushenming@huawei.com>
> >> ---
> >>  hw/pci/msix.c         | 17 +++++++++++++++++
> >>  hw/vfio/pci.c         | 10 ++++++++--
> >>  include/hw/pci/msix.h |  2 ++
> >>  3 files changed, 27 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/hw/pci/msix.c b/hw/pci/msix.c
> >> index 67e34f34d6..bf291d3ff8 100644
> >> --- a/hw/pci/msix.c
> >> +++ b/hw/pci/msix.c
> >> @@ -557,6 +557,23 @@ unsigned int msix_nr_vectors_allocated(const PCIDevice *dev)
> >>      return dev->msix_entries_nr;
> >>  }
> >>  
> >> +int msix_get_max_unmasked_vector(PCIDevice *dev)
> >> +{
> >> +    int max_unmasked_vector = -1;
> >> +    int vector;
> >> +
> >> +    if ((dev->config[dev->msix_cap + MSIX_CONTROL_OFFSET] &
> >> +        (MSIX_ENABLE_MASK | MSIX_MASKALL_MASK)) == MSIX_ENABLE_MASK) {
> >> +        for (vector = 0; vector < dev->msix_entries_nr; vector++) {
> >> +            if (!msix_is_masked(dev, vector)) {
> >> +                max_unmasked_vector = vector;
> >> +            }
> >> +        }
> >> +    }
> >> +
> >> +    return max_unmasked_vector;
> >> +}  
> > 
> > Comments from QEMU PCI folks?
> >   
> >> +
> >>  static int msix_set_notifier_for_vector(PCIDevice *dev, unsigned int vector)
> >>  {
> >>      MSIMessage msg;
> >> diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
> >> index 51dc373695..e755ed2514 100644
> >> --- a/hw/vfio/pci.c
> >> +++ b/hw/vfio/pci.c
> >> @@ -568,6 +568,9 @@ static void vfio_msix_vector_release(PCIDevice *pdev, unsigned int nr)
> >>  
> >>  static void vfio_msix_enable(VFIOPCIDevice *vdev)
> >>  {
> >> +    int max_unmasked_vector = msix_get_max_unmasked_vector(&vdev->pdev);
> >> +    unsigned int used_vector = MAX(max_unmasked_vector, 0);
> >> +  
> > 
> > The above PCI function could also be done inline here pretty easily too:
> > 
> > unsigned int nr, max_vec = 0;
> > 
> > if (!msix_masked(&vdev->pdev)) {
> >     for (nr = 0; nr < msix_nr_vectors_allocated(&vdev->pdev); nr++) {
> >         if (!msix_is_masked(&vdev->pdev, nr)) {
> >             max_vec = nr;
> >         }
> >     }
> > }
> > 
> > It's a bit cleaner than the msix utility function, imo.  
> 
> Yeah, it makes sense.
> 
> >   
> >>      vfio_disable_interrupts(vdev);
> >>  
> >>      vdev->msi_vectors = g_new0(VFIOMSIVector, vdev->msix->entries);
> >> @@ -586,9 +589,12 @@ static void vfio_msix_enable(VFIOPCIDevice *vdev)
> >>       * triggering to userspace, then immediately release the vector, leaving
> >>       * the physical device with no vectors enabled, but MSI-X enabled, just
> >>       * like the guest view.
> >> +     * If there are unmasked vectors (such as in migration) which will be
> >> +     * enabled soon, we can allocate them here to avoid ineffectively disabling
> >> +     * and enabling vectors repeatedly later.
> > 
> > It just happens that migration triggers this usage model where the
> > MSI-X enable bit is set with vectors unmasked in the vector table, but
> > this is not unique to migration, guests can follow this pattern as well.
> > Has this been tested with a variety of guests?  Logically it seems
> > correct, but always good to prove so.  Thanks,  
> 
> I have tested it in migration and normal guest startup (only the latest 
> Linux).
> And I can try to test with some other kernels, could you be more specific 
> about this?

Minimally also Windows, ideally a BSD as well.  Thanks,

Alex

> >>       */
> >> -    vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
> >> -    vfio_msix_vector_release(&vdev->pdev, 0);
> >> +    vfio_msix_vector_do_use(&vdev->pdev, used_vector, NULL, NULL);
> >> +    vfio_msix_vector_release(&vdev->pdev, used_vector);
> >>  
> >>      if (msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
> >>                                    vfio_msix_vector_release, NULL)) {
> >> diff --git a/include/hw/pci/msix.h b/include/hw/pci/msix.h
> >> index 4c4a60c739..4bfb463fa6 100644
> >> --- a/include/hw/pci/msix.h
> >> +++ b/include/hw/pci/msix.h
> >> @@ -23,6 +23,8 @@ void msix_uninit_exclusive_bar(PCIDevice *dev);
> >>  
> >>  unsigned int msix_nr_vectors_allocated(const PCIDevice *dev);
> >>  
> >> +int msix_get_max_unmasked_vector(PCIDevice *dev);
> >> +
> >>  void msix_save(PCIDevice *dev, QEMUFile *f);
> >>  void msix_load(PCIDevice *dev, QEMUFile *f);
> >>    
> > 
> > .
> >   
> 



