Re: [PATCH 4/4] vl: Prioritize realizations of devices
From: Igor Mammedov
Subject: Re: [PATCH 4/4] vl: Prioritize realizations of devices
Date: Thu, 2 Sep 2021 09:46:04 +0200
On Thu, 26 Aug 2021 09:43:59 -0400
Peter Xu <peterx@redhat.com> wrote:
> On Thu, Aug 26, 2021 at 01:36:29PM +0200, Igor Mammedov wrote:
> > On Thu, 26 Aug 2021 10:01:01 +0200
> > Markus Armbruster <armbru@redhat.com> wrote:
> >
> > > Peter Xu <peterx@redhat.com> writes:
> > >
> > > > On Wed, Aug 25, 2021 at 05:50:23PM -0400, Peter Xu wrote:
> > > >> On Wed, Aug 25, 2021 at 02:28:55PM +0200, Markus Armbruster wrote:
> > > >> > Having thought about this a bit more...
> > ...
> > > > Any further thoughts will be greatly welcomed.
> > >
> > > I'd like to propose a more modest solution: detect the problem and fail.
> > That, or proper dependency tracking (which stands a chance of working with
> > QMP, but as was said, it's complex)
> >
> > > A simple state machine can track "has IOMMU" state. It has three states
> > > "no so far", "yes", and "no", and two events "add IOMMU" and "add device
> > > that needs to know". State diagram:
> > >
> > > no so far
> > > +--- (start state) ---+
> > > | |
> > > add IOMMU | | add device that
> > > | | needs to know
> > > v v
> > > +--> yes no <--+
> > > | | add device that | |
> > > +-----+ needs to know +-----+
> > >
> > > "Add IOMMU" in state "no" is an error.
> >
> > The question is how we distinguish a "device that needs to know"
> > from a device that doesn't need to know, and the recently
> > added 'bypass IOMMU' feature adds more fun to this.
>
> Maybe we can start from "no device needs to know"? Then add more into it when
> the list expands.
>
> vfio-pci should be a natural fit because we're sure it won't break any valid
> old configurations. However, we may need to be careful when adding more
> devices, e.g. some old configuration might still work on old binaries, but
> I'm not sure.
> Off-topic: I'm wondering whether bypass_iommu is just a work-around for not
> using iommu=pt on the guest command line? Is there any performance comparison
> of bypass_iommu against having an IOMMU but with iommu=pt? At least the Intel
> IOMMU has pt enabled explicitly, i.e. it shouldn't even need a shadow IOMMU
> page table in the guest but only a single bit in the device context entry
> saying "this device wants to pass through the IOMMU", so I would expect the
> performance to be similar to explicitly bypassing the IOMMU on the QEMU
> command line.
They wanted to have a mix of IOMMU and non-IOMMU devices in a VM
(the last merged revision was: [PATCH v5 0/9] IOMMU: Add support for IOMMU
Bypass Feature).
But the 'why' reasoning was lost somewhere; CCing the author.
>
> Thanks,
>