
Re: [PATCH v2 1/4] softmmu: Support concurrent bounce buffers


From: Mattias Nissler
Subject: Re: [PATCH v2 1/4] softmmu: Support concurrent bounce buffers
Date: Tue, 5 Sep 2023 09:38:39 +0200

On Fri, Sep 1, 2023 at 3:41 PM Markus Armbruster <armbru@redhat.com> wrote:
>
> Stefan Hajnoczi <stefanha@redhat.com> writes:
>
> > On Wed, Aug 23, 2023 at 04:54:06PM -0400, Peter Xu wrote:
> >> On Wed, Aug 23, 2023 at 10:08:08PM +0200, Mattias Nissler wrote:
> >> > On Wed, Aug 23, 2023 at 7:35 PM Peter Xu <peterx@redhat.com> wrote:
> >> > > On Wed, Aug 23, 2023 at 02:29:02AM -0700, Mattias Nissler wrote:
> >> > > > diff --git a/softmmu/vl.c b/softmmu/vl.c
> >> > > > index b0b96f67fa..dbe52f5ea1 100644
> >> > > > --- a/softmmu/vl.c
> >> > > > +++ b/softmmu/vl.c
> >> > > > @@ -3469,6 +3469,12 @@ void qemu_init(int argc, char **argv)
> >> > > >                  exit(1);
> >> > > >  #endif
> >> > > >                  break;
> >> > > > +            case QEMU_OPTION_max_bounce_buffer_size:
> >> > > > +                if (qemu_strtosz(optarg, NULL, &max_bounce_buffer_size) < 0) {
> >> > > > +                    error_report("invalid -max-bounce-buffer-size value");
> >> > > > +                    exit(1);
> >> > > > +                }
> >> > > > +                break;
> >> > >
> >> > > PS: I had a vague memory that we do not recommend adding more qemu cmdline
> >> > > options, but I don't know enough about the plan to say anything definite.
> >> >
> >> > I am aware of that, and I'm really not happy with the command line
> >> > option myself. Consider the command line flag a straw man I put in to
> >> > see whether any reviewers have better ideas :)
> >> >
> >> > More seriously, I actually did look around to see whether I can add
> >> > the parameter to one of the existing option groupings somewhere, but
> >> > neither do I have a suitable QOM object that I can attach the
> >> > parameter to, nor did I find any global option group that fits: this
> >> > is not really memory configuration, and it's not really CPU
> >> > configuration, it's more related to shared device model
> >> > infrastructure... If you have a good idea for a home for this, I'm all
> >> > ears.
> >>
> >> No good & simple suggestion here, sorry.  We can keep the option there
> >> until someone jumps in, then the better alternative could also come along.
> >>
> >> After all, I expect that if we choose a sensible enough default value, this
> >> new option shouldn't need to be used by anyone for real.
> >
> > QEMU commits to stability in its external interfaces. Once the
> > command-line option is added, it needs to be supported in the future or
> > go through the deprecation process. I think we should agree on the
> > command-line option now.
> >
> > Two ways to avoid the issue:
> > 1. Drop the command-line option until someone needs it.
>
> Avoiding unneeded configuration knobs is always good.
>
> > 2. Make it an experimental option (with an "x-" prefix).
>
> Fine if actual experiments are planned.
>
> Also fine if it's a development or debugging aid.

To a certain extent it is: I've been playing with different device
models and bumping the parameter until their DMA requests stopped
failing.
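
(Concretely, that meant launching QEMU with something like
-max-bounce-buffer-size 16M, where 16M is just an illustrative value, and
adjusting it until the device model's DMA mappings succeeded; suitable sizes
are workload-dependent, as the documentation text below notes.)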

>
> > The closest to a proper solution that I found was adding it as a
> > -machine property. What bothers me is that if QEMU supports
> > multi-machine emulation in a single process some day, then the property
> > doesn't make sense since it's global rather than specific to a machine.
> >
> > CCing Markus Armbruster for ideas.
>
> I'm afraid I'm lacking context.  Glancing at the patch...
>
>     ``-max-bounce-buffer-size size``
>         Set the limit in bytes for DMA bounce buffer allocations.
>
>         DMA bounce buffers are used when device models request memory-mapped
>         access to memory regions that can't be directly mapped by the QEMU
>         process, so the memory must be read or written to a temporary local
>         buffer for the device model to work with. This is the case e.g. for
>         I/O memory regions, and when running in multi-process mode without
>         shared access to memory.
>
>         Whether bounce buffering is necessary depends heavily on the device
>         model implementation. Some devices use explicit DMA read and write
>         operations which do not require bounce buffers. Some devices, notably
>         storage, will retry a failed DMA map request after bounce buffer space
>         becomes available again. Most other devices will bail when encountering
>         map request failures, which will typically appear to the guest as a
>         hardware error.
>
>         Suitable bounce buffer size values depend on the workload and guest
>         configuration. A few kilobytes up to a few megabytes are common sizes
>         encountered in practice.
>
> Sounds quite device-specific.  Why isn't this configured per device?

It would be nice to use a property on the device that originates the
DMA operation to configure this. However, I don't see how to do that
in a reasonable way without bigger changes: a typical call path is
pci_dma_map -> dma_memory_map -> address_space_map. While pci_dma_map
has a PCIDevice*, address_space_map only receives the AddressSpace*.
So we'd probably have to add a new QObject parameter to
address_space_map that identifies the originator and thread it through
the call chain? Or is there a better alternative for supplying context
information to address_space_map? Let me know if any of these
approaches sound appropriate and I'll be happy to explore them further.
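
For illustration, the rough shape I have in mind (untested sketch only;
address_space_map_with_owner and the Object *owner argument are invented
for this example and are not existing QEMU API, and the sketch skips the
dma_memory_map layer for brevity):

    /* Sketch: let callers hand the originating device down to the mapping
     * layer, so a per-device bounce buffer limit could be consulted when
     * the bounce buffer is allocated. */
    void *address_space_map_with_owner(AddressSpace *as, hwaddr addr,
                                       hwaddr *plen, bool is_write,
                                       MemTxAttrs attrs, Object *owner);

    /* pci_dma_map() already has the PCIDevice*, so it could forward it
     * (dma_addr_t and hwaddr are both uint64_t, hence the cast): */
    static inline void *pci_dma_map(PCIDevice *dev, dma_addr_t addr,
                                    dma_addr_t *plen, DMADirection dir)
    {
        return address_space_map_with_owner(pci_get_address_space(dev),
                                            addr, (hwaddr *)plen,
                                            dir == DMA_DIRECTION_FROM_DEVICE,
                                            MEMTXATTRS_UNSPECIFIED,
                                            OBJECT(dev));
    }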


