Re: [PATCH v3 2/5] softmmu: Support concurrent bounce buffers


From: Peter Xu
Subject: Re: [PATCH v3 2/5] softmmu: Support concurrent bounce buffers
Date: Fri, 15 Sep 2023 11:57:31 -0400

On Fri, Sep 15, 2023 at 11:32:31AM +0200, Mattias Nissler wrote:
> > > @@ -3105,7 +3105,8 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
> > >      as->ioeventfds = NULL;
> > >      QTAILQ_INIT(&as->listeners);
> > >      QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
> > > -    as->bounce.in_use = false;
> > > +    as->max_bounce_buffer_size = 4096;
> >
> > Instead of hard-coding this 4k again (besides the pci property), maybe we can
> > have address_space_init_with_bouncebuffer() and always pass it in from pci
> > do_realize?  Then by default no bounce buffer would be supported for the AS,
> > only when requested.
> 
> I haven't verified in a running configuration, but I believe bounce
> buffering is also used with non-PCI code; for example,
> sysbus_ahci_realize grabs &address_space_memory. So, IMO it makes
> sense to keep the old default for non-PCI address spaces, unless we
> create additional knobs to set the limit for these?

Oh okay, in that case do we want a macro defining the default for all
address spaces (as 4K), and then also use that macro in the pci property for
its default value?  Maybe that's slightly better than the hard-coded number.
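
Roughly something like the below, just to illustrate (sketch only; the macro
name and the property line are my assumption, not the actual patch):

    /* One shared default, e.g. somewhere in the memory headers. */
    #define DEFAULT_MAX_BOUNCE_BUFFER_SIZE 4096

    /* softmmu/physmem.c: address_space_init() */
    as->max_bounce_buffer_size = DEFAULT_MAX_BOUNCE_BUFFER_SIZE;

    /* hw/pci/pci.c: reuse the macro as the property default. */
    DEFINE_PROP_SIZE("x-max-bounce-buffer-size", PCIDevice,
                     max_bounce_buffer_size, DEFAULT_MAX_BOUNCE_BUFFER_SIZE),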

> > > +            /* Write bounce_buffer_size before reading map_client_list. */
> > > +            smp_mb();
> >
> > I know it comes from the old code... but I don't know why this is needed;
> > mutex lock should already contain an mb() before the list iterations later.
> > Just raising it; maybe Paolo would like to comment.
> 
> Hm, are you sure that qemu_mutex_lock includes a full memory barrier?

No. :)

> The atomics docs say that pthread_mutex_lock is guaranteed to have
> acquire semantics, but that doesn't guarantee that previous writes are
> visible elsewhere. We need a release of bounce_buffer_size and an
> acquire of map_client_list. The latter is implied by qemu_mutex_lock,
> so I could arguably change this to smp_wmb().

Yeah, I think I made that mistake before, sorry.  At some point I looked
into the x86 impl and I believe it was mb() there, so I kept the impression
that it would always be, but that's not really the case.  I think you're
right that mutex_lock semantics only guarantee an ACQUIRE, and that's not
the same as mb(), at least not always.

Changing it to wmb() makes sense to me, but indeed that may be more suitable
for another patch.  Maybe it's easier to just leave it as-is, as it shouldn't
be a super hot path anyway.
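
For the record, the pattern we're discussing is roughly the below (a
simplified sketch of the unmap side, not the actual patch code; the lock and
list names are approximate):

    /* Publish the new bounce_buffer_size, then look at the map client
     * list to wake anyone waiting for space.
     */
    qatomic_sub(&as->bounce_buffer_size, len);
    smp_mb();   /* The acquire implied by taking the lock below only keeps
                 * the list accesses from moving up; it does not keep the
                 * store above from moving down past them, hence the
                 * explicit barrier in the existing code.
                 */
    qemu_mutex_lock(&map_client_list_lock);
    /* ... walk map_client_list and notify waiters ... */
    qemu_mutex_unlock(&map_client_list_lock);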

Thanks,

-- 
Peter Xu



