Re: [PATCH v1 2/3] io: Add zerocopy and errqueue


From: Daniel P. Berrangé
Subject: Re: [PATCH v1 2/3] io: Add zerocopy and errqueue
Date: Thu, 2 Sep 2021 11:28:15 +0100
User-agent: Mutt/2.0.7 (2021-05-04)

On Thu, Sep 02, 2021 at 07:19:58AM -0300, Leonardo Bras Soares Passos wrote:
> On Thu, Sep 2, 2021 at 6:50 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
> >
> > On Thu, Sep 02, 2021 at 06:34:01AM -0300, Leonardo Bras Soares Passos wrote:
> > > On Thu, Sep 2, 2021 at 5:47 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
> > > >
> > > > On Thu, Sep 02, 2021 at 03:38:11AM -0300, Leonardo Bras Soares Passos wrote:
> > > >
> > > > > > I would suggest checking in close(), but as mentioned
> > > > > > earlier, I think the design is flawed because the caller
> > > > > > fundamentally needs to know about completion for every
> > > > > > single write they make in order to know when the buffer
> > > > > > can be released / reused.
> > > > >
> > > > > Well, there could be a flush mechanism (maybe in io_sync_errck(),
> > > > > activated with a parameter flag, or in a different method if a
> > > > > callback is preferred): in the MSG_ZEROCOPY docs, the example uses a
> > > > > poll() syscall after each packet sent, which means the fd gets a
> > > > > signal after each sendmsg(), whether it failed or not.
> > > > >
> > > > > We could harness this with a poll() and a relatively high timeout:
> > > > > - We stop sending packets, and then call poll().
> > > > > - Whenever poll() returns 0, a timeout happened: too long passed
> > > > > without a sendmsg() notification, meaning all the packets are sent.
> > > > > - If it returns anything else, we go back to fixing the errors found
> > > > > (re-send).
> > > > >
> > > > > The problem may be defining the value of this timeout, but it could
> > > > > be called only when zerocopy is active.
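As a sketch of the poll()-with-timeout idea quoted above (wait_zerocopy_quiesce() is a hypothetical helper name, not anything from the patch series): MSG_ZEROCOPY completion notifications are queued on the socket's error queue and surface to poll() as POLLERR, so a timeout with nothing queued is the "everything finished" signal.

```c
#include <poll.h>
#include <sys/socket.h>

/* Hypothetical helper: after the sender stops calling sendmsg(), poll the
 * socket.  A return of 0 (timeout) means no completion notification
 * arrived within timeout_ms, which this heuristic takes to mean all
 * zerocopy sends have completed.  Returns 0 on quiesce, -1 on error. */
static int wait_zerocopy_quiesce(int fd, int timeout_ms)
{
    for (;;) {
        struct pollfd pfd = { .fd = fd, .events = 0 }; /* POLLERR is implicit */
        int n = poll(&pfd, 1, timeout_ms);
        if (n == 0) {
            return 0;            /* timed out: nothing pending, assume done */
        }
        if (n < 0) {
            return -1;           /* poll() itself failed */
        }
        /* Drain queued notifications, then poll again.  Each message's
         * cmsg payload is a struct sock_extended_err describing a range of
         * completed sendmsg() calls; real errors would be re-sent here. */
        for (;;) {
            char ctrl[128];
            struct msghdr msg = {
                .msg_control = ctrl,
                .msg_controllen = sizeof(ctrl),
            };
            if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
                break;           /* EAGAIN: error queue drained */
            }
        }
    }
}
```

As the mail says, the hard part is picking timeout_ms: too short risks declaring completion while notifications are still in flight.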
> > > >
> > > > Maybe we need to check completions at the end of each iteration of the
> > > > migration dirtypage loop ?
> > >
> > > Sorry, I am really new to this, and I still couldn't understand why we
> > > would need to check at the end of each iteration, instead of doing a
> > > full check at the end.
> >
> > The end of each iteration is an implicit synchronization point in the
> > current migration code.
> >
> > For example, we might do 2 iterations of migration pre-copy, and then
> > switch to post-copy mode. If the data from those 2 iterations hasn't
> > been sent by the point we switch to post-copy, that is a semantic
> > change from the current behaviour. I don't know whether that would have
> > a problematic effect on the migration process or not. Checking the
> > async completions at the end of each iteration, though, would keep the
> > semantics close to the current ones, reducing the risk of unexpected
> > problems.
> >
> 
> What if we do the 'flush()' before we start post-copy, instead of after
> each iteration? Would that be enough?

Possibly, yes. This really needs David G's input, since he understands
the code in way more detail than me.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
