qemu-devel

Re: [PATCH v1 2/3] io: Add zerocopy and errqueue


From: Leonardo Bras Soares Passos
Subject: Re: [PATCH v1 2/3] io: Add zerocopy and errqueue
Date: Thu, 2 Sep 2021 07:19:58 -0300

On Thu, Sep 2, 2021 at 6:50 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Thu, Sep 02, 2021 at 06:34:01AM -0300, Leonardo Bras Soares Passos wrote:
> > On Thu, Sep 2, 2021 at 5:47 AM Daniel P. Berrangé <berrange@redhat.com> 
> > wrote:
> > >
> > > On Thu, Sep 02, 2021 at 03:38:11AM -0300, Leonardo Bras Soares Passos 
> > > wrote:
> > >
> > > > > I would suggest checking in close(), but as mentioned
> > > > > earlier, I think the design is flawed because the caller
> > > > > fundamentally needs to know about completion for every
> > > > > single write they make in order to know when the buffer
> > > > > can be released / reused.
> > > >
> > > > Well, there could be a flush mechanism (maybe in io_sync_errck(),
> > > > activated with a parameter flag, or in a separate method if a
> > > > callback is preferred):
> > > > The MSG_ZEROCOPY docs show an example that calls poll() after each
> > > > packet sent, so the fd gets a notification after each sendmsg()
> > > > completes, with an error or not.
> > > >
> > > > We could harness this with a poll() and a relatively high timeout:
> > > > - We stop sending packets, and then call poll().
> > > > - If poll() returns 0, the timeout expired with no new notifications
> > > > arriving, meaning all the packets have been sent.
> > > > - If it returns anything else, we go back to fixing the errors found
> > > > (re-sending).
> > > >
> > > > The problem may be defining the value of this timeout, but it would
> > > > only be needed when zerocopy is active.
> > >
> > > Maybe we need to check completions at the end of each iteration of the
> > > migration dirtypage loop ?
> >
> > Sorry, I am really new to this, and I still couldn't understand why we
> > would need to check at the end of each iteration, instead of doing a full
> > check at the end.
>
> The end of each iteration is an implicit synchronization point in the
> current migration code.
>
> For example, we might do 2 iterations of migration pre-copy, and then
> switch to post-copy mode. If the data from those 2 iterations hasn't
> been sent at the point we switch to post-copy, that is a semantic
> change from current behaviour. I don't know if that will have a
> problematic effect on the migration process, or not. Checking the
> async completions at the end of each iteration, though, would keep
> the semantics close to the current ones, reducing the risk of
> unexpected problems.
>

What if we do the 'flush()' before we start post-copy, instead of after each
iteration? Would that be enough?

>
> > > > or something like this, if we want it to stick with zerocopy if
> > > > setting it off fails.
> > > > if (ret >= 0) {
> > > >     sioc->zerocopy_enabled = enabled;
> > > > }
> > >
> > > Yes, that is a bug fix we need, but actually I was referring
> > > to the later sendmsg() call. Surely we need to clear MSG_ZEROCOPY
> > > from 'flags', if zerocopy_enabled is not set, to avoid EINVAL
> > > from sendmsg.
> >
> > Agree, something like io_writev(,, sioc->zerocopy_enabled ?
> > MSG_ZEROCOPY : 0, errp) should do, right?
> > (or an io_async_writev() that falls back to io_writev() if
> > zerocopy is disabled)
>
> Something like that - depends what API we end up deciding on

Great :)

>
> > > > We could implement KTLS as io_async_writev() for Channel_TLS, and 
> > > > change this
> > > > flag to async_enabled. If KTLS is not available, it would fall back to
> > > > using gnuTLS on io_writev, just as happens with zerocopy.
> > >
> > > If gnutls is going to support KTLS in a way we can use, I don't want to
> > > have any of that duplicated in QEMU code. I want to be able to delegate
> > > as much as possible to gnutls, at least so that QEMU isn't in the loop
> > > for any crypto sensitive parts, as that complicates certification
> > > of crypto.
> >
> > Yeah, that's a very good argument :)
> > I think it's probably the case to move from the current callback
> > implementation to one in which we hand gnuTLS the file descriptor.
>
> That would introduce a coupling between the two QIOChannel impls
> though, which is avoided so far, because we didn't want an assumption
> that a QIOChannel == a FD.

Interesting, QIOChannelTLS would necessarily need to have a QIOChannelSocket,
is that right?

Maybe we could have a QIOChannelKTLS, which would use gnuTLS but get to
have its own fd, though that would probably cause a lot of duplicated code.
On the other hand, this way the QIOChannelTLS would still be able to
accept any other channel, if desired.

Anyway, it's a complicated decision :)

> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
>

Thank you, I am learning a lot today :)

Best regards,
Leonardo
