
Re: [RFC PATCH 0/5] mptcp support


From: Dr. David Alan Gilbert
Subject: Re: [RFC PATCH 0/5] mptcp support
Date: Wed, 14 Apr 2021 19:49:11 +0100
User-agent: Mutt/2.0.6 (2021-03-06)

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Mon, Apr 12, 2021 at 03:51:10PM +0100, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > On Thu, Apr 08, 2021 at 08:11:54PM +0100, Dr. David Alan Gilbert (git) 
> > > wrote:
> > > > From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> > > > 
> > > > Hi,
> > > >   This RFC set adds support for multipath TCP (mptcp),
> > > > in particular on the migration path - but should be extensible
> > > > to other users.
> > > > 
> > > >   Multipath-TCP is a bit like bonding, but at the transport
> > > > layer; you can use it to handle failure, but can also use it to
> > > > split traffic across multiple interfaces.
> > > > 
> > > >   Using a pair of 10Gb interfaces, I've managed to get 19Gbps
> > > > (with the only tuning being using huge pages and turning the MTU up).
> > > > 
> > > >   It needs a bleeding-edge Linux kernel (in some older ones you get
> > > > false accept messages for the subflows), and a C lib that has the
> > > > constants defined (as current glibc does).
> > > > 
> > > >   To use it you just need to append ,mptcp to an address;
> > > > 
> > > >   -incoming tcp:0:4444,mptcp
> > > >   migrate -d tcp:192.168.11.20:4444,mptcp
> > > 
> > > What happens if you only enable mptcp flag on one side of the
> > > stream (whether client or server), does it degrade to boring
> > > old single path TCP, or does it result in an error ?
> > 
> > I've just tested this and it matches what pabeni said; it seems to just
> > fall back.
> > 
> > > >   I had a quick go at trying NBD as well, but I think it needs
> > > > some work with the parsing of NBD addresses.
> > > 
> > > In theory this is applicable to anywhere that we use sockets.
> > > Anywhere that is configured with the QAPI SocketAddress /
> > > SocketAddressLegacy type will get it for free AFAICT.
> > 
> > That was my hope.
> > 
> > > Anywhere that is configured via QemuOpts will need an enhancement.
> > > 
> > > IOW, I would think NBD already works if you configure NBD via
> > > QMP with nbd-server-start, or block-export-add.  qemu-nbd will
> > > need cli options added.
> > > 
> > > The block layer clients for NBD, Gluster, Sheepdog and SSH also
> > > all get it for free when configured via QMP, or -blockdev AFAICT
> > 
> > Have you got some examples via QMP?
> > I'd failed trying -drive 
> > if=virtio,file=nbd://192.168.11.20:3333,mptcp=on/zero
> 
> I never remember the mapping to blockdev QAPI schema, especially
> when using legacy filename syntax with the URI.
> 
> Try instead
> 
>  -blockdev driver=nbd,host=192.168.11.20,port=3333,mptcp=on,id=disk0backend
>  -device virtio-blk,drive=disk0backend,id=disk0

That doesn't look like quite the right syntax, but it got me closer, and
it's now working with no further code changes:

On the source:

qemu... -nographic -M none -drive if=none,file=my.qcow2,id=mydisk
(qemu) nbd_server_start 0.0.0.0:3333,mptcp=on
(qemu) nbd_server_add -w mydisk

On the destination:
-blockdev 
driver=nbd,server.type=inet,server.host=192.168.11.20,server.port=3333,server.mptcp=on,node-name=nbddisk,export=mydisk
 -device virtio-blk,drive=nbddisk,id=disk0

and it successfully booted off it, and it looks like it has two flows.
(It didn't get that great a bandwidth, but I'm not sure what that's due
to.)

Dave
> 
> 
> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



