qemu-devel
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: Accelerating non-standard disk types


From: Raphael Norwitz
Subject: Re: Accelerating non-standard disk types
Date: Tue, 31 May 2022 03:06:20 +0000
User-agent: Mutt/1.10.1 (2018-07-13)

On Wed, May 25, 2022 at 05:00:04PM +0100, Stefan Hajnoczi wrote:
> On Thu, May 19, 2022 at 06:39:39PM +0000, Raphael Norwitz wrote:
> > On Tue, May 17, 2022 at 03:53:52PM +0200, Paolo Bonzini wrote:
> > > On 5/16/22 19:38, Raphael Norwitz wrote:
> > > > [1] Keep using the SCSI translation in QEMU but back vDisks with a
> > > > vhost-user-scsi or vhost-user-blk backend device.
> > > > [2] Implement SATA and IDE emulation with vfio-user (likely with an SPDK
> > > > client?).
> > > > [3] We've also been looking at your libblkio library. From your
> > > > description in
> > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.gnu.org_archive_html_qemu-2Ddevel_2021-2D04_msg06146.html&d=DwICaQ&c=s883GpUCOChKOHiocYtGcg&r=In4gmR1pGzKB8G5p6LUrWqkSMec2L5EtXZow_FZNJZk&m=wBSqcw0cal3wPP87YIKgFgmqMHjGCC3apYf4wCn1SIrX6GW_FR-J9wO68v-cyrpn&s=CP-6ZY-gqgQ2zLAJdR8WVTrMBoqmFHilGvW_qnf2myU&e=
> > > >    it
> > > > sounds like it may definitely play a role here, and possibly provide the
> > > > nessesary abstractions to back I/O from these emulated disks to any
> > > > backends we may want?
> > > 
> > > First of all: have you benchmarked it?  How much time is spent on MMIO vs.
> > > disk I/O?
> > >
> > 
> > Good point - we haven’t benchmarked the emulation, exit and translation
> > overheads - it is very possible speeding up disk I/O may not have a huge
> > impact. We would definitely benchmark this before exploring any of the
> > options seriously, but as you rightly note, performance is not the only
> > motivation here.
> > 
> > > Of the options above, the most interesting to me is to implement a
> > > vhost-user-blk/vhost-user-scsi backend in QEMU, similar to the NVMe one,
> > > that would translate I/O submissions to virtqueue (including polling and 
> > > the
> > > like) and could be used with SATA.
> > >
> > 
> > We were certainly eyeing [1] as the most viable in the immediate future.
> > That said, since a vhost-user-blk driver has been added to libblkio, [3]
> > also sounds like a strong option. Do you see any long term benefit to
> > translating SATA/IDE submissions to virtqueues in a world where libblkio
> > is to be adopted?
> >
> > > For IDE specifically, I'm not sure how much it can be sped up since it has
> > > only 1 in-flight operation.  I think using KVM coalesced I/O could provide
> > > an interesting boost (assuming instant or near-instant reply from the
> > > backend).  If all you're interested in however is not really performance,
> > > but rather having a single "connection" to your back end, vhost-user is
> > > certainly an option.
> > > 
> > 
> > Interesting - I will take a look at KVM coalesced I/O.
> > 
> > You’re totally right though, performance is not our main interest for
> > these disk types. I should have emphasized offload rather than
> > acceleration and performance. We would prefer to QA and support as few
> > data paths as possible, and a vhost-user offload mechanism would allow
> > us to use the same path for all I/O. I imagine other QEMU users who
> > offload to backends like SPDK and use SATA/IDE disk types may feel
> > similarly?
> 
> It's nice to have a single target (e.g. vhost-user-blk in SPDK) that
> handles all disk I/O. On the other hand, QEMU would still have the
> IDE/SATA emulation and libblkio vhost-user-blk driver, so in the end it
> may not reduce the amount of code that you need to support.
> 

Apologies for the late reply - I was on PTO.

For us it’s not so much about the overall LOC we support. We have our
own iSCSI client implementation with embedded business logic which we
use for SCSI disks. Continuing to support SATA and IDE disks without our
implementation has been really troublesome so, even if it means more
LOC, we would really like to unify our data path at least at the iSCSI
layer.

While the overall code may not be reduced so much for many others today,
it may make a significant difference in the future. I can imagine some
QEMU users may want to deprecate (or not implement) iSCSI target support
in favor of NVMe over fabrics and still support these disk types. Being
able to offload the transport layer via vhost-user-blk (either with some
added logic on top of the existing SCSI translation layer or with
libblkio) would make this easy.

Does that sound reasonable?

> Stefan


reply via email to

[Prev in Thread] Current Thread [Next in Thread]