From: Dominique Martinet
Subject: Re: Can not set high msize with virtio-9p (Was: Re: virtiofs vs 9p performance)
Date: Thu, 25 Feb 2021 00:43:57 +0900
User-agent: Mutt/1.10.1 (2018-07-13)

Christian Schoenebeck wrote on Wed, Feb 24, 2021 at 04:16:52PM +0100:
> Misapprehension + typo(s) in my previous message, sorry Michael. That's 500k 
> of course (not 5k), yes.
> 
> Let me rephrase that question: are you aware of something in virtio that 
> would per se mandate an absolute hard coded message size limit (e.g. from 
> virtio specs perspective or maybe some compatibility issue)?
> 
> If not, we would try getting rid of that hard coded limit of the 9p client on 
> kernel side in the first place, because the kernel's 9p client already has a 
> dynamic runtime option 'msize' and that hard coded enforced limit (500k) is a 
> performance bottleneck like I said.

We could probably set it at init time through virtio_max_dma_size(vdev)
like virtio_blk does (I just tried and got 2^64, so we can probably
expect virtually no limit there).

I'm not too familiar with virtio, so feel free to try it and send me a
patch if it works -- the size drop from 512k to 500k is old enough that
things have probably changed in the background since then.
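
To illustrate what I mean, something along these lines in the virtio
transport (completely untested, and p9_virtio_max_msize() plus the cap
are names/values I just made up for the sketch):

#include <linux/virtio.h>
#include <linux/sizes.h>
#include <linux/minmax.h>

/*
 * Untested sketch: derive the transport's message size limit from the
 * device at probe time, the way virtblk_probe() does, instead of the
 * hard coded 500k.  The function name and the fallback cap are made up.
 */
static size_t p9_virtio_max_msize(struct virtio_device *vdev)
{
	size_t max_dma = virtio_max_dma_size(vdev);

	/* On my setup this returns 2^64, i.e. effectively no limit, so
	 * in practice the user-requested msize would become the ceiling. */
	return min_t(size_t, max_dma, SZ_512M /* arbitrary sanity cap */);
}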


On the 9p side itself, unrelated to virtio, we don't want to make it
*too* big: the client code doesn't use any scatter-gather and will
allocate upfront contiguous buffers of the size that got negotiated --
that can get ugly quite fast, but we can leave it up to users to decide.
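
For reference, this is roughly what the allocation looks like today
(simplified from net/9p/client.c, not a literal copy -- the real code
also has a kmem_cache fast path):

#include <linux/slab.h>
#include <net/9p/9p.h>
#include <net/9p/client.h>

/*
 * Simplified sketch of the current behaviour: every request gets its
 * tx and rx buffers as single contiguous allocations of ~msize bytes.
 */
static int p9_fcall_init(struct p9_client *c, struct p9_fcall *fc,
			 int alloc_msize)
{
	fc->sdata = kmalloc(alloc_msize, GFP_NOFS);
	if (!fc->sdata)
		return -ENOMEM;
	fc->capacity = alloc_msize;
	return 0;
}

So with e.g. msize=1048576 every in-flight request pins two contiguous
~1MB allocations (tx and rx), which is why I'd rather keep the default
reasonable even if we lift the hard limit.
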
One of my very-long-term goals would be to address that; if someone has
cycles to work on it I'd gladly review any patch in that area.
A possible implementation path would be to have each transport declare
whether it supports scatter-gather or not and handle it accordingly
until all transports have migrated, so one wouldn't need to touch e.g.
rdma or xen if one doesn't have the hardware to test them in the short
term.
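
Concretely I'm thinking of something like this (nothing of the sort
exists today, the flag and the helper are made up just to sketch the
migration path):

/*
 * Converted transports opt in to scatter-gather buffers, the others
 * (rdma, xen, ...) keep the current contiguous allocation until
 * someone with the hardware converts and tests them.
 */
struct p9_trans_module {
	/* ... existing fields (name, maxsize, create, request, ...) ... */
	bool supports_sg;		/* hypothetical capability flag */
};

static int p9_alloc_req_buffer(struct p9_client *c, struct p9_fcall *fc)
{
	if (c->trans_mod->supports_sg)
		return p9_fcall_init_sg(c, fc);	/* hypothetical sg-based path */

	/* legacy path, same as today: one contiguous buffer of ~msize */
	fc->sdata = kmalloc(c->msize, GFP_NOFS);
	if (!fc->sdata)
		return -ENOMEM;
	fc->capacity = c->msize;
	return 0;
}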

The next best thing would be David's netfs helpers and sending
concurrent requests if you use cache, but that's not merged yet either
so it'll be a few cycles as well.


Cheers,
-- 
Dominique


