Re: [PATCH] vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support


From: Jason Wang
Subject: Re: [PATCH] vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support
Date: Wed, 10 Feb 2021 12:05:33 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0


On 2021/2/9 2:26 AM, Peter Xu wrote:
> Kevin,
>
> On Mon, Feb 08, 2021 at 07:03:08AM +0000, Tian, Kevin wrote:
>> It really depends on the definition of dev-iotlb in this context. To me the
>> fact that virtio-iommu needs to notify the kernel for updating the split cache
>> is already a sort of dev-iotlb semantics, regardless of whether it's delivered
>> through an iotlb message or a dev-iotlb message in a specific implementation. 😊
> Yeah, maybe it turns out that we'll just need to implement dev-iotlb for
> virtio-iommu.


Note that on top of device-IOTLB, a device may choose to implement an IOMMU which supports #PF (page faults). In that case, dev-iotlb semantics are not a must. (Or it can cooperate with things like ATS if the driver wants.)

Virtio will probably provide this feature in the future.

Thanks



> I am completely fine with that and I'm never against it. :) I was throwing out
> a pure question only, because I don't know the answer.
>
> My question was mainly based on the fact that dev-iotlb and iotlb messages
> really look the same; it's not obvious then whether it would always matter a
> lot in a full emulation environment.
>
> One example is current vhost - vhost previously would work without dev-iotlb
> (ats=on), because trapping UNMAP would work for vhost too.  It's also
> simply because at least for VT-d the driver needs to send both one dev-iotlb
> and one (probably the same) iotlb message for a single page invalidation.  The
> dev-iotlb won't help a lot in full emulation here but instead slows things
> down a little bit (QEMU has full knowledge as long as it receives either of
> the messages).
>
> Thanks,
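
[Editorial sketch, not part of the thread] To make the UNMAP-versus-dev-iotlb distinction above concrete: on the QEMU side the choice shows up as the notifier flag a device registers against the vIOMMU memory region. The snippet below is a minimal sketch assuming QEMU's memory-API names from this era (iommu_notifier_init(), memory_region_register_iommu_notifier(), IOMMU_NOTIFIER_UNMAP / IOMMU_NOTIFIER_DEVIOTLB_UNMAP); it is not the patch under discussion, and the example_* identifiers are made up for illustration.

```c
/* Simplified sketch (not the actual vhost code or the patch in this thread):
 * a vhost-like device choosing between plain IOTLB UNMAP events and
 * device-IOTLB UNMAP events when registering an IOMMU notifier. */
#include "qemu/osdep.h"
#include "exec/memory.h"
#include "qapi/error.h"

static void example_iommu_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
{
    uint64_t iova = iotlb->iova;
    uint64_t size = iotlb->addr_mask + 1;

    /* Whether this event came from a plain IOTLB invalidation or a dev-iotlb
     * invalidation, QEMU already has everything it needs here (Peter's point
     * about the two messages looking the same).  The real vhost code forwards
     * this range to the backend at this point. */
    (void)iova;
    (void)size;
}

static void example_register_notifier(MemoryRegion *viommu_mr,
                                      IOMMUNotifier *n, bool ats_enabled)
{
    /* With ats=on the guest driver emits dev-iotlb invalidations, so the
     * device can subscribe to those; otherwise fall back to the plain UNMAP
     * events the vIOMMU generates when it tears down a mapping. */
    IOMMUNotifierFlag flags = ats_enabled ? IOMMU_NOTIFIER_DEVIOTLB_UNMAP
                                          : IOMMU_NOTIFIER_UNMAP;

    iommu_notifier_init(n, example_iommu_unmap_notify, flags,
                        0, HWADDR_MAX, 0 /* iommu_idx */);
    memory_region_register_iommu_notifier(viommu_mr, n, &error_fatal);
}
```

The real vhost code registers one notifier per IOMMU memory-region section and derives the invalidation range from that section; the sketch collapses this to the whole address space for brevity.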




