From: Jason Wang
Subject: Re: [RFC v3 1/1] memory: Skip bad range assertion if notifier supports arbitrary masks
Date: Wed, 19 Aug 2020 15:15:26 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0


On 2020/8/18 10:24 PM, Eugenio Perez Martin wrote:
On Wed, Aug 12, 2020 at 10:49 AM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
On Wed, Aug 12, 2020 at 4:24 AM Jason Wang <jasowang@redhat.com> wrote:
On 2020/8/12 1:55 AM, Eugenio Pérez wrote:
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
   hw/virtio/vhost.c     |  2 +-
   include/exec/memory.h |  2 ++
   softmmu/memory.c      | 10 ++++++++--
   3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 1a1384e7a6..e74ad9e09b 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -729,7 +729,7 @@ static void vhost_iommu_region_add(MemoryListener *listener,
       iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
                                                      MEMTXATTRS_UNSPECIFIED);
       iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
-                        IOMMU_NOTIFIER_UNMAP,
+                        IOMMU_NOTIFIER_UNMAP | IOMMU_NOTIFIER_IOTLB,
I think we can safely drop IOMMU_NOTIFIER_UNMAP here since device IOTLB
is sufficient.

Btw, IOMMU_NOTIFIER_IOTLB is kind of confusing, maybe something like
IOMMU_NOTIFIER_DEVIOTLB.

Got it, will change.

                           section->offset_within_region,
                           int128_get64(end),
                           iommu_idx);
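
I.e. the registration would end up as something like this (just a
sketch, assuming the IOMMU_NOTIFIER_DEVIOTLB name sticks and that
dropping IOMMU_NOTIFIER_UNMAP is safe because the device IOTLB
invalidations are sufficient for vhost):

    /* Sketch only: flag name still under discussion */
    iommu_notifier_init(&iommu->n, vhost_iommu_unmap_notify,
                        IOMMU_NOTIFIER_DEVIOTLB,
                        section->offset_within_region,
                        int128_get64(end),
                        iommu_idx);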
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 307e527835..4d94c1e984 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -87,6 +87,8 @@ typedef enum {
       IOMMU_NOTIFIER_UNMAP = 0x1,
       /* Notify entry changes (newly created entries) */
       IOMMU_NOTIFIER_MAP = 0x2,
+    /* Notify changes on IOTLB entries */
+    IOMMU_NOTIFIER_IOTLB = 0x04,
   } IOMMUNotifierFlag;

   #define IOMMU_NOTIFIER_ALL (IOMMU_NOTIFIER_MAP | IOMMU_NOTIFIER_UNMAP)
diff --git a/softmmu/memory.c b/softmmu/memory.c
index af25987518..e2c5f6d0e7 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1895,6 +1895,7 @@ void memory_region_notify_one(IOMMUNotifier *notifier,
(We probably need a better name for this function; at least something
like "memory_region_iommu_notify_one".)

Ok, will change.

   {
       IOMMUNotifierFlag request_flags;
       hwaddr entry_end = entry->iova + entry->addr_mask;
+    IOMMUTLBEntry tmp = *entry;

       /*
        * Skip the notification if the notification does not overlap
@@ -1904,7 +1905,12 @@ void memory_region_notify_one(IOMMUNotifier *notifier,
           return;
       }

-    assert(entry->iova >= notifier->start && entry_end <= notifier->end);
+    if (notifier->notifier_flags & IOMMU_NOTIFIER_IOTLB) {
+        tmp.iova = MAX(tmp.iova, notifier->start);
+        tmp.addr_mask = MIN(entry_end, notifier->end) - tmp.iova;
Any reason for doing such a re-calculation here? A comment would be helpful.

It was proposed by Peter; I understand it as limiting the
address+range we pass to the notifier. Although vhost seems to handle
a wider entry as long as (notifier->start, notifier->end) is contained
in its range, a future notifier might not.
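
For example (hypothetical numbers): with a notifier registered for
[0x1000, 0x1fff] and an entry with iova=0x0 and addr_mask=0xffff, the
clamping yields tmp.iova=0x1000 and tmp.addr_mask=0xfff, so the
notifier is only told about the [0x1000, 0x1fff] part of the
invalidation.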


Yes. Actually, I got confused after reading the code: is notifier->start an IOVA or a GPA?

In vfio.c, we did:

        iommu_notifier_init(&giommu->n, vfio_iommu_map_notify,
                            IOMMU_NOTIFIER_ALL,
                            section->offset_within_region,
                            int128_get64(llend),
                            iommu_idx);

So it looks to me like the start and end are GPAs, but the assertion above checks them against the IOVA, which seems wrong ...

Thanks



It could be done as iommu_entry_crop(IOMMUTLBEntry *entry, const
IOMMUNotifier *notifier) though.
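
Something like the following, as a rough (untested) sketch that just
moves the MAX/MIN clamping above into the helper; it assumes the
caller has already checked that the entry overlaps the notifier range,
as memory_region_notify_one does before the assertion:

    /* Rough sketch: crop an IOMMU TLB entry to the range the notifier
     * registered for. Caller must have checked that the ranges overlap. */
    static void iommu_entry_crop(IOMMUTLBEntry *entry,
                                 const IOMMUNotifier *notifier)
    {
        hwaddr entry_end = entry->iova + entry->addr_mask;

        entry->iova = MAX(entry->iova, notifier->start);
        entry->addr_mask = MIN(entry_end, notifier->end) - entry->iova;
    }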



