From: David Hildenbrand
Subject: [PATCH v1] virtio-mem: fix division by zero in virtio_mem_activate_memslots_to_plug()
Date: Mon, 23 Oct 2023 13:13:41 +0200

When running with "dynamic-memslots=off", we enter
virtio_mem_activate_memslots_to_plug() only to return immediately,
because "vmem->dynamic_memslots == false". However, the compiler might
not optimize away the calculation of start_idx and end_idx, which
divides by vmem->memslot_size. In such a configuration, the memslot
size is 0 and we get a division by zero:

    (qemu) qom-set vmem0 requested-size 3G
    (qemu) q35.sh: line 38: 622940 Floating point exception(core dumped)
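
For reference, the problematic pattern looks roughly like this (a
simplified sketch, not the exact hw/virtio/virtio-mem.c code):

    static void virtio_mem_activate_memslots_to_plug(VirtIOMEM *vmem,
                                                     uint64_t offset,
                                                     uint64_t size)
    {
        /* These initializers run before any early return below. */
        const unsigned int start_idx = offset / vmem->memslot_size;
        const unsigned int end_idx = (offset + size + vmem->memslot_size - 1) /
                                     vmem->memslot_size;

        if (!vmem->dynamic_memslots) {
            /* Too late: with memslot_size == 0, the divisions above trap. */
            return;
        }
        /* ... activate memslots [start_idx, end_idx) ... */
    }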

The same is true for virtio_mem_deactivate_unplugged_memslots();
however, we never actually reach that code without a prior
virtio_mem_activate_memslots_to_plug() call.

Let's fix it by calling these functions only when running with
"dynamic-memslots=on", and by asserting that in the functions
themselves.

This was found when using a debug build of QEMU.

Reported-by: Mario Casquero <mcasquer@redhat.com>
Fixes: 177f9b1ee464 ("virtio-mem: Expose device memory dynamically via multiple memslots if enabled")
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/virtio/virtio-mem.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 9dc3c61b5a..be4b0b364f 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -525,9 +525,7 @@ static void virtio_mem_activate_memslots_to_plug(VirtIOMEM *vmem,
                                  vmem->memslot_size;
     unsigned int idx;
 
-    if (!vmem->dynamic_memslots) {
-        return;
-    }
+    assert(vmem->dynamic_memslots);
 
     /* Activate all involved memslots in a single transaction. */
     memory_region_transaction_begin();
@@ -547,9 +545,7 @@ static void virtio_mem_deactivate_unplugged_memslots(VirtIOMEM *vmem,
                                  vmem->memslot_size;
     unsigned int idx;
 
-    if (!vmem->dynamic_memslots) {
-        return;
-    }
+    assert(vmem->dynamic_memslots);
 
     /* Deactivate all memslots with unplugged blocks in a single transaction. */
     memory_region_transaction_begin();
@@ -598,7 +594,9 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
         virtio_mem_notify_unplug(vmem, offset, size);
         virtio_mem_set_range_unplugged(vmem, start_gpa, size);
         /* Deactivate completely unplugged memslots after updating the state. */
-        virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
+        if (vmem->dynamic_memslots) {
+            virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
+        }
         return 0;
     }
 
@@ -635,9 +633,11 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
          * blocks we are plugging here. The following notification will inform
          * registered listeners about the blocks we're plugging.
          */
-        virtio_mem_activate_memslots_to_plug(vmem, offset, size);
+        if (vmem->dynamic_memslots) {
+            virtio_mem_activate_memslots_to_plug(vmem, offset, size);
+        }
         ret = virtio_mem_notify_plug(vmem, offset, size);
-        if (ret) {
+        if (ret && vmem->dynamic_memslots) {
             virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
         }
     }
@@ -749,7 +749,9 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem)
         notifier_list_notify(&vmem->size_change_notifiers, &vmem->size);
 
         /* Deactivate all memslots after updating the state. */
-        virtio_mem_deactivate_unplugged_memslots(vmem, 0, region_size);
+        if (vmem->dynamic_memslots) {
+            virtio_mem_deactivate_unplugged_memslots(vmem, 0, region_size);
+        }
     }
 
     trace_virtio_mem_unplugged_all();
-- 
2.41.0
