From: Stefan Hajnoczi
Subject: [PATCH for-8.0] block/export: fix assume_graph_lock() assertion failure
Date: Mon, 27 Mar 2023 17:19:21 -0400

When I/O request parameters are validated for virtio-blk exports like
vhost-user-blk and vduse-blk, we call blk_get_geometry() from a coroutine.
Since commit 8ab8140a04cf marked bdrv_co_refresh_total_sectors() and its
callers GRAPH_RDLOCK, this hits an assume_graph_lock() assertion failure.

Use blk_co_nb_sectors() instead and mark virtio_blk_sect_range_ok() with
coroutine_fn.

This assertion failure is triggered by any I/O to a vhost-user-blk
export.
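
For reference, the two block-backend helpers involved look roughly like this
(a sketch based on QEMU's include/sysemu/block-backend-io.h as of the 8.0
cycle; the comments paraphrase the problem described above rather than
quoting the headers):

    /* Non-coroutine helper: returns the sector count through a pointer.
     * After commit 8ab8140a04cf its total-sectors refresh path is
     * GRAPH_RDLOCK, so calling it from a coroutine that does not hold the
     * graph lock trips the assume_graph_lock() assertion. */
    void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr);

    /* Coroutine-safe variant: returns the sector count directly and can be
     * called from coroutine_fn code such as the virtio-blk export handler. */
    int64_t coroutine_fn blk_co_nb_sectors(BlockBackend *blk);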

Fixes: 8ab8140a04cf ("block: Mark bdrv_co_refresh_total_sectors() and callers GRAPH_RDLOCK")
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/export/virtio-blk-handler.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/block/export/virtio-blk-handler.c b/block/export/virtio-blk-handler.c
index 313666e8ab..2f729a9ce2 100644
--- a/block/export/virtio-blk-handler.c
+++ b/block/export/virtio-blk-handler.c
@@ -22,8 +22,9 @@ struct virtio_blk_inhdr {
     unsigned char status;
 };
 
-static bool virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
-                                     uint64_t sector, size_t size)
+static bool coroutine_fn
+virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
+                         uint64_t sector, size_t size)
 {
     uint64_t nb_sectors;
     uint64_t total_sectors;
@@ -41,7 +42,7 @@ static bool virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
     if ((sector << VIRTIO_BLK_SECTOR_BITS) % block_size) {
         return false;
     }
-    blk_get_geometry(blk, &total_sectors);
+    total_sectors = blk_co_nb_sectors(blk);
     if (sector > total_sectors || nb_sectors > total_sectors - sector) {
         return false;
     }
-- 
2.39.2



