From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 4/5] block/io: fix bdrv_co_do_pwrite_zeroes head calculation
Date: Tue, 24 Mar 2020 12:22:46 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.2.1

14.03.2020 0:47, Eric Blake wrote:
> On 3/2/20 4:05 AM, Vladimir Sementsov-Ogievskiy wrote:
>> It's wrong to update head using num in this place, as num may be
>> reduced later in the iteration, and we would then have a wrong head
>> value on the next iteration.
>>
>> Instead, update head at the end of the iteration.
>>
>> Cc: address@hidden
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>   block/io.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)

> Offhand, I don't see how this fixes any bug....
> /me reads on


>> diff --git a/block/io.c b/block/io.c
>> index 75fd5600c2..c64566b4cf 100644
>> --- a/block/io.c
>> +++ b/block/io.c
>> @@ -1785,7 +1785,6 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
>>               * convenience, limit this request to max_transfer even if
>>               * we don't need to fall back to writes.  */
>>              num = MIN(MIN(bytes, max_transfer), alignment - head);
>> -            head = (head + num) % alignment;
>>              assert(num < max_write_zeroes);

> Here, we've asserted that if head was non-zero, num was already smaller than
> max_write_zeroes.  The rest of the loop does indeed have code that appears like
> it can reduce num, but that code is guarded:

>          /* limit request size */
>          if (num > max_write_zeroes) {
>              num = max_write_zeroes;
>          }
> ...
>          if (ret == -ENOTSUP && !(flags & BDRV_REQ_NO_FALLBACK)) {
>              /* Fall back to bounce buffer if write zeroes is unsupported */
>              BdrvRequestFlags write_flags = flags & ~BDRV_REQ_ZERO_WRITE;
>
>              if ((flags & BDRV_REQ_FUA) &&
>                  !(bs->supported_write_flags & BDRV_REQ_FUA)) {
>                  /* No need for bdrv_driver_pwrite() to do a fallback
>                   * flush on each chunk; use just one at the end */
>                  write_flags &= ~BDRV_REQ_FUA;
>                  need_flush = true;
>              }
>              num = MIN(num, max_transfer);

Now I think that this is impossible. If we updated head above, then num is
already less than or equal to max_transfer, so it is not reduced by this
line. Or am I missing something?

So the patch may be good as a refactoring, but it is not really needed for 5.0.
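
To make that bound concrete, here is a tiny standalone model of just the
head branch, with made-up limits (nothing taken from a real driver). It
asserts that while head is non-zero, the num we pick can never exceed
max_transfer, so the clamp in the fallback path is a no-op:

/* toy-head-bound.c: standalone model of the head branch only.
 * While head != 0, num = MIN(MIN(bytes, max_transfer), alignment - head),
 * so a later num = MIN(num, max_transfer) cannot change it. */
#include <assert.h>
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
    const int64_t alignment = 1 << 20;      /* hypothetical 1 MiB zero alignment */
    const int64_t max_transfer = 64 << 10;  /* hypothetical 64 KiB max_transfer */
    int64_t offset = 4096;                  /* misaligned start */
    int64_t bytes = 8 << 20;
    int64_t head = offset % alignment;

    while (bytes > 0 && head) {
        int64_t num = MIN(MIN(bytes, max_transfer), alignment - head);
        assert(MIN(num, max_transfer) == num);  /* fallback clamp is a no-op */
        head = (head + num) % alignment;
        offset += num;
        bytes -= num;
    }
    assert(offset % alignment == 0);  /* the head phase left us aligned */
    return 0;
}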


> Oh. Now I see.  If max_write_zeroes is > max_transfer, but we fall back to a
> bounce buffer, it is indeed possible that a misaligned request that forces a
> fallback to writes may require more than one write to get to the point where
> it is then using a buffer aligned to max_write_zeroes.
>
> Do we have an iotest provoking this, or is it theoretical?  With an iotest,
> this one is material for 5.0 even if the rest of the series misses soft freeze.

>>          } else if (tail && num > alignment) {
>>              /* Shorten the request to the last aligned sector.  */
>> @@ -1844,6 +1843,9 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
>>          offset += num;
>>          bytes -= num;
>> +        if (head) {
>> +            head = offset % alignment;
>> +        }

> Reviewed-by: Eric Blake <address@hidden>
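
For illustration of what moving the update to the loop end guards against,
here is a toy comparison of the two rules when num is artificially reduced
after the head branch (made-up numbers; whether num can really shrink there
in the current loop is exactly the question above):

/* toy-head-drift.c:
 *   old rule: head = (head + num) % alignment, using the planned num;
 *   new rule: head = offset % alignment, after offset has advanced. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
    const int64_t alignment = 1 << 16;  /* hypothetical 64 KiB alignment */
    int64_t offset = 512;               /* misaligned start */
    int64_t head_old = offset % alignment;
    int64_t head_new = offset % alignment;

    while (head_new) {
        int64_t num = MIN((int64_t)1 << 20, alignment - head_new);
        head_old = (head_old + num) % alignment;  /* old rule: planned num */
        num = MIN(num, 4096);                     /* artificial later reduction */
        offset += num;                            /* only num bytes got written */
        head_new = offset % alignment;            /* new rule: derived from offset */
        printf("offset=%-6" PRId64 " head_old=%-6" PRId64 " head_new=%-6" PRId64 "\n",
               offset, head_old, head_new);
    }
    /* head_new tracks reality; head_old claimed 0 on the very first
     * iteration even though offset was still misaligned. */
    assert(head_new == offset % alignment);
    return 0;
}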



--
Best regards,
Vladimir


