From: Peter Maydell
Subject: [Qemu-commits] [qemu/qemu] 8da796: iotests.py: Let wait_migration wait even more
Date: Tue, 28 Jan 2020 01:34:25 -0800

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: 8da7969bd7014f6de037d8ae132b40721944b186
      
https://github.com/qemu/qemu/commit/8da7969bd7014f6de037d8ae132b40721944b186
  Author: Max Reitz <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M tests/qemu-iotests/234
    M tests/qemu-iotests/262
    M tests/qemu-iotests/280
    M tests/qemu-iotests/iotests.py

  Log Message:
  -----------
  iotests.py: Let wait_migration wait even more

The "migration completed" event may be sent (on the source, to be
specific) before the migration is actually completed, so the VM runstate
will still be "finish-migrate" instead of "postmigrate".  So ask the
users of VM.wait_migration() to specify the final runstate they desire
and then poll the VM until it has reached that state.  (This should be
over very quickly, so busy polling is fine.)
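
For illustration, the polling step amounts to something like this (a
minimal sketch in iotests-style Python, not the actual iotests.py code;
it only assumes a VM object with a qmp() method):

  def wait_runstate(vm, expected_runstate):
      # The MIGRATION "completed" event can arrive while the source VM is
      # still in "finish-migrate", so keep querying until the desired
      # runstate is reported.  The window is tiny, hence busy polling.
      while vm.qmp('query-status')['return']['status'] != expected_runstate:
          pass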

Without this patch, I see intermittent failures in the new iotest 280
under high system load.  I have not yet seen such failures with other
iotests that use VM.wait_migration() and query-status afterwards, but
maybe they just occur even more rarely, or it is because they also wait
on the destination VM to be running.

Signed-off-by: Max Reitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 9442bebe6e67a5d038bbf2572b79e7b59d202a23
      
https://github.com/qemu/qemu/commit/9442bebe6e67a5d038bbf2572b79e7b59d202a23
  Author: Thomas Huth <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M tests/qemu-iotests/030
    M tests/qemu-iotests/040
    M tests/qemu-iotests/041
    M tests/qemu-iotests/245

  Log Message:
  -----------
  iotests: Add more "skip_if_unsupported" statements to the python tests

The python code already provides a way to skip tests if the
corresponding driver is not available in the qemu binary - use it
in more spots so that the tests are skipped instead of failing when
the driver has been disabled.

While we're at it, we can now also remove some of the old checks that
were using iotests.supports_quorum() - and which were apparently not
working as expected since the tests aborted instead of being skipped
when "quorum" was missing in the QEMU binary.

Signed-off-by: Thomas Huth <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 471ded690e19689018535e3f48480507ed073e22
      
https://github.com/qemu/qemu/commit/471ded690e19689018535e3f48480507ed073e22
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: fix coding style issues in drive_backup_prepare

Fix a couple of minor coding style issues in drive_backup_prepare.

Signed-off-by: Sergio Lopez <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: Kevin Wolf <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 2288ccfac96281c316db942d10e3f921c1373064
      
https://github.com/qemu/qemu/commit/2288ccfac96281c316db942d10e3f921c1373064
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M blockdev.c
    M tests/qemu-iotests/141.out
    M tests/qemu-iotests/185.out
    M tests/qemu-iotests/219
    M tests/qemu-iotests/219.out

  Log Message:
  -----------
  blockdev: unify qmp_drive_backup and drive-backup transaction paths

Issuing a drive-backup from qmp_drive_backup takes a slightly
different path than when it's issued from a transaction. In the code,
this is manifested as some redundancy between do_drive_backup() and
drive_backup_prepare().

This change unifies both paths, merging do_drive_backup() and
drive_backup_prepare(), and changing qmp_drive_backup() to create a
transaction instead of calling do_backup_common() directly.

As a side-effect, now qmp_drive_backup() is executed inside a drained
section, as it happens when creating a drive-backup transaction. This
change is visible from the user's perspective, as the job gets paused
and immediately resumed before starting the actual work.

Also fix tests 141, 185 and 219 to cope with the extra
JOB_STATUS_CHANGE lines.
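
For illustration, a QMP client may now observe an extra pause/resume
cycle right after starting the job, along these lines (a hedged
iotests-style sketch; the device name, target path and exact event
ordering are illustrative):

  vm.qmp('drive-backup', device='drive0', sync='full',
         target='/tmp/backup.qcow2', format='qcow2')
  # Extra transitions compared to before this change:
  vm.event_wait('JOB_STATUS_CHANGE', match={'data': {'status': 'paused'}})
  vm.event_wait('JOB_STATUS_CHANGE', match={'data': {'status': 'running'}})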

Signed-off-by: Sergio Lopez <address@hidden>
Reviewed-by: Kevin Wolf <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 5b7bfe515ecbd584b40ff6e41d2fd8b37c7d5139
      
https://github.com/qemu/qemu/commit/5b7bfe515ecbd584b40ff6e41d2fd8b37c7d5139
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: unify qmp_blockdev_backup and blockdev-backup transaction paths

Issuing a blockdev-backup from qmp_blockdev_backup takes a slightly
different path than when it's issued from a transaction. In the code,
this is manifested as some redundancy between do_blockdev_backup() and
blockdev_backup_prepare().

This change unifies both paths, merging do_blockdev_backup() and
blockdev_backup_prepare(), and changing qmp_blockdev_backup() to
create a transaction instead of calling do_backup_common() directly.

As a side-effect, now qmp_blockdev_backup() is executed inside a
drained section, as it happens when creating a blockdev-backup
transaction. This change is visible from the user's perspective, as
the job gets paused and immediately resumed before starting the actual
work.

Signed-off-by: Sergio Lopez <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Reviewed-by: Kevin Wolf <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 3ea67e08832775a28d0bd2795f01bc77e7ea1512
      
https://github.com/qemu/qemu/commit/3ea67e08832775a28d0bd2795f01bc77e7ea1512
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: honor bdrv_try_set_aio_context() context requirements

bdrv_try_set_aio_context() requires that the old context is held, and
the new context is not held. Fix all the occurrences where it's not
done this way.

Suggested-by: Max Reitz <address@hidden>
Signed-off-by: Sergio Lopez <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 0abf2581717a19d9749d5c2ff8acd0ac203452c2
      
https://github.com/qemu/qemu/commit/0abf2581717a19d9749d5c2ff8acd0ac203452c2
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M block/backup-top.c
    M block/backup.c

  Log Message:
  -----------
  block/backup-top: Don't acquire context while dropping top

All paths that lead to bdrv_backup_top_drop(), except for the call
from backup_clean(), imply that the BDS AioContext has already been
acquired, so doing it there too can potentially lead to QEMU hanging
on AIO_WAIT_WHILE().

An easy way to trigger this situation is to issue a transaction with
two actions, a proper and a bogus blockdev-backup, so that the second
one triggers a rollback. This results in a hang with a stack trace
like this one:

 #0  0x00007fb680c75016 in __GI_ppoll (fds=0x55e74580f7c0, nfds=1, 
timeout=<optimized out>,
     timeout@entry=0x0, sigmask=sigmask@entry=0x0) at 
../sysdeps/unix/sysv/linux/ppoll.c:39
 #1  0x000055e743386e09 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized 
out>, __fds=<optimized out>)
     at /usr/include/bits/poll2.h:77
 #2  0x000055e743386e09 in qemu_poll_ns
     (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at 
util/qemu-timer.c:336
 #3  0x000055e743388dc4 in aio_poll (ctx=0x55e7458925d0, 
blocking=blocking@entry=true)
     at util/aio-posix.c:669
 #4  0x000055e743305dea in bdrv_flush (bs=bs@entry=0x55e74593c0d0) at 
block/io.c:2878
 #5  0x000055e7432be58e in bdrv_close (bs=0x55e74593c0d0) at block.c:4017
 #6  0x000055e7432be58e in bdrv_delete (bs=<optimized out>) at block.c:4262
 #7  0x000055e7432be58e in bdrv_unref (bs=bs@entry=0x55e74593c0d0) at 
block.c:5644
 #8  0x000055e743316b9b in bdrv_backup_top_drop (bs=bs@entry=0x55e74593c0d0) at 
block/backup-top.c:273
 #9  0x000055e74331461f in backup_job_create
     (job_id=0x0, bs=bs@entry=0x55e7458d5820, 
target=target@entry=0x55e74589f640, speed=0, sync_mode=MIRROR_SYNC_MODE_FULL, 
sync_bitmap=sync_bitmap@entry=0x0, bitmap_mode=BITMAP_SYNC_MODE_ON_SUCCESS, 
compress=false, filter_node_name=0x0, on_source_error=BLOCKDEV_ON_ERROR_REPORT, 
on_target_error=BLOCKDEV_ON_ERROR_REPORT, creation_flags=0, cb=0x0, opaque=0x0, 
txn=0x0, errp=0x7ffddfd1efb0) at block/backup.c:478
 #10 0x000055e74315bc52 in do_backup_common
     (backup=backup@entry=0x55e746c066d0, bs=bs@entry=0x55e7458d5820, 
target_bs=target_bs@entry=0x55e74589f640, 
aio_context=aio_context@entry=0x55e7458a91e0, txn=txn@entry=0x0, 
errp=errp@entry=0x7ffddfd1efb0)
     at blockdev.c:3580
 #11 0x000055e74315c37c in do_blockdev_backup
     (backup=backup@entry=0x55e746c066d0, txn=0x0, 
errp=errp@entry=0x7ffddfd1efb0)
     at 
/usr/src/debug/qemu-kvm-4.2.0-2.module+el8.2.0+5135+ed3b2489.x86_64/./qapi/qapi-types-block-core.h:1492
 #12 0x000055e74315c449 in blockdev_backup_prepare (common=0x55e746a8de90, 
errp=0x7ffddfd1f018)
     at blockdev.c:1885
 #13 0x000055e743160152 in qmp_transaction
     (dev_list=<optimized out>, has_props=<optimized out>, 
props=0x55e7467fe2c0, errp=errp@entry=0x7ffddfd1f088) at blockdev.c:2340
 #14 0x000055e743287ff5 in qmp_marshal_transaction
     (args=<optimized out>, ret=<optimized out>, errp=0x7ffddfd1f0f8)
     at qapi/qapi-commands-transaction.c:44
 #15 0x000055e74333de6c in do_qmp_dispatch
     (errp=0x7ffddfd1f0f0, allow_oob=<optimized out>, request=<optimized out>, 
cmds=0x55e743c28d60 <qmp_commands>) at qapi/qmp-dispatch.c:132
 #16 0x000055e74333de6c in qmp_dispatch
     (cmds=0x55e743c28d60 <qmp_commands>, request=<optimized out>, 
allow_oob=<optimized out>)
     at qapi/qmp-dispatch.c:175
 #17 0x000055e74325c061 in monitor_qmp_dispatch (mon=0x55e745908030, 
req=<optimized out>)
     at monitor/qmp.c:145
 #18 0x000055e74325c6fa in monitor_qmp_bh_dispatcher (data=<optimized out>) at 
monitor/qmp.c:234
 #19 0x000055e743385866 in aio_bh_call (bh=0x55e745807ae0) at util/async.c:117
 #20 0x000055e743385866 in aio_bh_poll (ctx=ctx@entry=0x55e7458067a0) at 
util/async.c:117
 #21 0x000055e743388c54 in aio_dispatch (ctx=0x55e7458067a0) at 
util/aio-posix.c:459
 #22 0x000055e743385742 in aio_ctx_dispatch
     (source=<optimized out>, callback=<optimized out>, user_data=<optimized 
out>) at util/async.c:260
 #23 0x00007fb68543e67d in g_main_dispatch (context=0x55e745893a40) at 
gmain.c:3176
 #24 0x00007fb68543e67d in g_main_context_dispatch 
(context=context@entry=0x55e745893a40) at gmain.c:3829
 #25 0x000055e743387d08 in glib_pollfds_poll () at util/main-loop.c:219
 #26 0x000055e743387d08 in os_host_main_loop_wait (timeout=<optimized out>) at 
util/main-loop.c:242
 #27 0x000055e743387d08 in main_loop_wait (nonblocking=<optimized out>) at 
util/main-loop.c:518
 #28 0x000055e74316a3c1 in main_loop () at vl.c:1828
 #29 0x000055e743016a72 in main (argc=<optimized out>, argv=<optimized out>, 
envp=<optimized out>)
     at vl.c:4504
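
For reference, the reproducer described above can be written as an
iotests-style QMP transaction like this (a hedged sketch; node names
are made up, and the second action is deliberately bogus so that the
transaction rolls back):

  result = vm.qmp('transaction', actions=[
      {'type': 'blockdev-backup',
       'data': {'device': 'src0', 'target': 'tgt0', 'sync': 'full'}},
      {'type': 'blockdev-backup',
       'data': {'device': 'src0', 'target': 'no-such-node', 'sync': 'full'}},
  ])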

Fix this by not acquiring the AioContext there, and ensuring all paths
leading to it have it already acquired (backup_clean()).

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782111
Signed-off-by: Sergio Lopez <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 91005a495e228ebd7e5e173cd18f952450eef82d
      
https://github.com/qemu/qemu/commit/91005a495e228ebd7e5e173cd18f952450eef82d
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: Acquire AioContext on dirty bitmap functions

Dirty bitmap addition and removal functions do not acquire the BDS
AioContext, although they may call code that expects it to be
acquired.

This may trigger a crash with a stack trace like this one:

 #0  0x00007f0ef146370f in __GI_raise (sig=sig@entry=6)
     at ../sysdeps/unix/sysv/linux/raise.c:50
 #1  0x00007f0ef144db25 in __GI_abort () at abort.c:79
 #2  0x0000565022294dce in error_exit
     (err=<optimized out>, msg=msg@entry=0x56502243a730 <__func__.16350> 
"qemu_mutex_unlock_impl") at util/qemu-thread-posix.c:36
 #3  0x00005650222950ba in qemu_mutex_unlock_impl
     (mutex=mutex@entry=0x5650244b0240, file=file@entry=0x565022439adf 
"util/async.c", line=line@entry=526) at util/qemu-thread-posix.c:108
 #4  0x0000565022290029 in aio_context_release
     (ctx=ctx@entry=0x5650244b01e0) at util/async.c:526
 #5  0x000056502221cd08 in bdrv_can_store_new_dirty_bitmap
     (bs=bs@entry=0x5650244dc820, name=name@entry=0x56502481d360 "bitmap1", 
granularity=granularity@entry=65536, errp=errp@entry=0x7fff22831718)
     at block/dirty-bitmap.c:542
 #6  0x000056502206ae53 in qmp_block_dirty_bitmap_add
     (errp=0x7fff22831718, disabled=false, has_disabled=<optimized out>, 
persistent=<optimized out>, has_persistent=true, granularity=65536, 
has_granularity=<optimized out>, name=0x56502481d360 "bitmap1", node=<optimized 
out>) at blockdev.c:2894
 #7  0x000056502206ae53 in qmp_block_dirty_bitmap_add
     (node=<optimized out>, name=0x56502481d360 "bitmap1", 
has_granularity=<optimized out>, granularity=<optimized out>, 
has_persistent=true, persistent=<optimized out>, has_disabled=false, 
disabled=false, errp=0x7fff22831718) at blockdev.c:2856
 #8  0x00005650221847a3 in qmp_marshal_block_dirty_bitmap_add
     (args=<optimized out>, ret=<optimized out>, errp=0x7fff22831798)
     at qapi/qapi-commands-block-core.c:651
 #9  0x0000565022247e6c in do_qmp_dispatch
     (errp=0x7fff22831790, allow_oob=<optimized out>, request=<optimized out>, 
cmds=0x565022b32d60 <qmp_commands>) at qapi/qmp-dispatch.c:132
 #10 0x0000565022247e6c in qmp_dispatch
     (cmds=0x565022b32d60 <qmp_commands>, request=<optimized out>, 
allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
 #11 0x0000565022166061 in monitor_qmp_dispatch
     (mon=0x56502450faa0, req=<optimized out>) at monitor/qmp.c:145
 #12 0x00005650221666fa in monitor_qmp_bh_dispatcher
     (data=<optimized out>) at monitor/qmp.c:234
 #13 0x000056502228f866 in aio_bh_call (bh=0x56502440eae0)
     at util/async.c:117
 #14 0x000056502228f866 in aio_bh_poll (ctx=ctx@entry=0x56502440d7a0)
     at util/async.c:117
 #15 0x0000565022292c54 in aio_dispatch (ctx=0x56502440d7a0)
     at util/aio-posix.c:459
 #16 0x000056502228f742 in aio_ctx_dispatch
     (source=<optimized out>, callback=<optimized out>, user_data=<optimized 
out>) at util/async.c:260
 #17 0x00007f0ef5ce667d in g_main_dispatch (context=0x56502449aa40)
     at gmain.c:3176
 #18 0x00007f0ef5ce667d in g_main_context_dispatch
     (context=context@entry=0x56502449aa40) at gmain.c:3829
 #19 0x0000565022291d08 in glib_pollfds_poll () at util/main-loop.c:219
 #20 0x0000565022291d08 in os_host_main_loop_wait
     (timeout=<optimized out>) at util/main-loop.c:242
 #21 0x0000565022291d08 in main_loop_wait (nonblocking=<optimized out>)
     at util/main-loop.c:518
 #22 0x00005650220743c1 in main_loop () at vl.c:1828
 #23 0x0000565021f20a72 in main
     (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
     at vl.c:4504

Fix this by acquiring the AioContext in qmp_block_dirty_bitmap_add()
and qmp_block_dirty_bitmap_remove().

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782175
Signed-off-by: Sergio Lopez <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 377410f6fb4f6b0d26d4a028c20766fae05de17e
      
https://github.com/qemu/qemu/commit/377410f6fb4f6b0d26d4a028c20766fae05de17e
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M blockdev.c

  Log Message:
  -----------
  blockdev: Return bs to the proper context on snapshot abort

external_snapshot_abort() calls bdrv_set_backing_hd(), which returns
state->old_bs to the main AioContext, as it's intended to be used when
the BDS is going to be released. As that's not the case when aborting
an external snapshot, return it to the AioContext it was in before the
call.

This issue can be triggered by issuing a transaction with two actions,
a proper blockdev-snapshot-sync and a bogus one, so that the second one
triggers a transaction abort. This results in a crash with a stack
trace like this one:

 #0  0x00007fa1048b28df in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:50
 #1  0x00007fa10489ccf5 in __GI_abort () at abort.c:79
 #2  0x00007fa10489cbc9 in __assert_fail_base
     (fmt=0x7fa104a03300 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", 
assertion=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == 
bdrv_get_aio_context(new_bs)", file=0x557224014d30 "block.c", line=2240, 
function=<optimized out>) at assert.c:92
 #3  0x00007fa1048aae96 in __GI___assert_fail
     (assertion=assertion@entry=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == 
bdrv_get_aio_context(new_bs)", file=file@entry=0x557224014d30 "block.c", 
line=line@entry=2240, function=function@entry=0x5572240b5d60 
<__PRETTY_FUNCTION__.31620> "bdrv_replace_child_noperm") at assert.c:101
 #4  0x0000557223e631f8 in bdrv_replace_child_noperm (child=0x557225b9c980, 
new_bs=new_bs@entry=0x557225c42e40) at block.c:2240
 #5  0x0000557223e68be7 in bdrv_replace_node (from=0x557226951a60, 
to=0x557225c42e40, errp=0x5572247d6138 <error_abort>) at block.c:4196
 #6  0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at 
blockdev.c:1731
 #7  0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at 
blockdev.c:1717
 #8  0x0000557223d09013 in qmp_transaction (dev_list=<optimized out>, 
has_props=<optimized out>, props=0x557225cc7d70, 
errp=errp@entry=0x7ffe704c0c98) at blockdev.c:2360
 #9  0x0000557223e32085 in qmp_marshal_transaction (args=<optimized out>, 
ret=<optimized out>, errp=0x7ffe704c0d08) at qapi/qapi-commands-transaction.c:44
 #10 0x0000557223ee798c in do_qmp_dispatch (errp=0x7ffe704c0d00, 
allow_oob=<optimized out>, request=<optimized out>, cmds=0x5572247d3cc0 
<qmp_commands>) at qapi/qmp-dispatch.c:132
 #11 0x0000557223ee798c in qmp_dispatch (cmds=0x5572247d3cc0 <qmp_commands>, 
request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
 #12 0x0000557223e06141 in monitor_qmp_dispatch (mon=0x557225c69ff0, 
req=<optimized out>) at monitor/qmp.c:120
 #13 0x0000557223e0678a in monitor_qmp_bh_dispatcher (data=<optimized out>) at 
monitor/qmp.c:209
 #14 0x0000557223f2f366 in aio_bh_call (bh=0x557225b9dc60) at util/async.c:117
 #15 0x0000557223f2f366 in aio_bh_poll (ctx=ctx@entry=0x557225b9c840) at 
util/async.c:117
 #16 0x0000557223f32754 in aio_dispatch (ctx=0x557225b9c840) at 
util/aio-posix.c:459
 #17 0x0000557223f2f242 in aio_ctx_dispatch (source=<optimized out>, 
callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
 #18 0x00007fa10913467d in g_main_dispatch (context=0x557225c28e80) at 
gmain.c:3176
 #19 0x00007fa10913467d in g_main_context_dispatch 
(context=context@entry=0x557225c28e80) at gmain.c:3829
 #20 0x0000557223f31808 in glib_pollfds_poll () at util/main-loop.c:219
 #21 0x0000557223f31808 in os_host_main_loop_wait (timeout=<optimized out>) at 
util/main-loop.c:242
 #22 0x0000557223f31808 in main_loop_wait (nonblocking=<optimized out>) at 
util/main-loop.c:518
 #23 0x0000557223d13201 in main_loop () at vl.c:1828
 #24 0x0000557223bbfb82 in main (argc=<optimized out>, argv=<optimized out>, 
envp=<optimized out>) at vl.c:4504

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1779036
Signed-off-by: Sergio Lopez <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 9b8c59e7610b9c5315ef093d801843dbe8debfac
      
https://github.com/qemu/qemu/commit/9b8c59e7610b9c5315ef093d801843dbe8debfac
  Author: Sergio Lopez <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    A tests/qemu-iotests/281
    A tests/qemu-iotests/281.out
    M tests/qemu-iotests/group

  Log Message:
  -----------
  iotests: Test handling of AioContexts with some blockdev actions

Includes the following tests:

 - Adding a dirty bitmap.
   * RHBZ: 1782175

 - Starting a drive-mirror to an NBD-backed target.
   * RHBZ: 1746217, 1773517

 - Aborting an external snapshot transaction.
   * RHBZ: 1779036

 - Aborting a blockdev backup transaction.
   * RHBZ: 1782111

For each one of them, a VM with a number of disks running in an
IOThread AioContext is used.
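
For illustration, the basic setup of such a test looks roughly like
this (a hedged iotests-style sketch; object/device ids, file names and
the choice of virtio-blk are illustrative):

  vm = iotests.VM()
  vm.add_object('iothread,id=iothread0')
  vm.add_blockdev('driver=qcow2,node-name=disk0,'
                  'file.driver=file,file.filename=disk0.qcow2')
  vm.add_device('virtio-blk,drive=disk0,iothread=iothread0')
  vm.launch()
  # The node now lives in the iothread's AioContext, so commands such as
  # the following exercise the AioContext handling under test:
  result = vm.qmp('block-dirty-bitmap-add', node='disk0', name='bitmap0')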

Signed-off-by: Sergio Lopez <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: fb574de81bfdd71fdb0315105a3a7761efb68395
      
https://github.com/qemu/qemu/commit/fb574de81bfdd71fdb0315105a3a7761efb68395
  Author: Eiichi Tsukata <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M block/backup-top.c

  Log Message:
  -----------
  block/backup: fix memory leak in bdrv_backup_top_append()

bdrv_open_driver() allocates bs->opaque according to drv->instance_size.
There is no need to allocate it and overwrite opaque in
bdrv_backup_top_append().

Reproducer:

  $ QTEST_QEMU_BINARY=./x86_64-softmmu/qemu-system-x86_64 valgrind -q 
--leak-check=full tests/test-replication -p /replication/secondary/start
  ==29792== 24 bytes in 1 blocks are definitely lost in loss record 52 of 226
  ==29792==    at 0x483AB1A: calloc (vg_replace_malloc.c:762)
  ==29792==    by 0x4B07CE0: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.6000.7)
  ==29792==    by 0x12BAB9: bdrv_open_driver (block.c:1289)
  ==29792==    by 0x12BEA9: bdrv_new_open_driver (block.c:1359)
  ==29792==    by 0x1D15CB: bdrv_backup_top_append (backup-top.c:190)
  ==29792==    by 0x1CC11A: backup_job_create (backup.c:439)
  ==29792==    by 0x1CD542: replication_start (replication.c:544)
  ==29792==    by 0x1401B9: replication_start_all (replication.c:52)
  ==29792==    by 0x128B50: test_secondary_start (test-replication.c:427)
  ...

Fixes: 7df7868b9640 ("block: introduce backup-top filter driver")
Signed-off-by: Eiichi Tsukata <address@hidden>
Reviewed-by: Vladimir Sementsov-Ogievskiy <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 693fd2acdf14dd86c0bf852610f1c2cca80a74dc
      
https://github.com/qemu/qemu/commit/693fd2acdf14dd86c0bf852610f1c2cca80a74dc
  Author: Felipe Franciosi <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M block/iscsi.c

  Log Message:
  -----------
  iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)

When querying an iSCSI server for the provisioning status of blocks (via
GET LBA STATUS), Qemu only validates that the response descriptor zero's
LBA matches the one requested. Given the SCSI spec allows servers to
respond with the status of blocks beyond the end of the LUN, Qemu may
have its heap corrupted by clearing/setting too many bits at the end of
its allocmap for the LUN.

A malicious guest in control of the iSCSI server could carefully program
Qemu's heap (by selectively setting the bitmap) and then smash it.

This limits the number of bits that iscsi_co_block_status() will try to
update in the allocmap so it can't overflow the bitmap.
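
The idea of the cap, expressed in pure Python for illustration (the
actual fix is C code in block/iscsi.c):

  def cap_block_count(requested_blocks, reported_blocks):
      # The server's reply may cover more blocks than were asked about
      # (possibly even past the end of the LUN); never update allocmap
      # bits for more blocks than were requested.
      return min(reported_blocks, requested_blocks)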

Fixes: CVE-2020-1711
Cc: address@hidden
Signed-off-by: Felipe Franciosi <address@hidden>
Signed-off-by: Peter Turschmid <address@hidden>
Signed-off-by: Raphael Norwitz <address@hidden>
Signed-off-by: Kevin Wolf <address@hidden>


  Commit: 5fbf1d56c24018772e900a40a0955175ff82f35c
      
https://github.com/qemu/qemu/commit/5fbf1d56c24018772e900a40a0955175ff82f35c
  Author: Kevin Wolf <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M block/iscsi.c

  Log Message:
  -----------
  iscsi: Don't access non-existent scsi_lba_status_descriptor

In iscsi_co_block_status(), we may have received num_descriptors == 0
from the iscsi server. Therefore, we can't unconditionally access
lbas->descriptors[0]. Add the missing check.
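
In Python terms the added guard amounts to this (a hedged sketch; the
real check is C code against libiscsi's reply structure):

  def first_descriptor(lbas):
      # A server may return zero descriptors; only dereference
      # descriptors[0] when the list is non-empty.
      if lbas is None or not lbas.descriptors:
          return None
      return lbas.descriptors[0]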

Signed-off-by: Kevin Wolf <address@hidden>
Reviewed-by: Felipe Franciosi <address@hidden>
Reviewed-by: Philippe Mathieu-Daudé <address@hidden>
Reviewed-by: John Snow <address@hidden>
Reviewed-by: Peter Lieven <address@hidden>


  Commit: 750fe5989f9efffce86368c6feac013f8b7b433c
      
https://github.com/qemu/qemu/commit/750fe5989f9efffce86368c6feac013f8b7b433c
  Author: Peter Maydell <address@hidden>
  Date:   2020-01-27 (Mon, 27 Jan 2020)

  Changed paths:
    M block/backup-top.c
    M block/backup.c
    M block/iscsi.c
    M blockdev.c
    M tests/qemu-iotests/030
    M tests/qemu-iotests/040
    M tests/qemu-iotests/041
    M tests/qemu-iotests/141.out
    M tests/qemu-iotests/185.out
    M tests/qemu-iotests/219
    M tests/qemu-iotests/219.out
    M tests/qemu-iotests/234
    M tests/qemu-iotests/245
    M tests/qemu-iotests/262
    M tests/qemu-iotests/280
    A tests/qemu-iotests/281
    A tests/qemu-iotests/281.out
    M tests/qemu-iotests/group
    M tests/qemu-iotests/iotests.py

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging

Block layer patches:

- iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)
- AioContext fixes in QMP commands for backup and bitmaps
- iotests fixes

# gpg: Signature made Mon 27 Jan 2020 17:49:58 GMT
# gpg:                using RSA key 7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <address@hidden>" [full]
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74  56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream:
  iscsi: Don't access non-existent scsi_lba_status_descriptor
  iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711)
  block/backup: fix memory leak in bdrv_backup_top_append()
  iotests: Test handling of AioContexts with some blockdev actions
  blockdev: Return bs to the proper context on snapshot abort
  blockdev: Acquire AioContext on dirty bitmap functions
  block/backup-top: Don't acquire context while dropping top
  blockdev: honor bdrv_try_set_aio_context() context requirements
  blockdev: unify qmp_blockdev_backup and blockdev-backup transaction paths
  blockdev: unify qmp_drive_backup and drive-backup transaction paths
  blockdev: fix coding style issues in drive_backup_prepare
  iotests: Add more "skip_if_unsupported" statements to the python tests
  iotests.py: Let wait_migration wait even more

Signed-off-by: Peter Maydell <address@hidden>


Compare: https://github.com/qemu/qemu/compare/105b07f1ba46...750fe5989f9e


