
Re: [Qemu-devel] [PATCH v2] thread-pool: fix deadlock when callbacks depends on each other


From: Marcin Gibuła
Subject: Re: [Qemu-devel] [PATCH v2] thread-pool: fix deadlock when callbacks depends on each other
Date: Wed, 04 Jun 2014 12:31:16 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.4.0

On 04.06.2014 12:01, Stefan Hajnoczi wrote:
> On Mon, Jun 02, 2014 at 09:15:27AM +0200, Marcin Gibuła wrote:
>> When two coroutines submit I/O and the first coroutine depends on the
>> second to complete (by calling bdrv_drain_all), deadlock may occur.
>
> bdrv_drain_all() is a very heavy-weight operation.  Coroutines should
> avoid it if possible.  Please post the file/line/function where this
> call was made; perhaps there is a better way to wait for the other
> coroutine.  This isn't a fix for this bug, but it's a cleanup.

As in original bug report:

#4  0x00007f699c095c0a in bdrv_drain_all () at /var/tmp/portage/app-emulation/qemu-2.0.0_rc2/work/qemu-2.0.0-rc2/block.c:1805
#5  0x00007f699c09c87e in bdrv_close (address@hidden) at /var/tmp/portage/app-emulation/qemu-2.0.0_rc2/work/qemu-2.0.0-rc2/block.c:1695
#6  0x00007f699c09c5fa in bdrv_delete (bs=0x7f699f0bc520) at /var/tmp/portage/app-emulation/qemu-2.0.0_rc2/work/qemu-2.0.0-rc2/block.c:1978
#7  bdrv_unref (bs=0x7f699f0bc520) at /var/tmp/portage/app-emulation/qemu-2.0.0_rc2/work/qemu-2.0.0-rc2/block.c:5198
#8  0x00007f699c09c812 in bdrv_drop_intermediate (address@hidden, address@hidden, address@hidden) at /var/tmp/portage/app-emulation/qemu-2.0.0_rc2/work/qemu-2.0.0-rc2/block.c:2567
#9  0x00007f699c0a1963 in commit_run (opaque=0x7f699f17dcc0) at /var/tmp/portage/app-emulation/qemu-2.0.0_rc2/work/qemu-2.0.0-rc2/block/commit.c:144
#10 0x00007f699c0e0dca in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at /var/tmp/portage/app-emulation/qemu-2.0.0_rc2/work/qemu-2.0.0-rc2/coroutine-ucontext.c:118


mirror_run probably has this as well; I didn't check the others.
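
To make the failure mode concrete, here is a minimal, self-contained C sketch of the scenario described in the quoted commit message. It is not QEMU code: pool_completion_handler(), aio_poll_once() and drain_until() are hypothetical stand-ins for the thread pool's completion notification, one event-loop iteration and bdrv_drain_all(), and the single completion_scheduled flag is a simplified model of the completion notifier. It shows that if both requests finish before the completion handler runs, and the handler clears the notification before invoking callbacks, a callback that enters a nested event loop to wait for the other request finds nothing pending and spins forever.

/*
 * Minimal, self-contained model of the deadlock -- NOT QEMU code.
 * All names are hypothetical stand-ins (see the note above).
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NREQ 2

struct req {
    bool done;        /* worker finished before the event loop ran */
    bool cb_called;   /* completion callback has been invoked */
};

static struct req reqs[NREQ];
static bool completion_scheduled;   /* models the completion notifier */

static void drain_until(struct req *r);

/* Callback of request 0: waits for request 1, like a coroutine that
 * calls bdrv_drain_all() while another request is still in flight. */
static void req0_cb(void) { drain_until(&reqs[1]); }
static void req1_cb(void) { }

static void (*callbacks[NREQ])(void) = { req0_cb, req1_cb };

/* Deliver callbacks for finished requests.  The notification is
 * cleared on entry and NOT re-armed before each callback runs. */
static void pool_completion_handler(void)
{
    completion_scheduled = false;
    for (int i = 0; i < NREQ; i++) {
        if (reqs[i].done && !reqs[i].cb_called) {
            reqs[i].cb_called = true;
            callbacks[i]();   /* may enter a nested event loop */
        }
    }
}

/* One event-loop iteration: returns true if it did any work. */
static bool aio_poll_once(void)
{
    if (completion_scheduled) {
        pool_completion_handler();
        return true;
    }
    return false;
}

/* Models bdrv_drain_all(): loop until the given request has completed. */
static void drain_until(struct req *r)
{
    int idle = 0;
    while (!r->cb_called) {
        if (!aio_poll_once() && ++idle > 3) {
            /* The real code blocks here forever; bail out for the demo. */
            fprintf(stderr, "deadlock: nested loop has nothing to run, "
                    "request 1's callback can never be delivered\n");
            exit(1);
        }
    }
}

int main(void)
{
    /* Both requests finish before the completion handler gets to run. */
    reqs[0].done = reqs[1].done = true;
    completion_scheduled = true;

    aio_poll_once();   /* delivers req0's callback, which then drains */

    printf("never reached with this handler\n");
    return 0;
}

In this model, setting completion_scheduled again before invoking each callback lets the nested loop keep delivering the remaining completions; my understanding is that the patch applies the same idea to the thread pool's notifier.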

--
mg


