Subject: [Qemu-commits] [qemu/qemu] 1321a0: qemu-nbd: Change default cache mode to writeback
From: Peter Maydell
Date: Tue, 28 Sep 2021 07:44:57 -0700
Branch: refs/heads/staging
Home: https://github.com/qemu/qemu
Commit: 1321a0b4bf4852effd00bec296a110e30da1e403
https://github.com/qemu/qemu/commit/1321a0b4bf4852effd00bec296a110e30da1e403
Author: Nir Soffer <nirsof@gmail.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M docs/tools/qemu-nbd.rst
M qemu-nbd.c
Log Message:
-----------
qemu-nbd: Change default cache mode to writeback
Both qemu and qemu-img use writeback cache mode by default, which is
already documented in qemu(1). qemu-nbd uses writethrough cache mode by
default, and the default cache mode is not documented.
According to the qemu-nbd(8):
--cache=CACHE
The cache mode to be used with the file. See the
documentation of the emulator's -drive cache=... option for
allowed values.
qemu(1) says:
The default mode is cache=writeback.
So users have no reason to assume that qemu-nbd is using writethrough
cache mode. The only hint is the painfully slow writing when using the
defaults.
Looking in git history, it seems that qemu used writethrough in the past
to support broken guests that did not flush data properly, or could not
flush due to limitations in qemu. But qemu-nbd clients can use
NBD_CMD_FLUSH to flush data, so using writethrough does not help anyone.
Change the default cache mode to writeback, and document the default and
available values properly in the online help and manual.
With this change, converting an image via qemu-nbd is 3.5 times faster.
$ qemu-img create dst.img 50g
$ qemu-nbd -t -f raw -k /tmp/nbd.sock dst.img
Before this change:
$ hyperfine -r3 "./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock"
Benchmark #1: ./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock
Time (mean ± σ): 83.639 s ± 5.970 s [User: 2.733 s, System: 6.112 s]
Range (min … max): 76.749 s … 87.245 s    3 runs
After this change:
$ hyperfine -r3 "./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock"
Benchmark #1: ./qemu-img convert -p -f raw -O raw -T none -W fedora34.img nbd+unix:///?socket=/tmp/nbd.sock
Time (mean ± σ): 23.522 s ± 0.433 s [User: 2.083 s, System: 5.475 s]
Range (min … max): 23.234 s … 24.019 s    3 runs
Users can avoid the issue by using --cache=writeback[1] but the defaults
should give good performance for the common use case.
[1] https://bugzilla.redhat.com/1990656
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-Id: <20210813205519.50518-1-nsoffer@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 005c798fbd83b95103d1b88cb6ff5b1110d259f4
https://github.com/qemu/qemu/commit/005c798fbd83b95103d1b88cb6ff5b1110d259f4
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/io.c
Log Message:
-----------
block/io: bring request check to bdrv_co_(read,write)v_vmstate
Only the qcow2 driver supports vmstate.
In qcow2 these requests go through the .bdrv_co_p{read,write}v_part
handlers.
So let's do our basic request check in the generic vmstate
handlers.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210903102807.27127-2-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 4451721424455576aa2a06fcfdce40c8df0e7163
https://github.com/qemu/qemu/commit/4451721424455576aa2a06fcfdce40c8df0e7163
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/io.c
M block/qcow2.c
M include/block/block_int.h
Log Message:
-----------
qcow2: check request on vmstate save/load path
We modify the request by adding an offset to the vmstate. Let's check the
modified request. It will help us to safely move .bdrv_co_preadv_part
and .bdrv_co_pwritev_part to an int64_t type for offset and bytes.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210903102807.27127-3-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: c8efca757e9d69aba08d8953b82262c5840e857b
https://github.com/qemu/qemu/commit/c8efca757e9d69aba08d8953b82262c5840e857b
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/blkdebug.c
M block/blklogwrites.c
M block/blkreplay.c
M block/blkverify.c
M block/bochs.c
M block/cloop.c
M block/commit.c
M block/copy-before-write.c
M block/copy-on-read.c
M block/crypto.c
M block/curl.c
M block/dmg.c
M block/file-posix.c
M block/file-win32.c
M block/filter-compress.c
M block/mirror.c
M block/nbd.c
M block/nfs.c
M block/null.c
M block/nvme.c
M block/preallocate.c
M block/qcow.c
M block/qcow2-cluster.c
M block/qcow2.c
M block/quorum.c
M block/raw-format.c
M block/rbd.c
M block/throttle.c
M block/vdi.c
M block/vmdk.c
M block/vpc.c
M block/vvfat.c
M include/block/block_int.h
M tests/unit/test-bdrv-drain.c
M tests/unit/test-block-iothread.c
Log Message:
-----------
block: use int64_t instead of uint64_t in driver read handlers
We are generally moving to int64_t for both the offset and bytes
parameters on all I/O paths.
The main motivation is the realization of a 64-bit write_zeroes
operation, for fast zeroing of large disk chunks, up to the whole disk.
We chose a signed type to be consistent with off_t (which is signed) and
to allow a signed return type (where a negative value means an
error).
So, convert the driver read handlers' parameters that are already 64
bits wide to the signed type.
While here, also convert the flags parameter to BdrvRequestFlags.
Now let's consider all callers. A simple
git grep '\->bdrv_\(aio\|co\)_preadv\(_part\)\?'
shows that there are three callers of the driver function:
bdrv_driver_preadv() in block/io.c, passes int64_t, checked by
bdrv_check_qiov_request() to be non-negative.
qcow2_load_vmstate() does bdrv_check_qiov_request().
do_perform_cow_read() has a uint64_t argument. And a lot of things in
the qcow2 driver are uint64_t, so converting it is a big job. But we
must not work with requests that don't satisfy
bdrv_check_qiov_request(), so let's just assert it here.
Still, the functions may be called directly, not only by drv->...
Let's check:
git grep '\.bdrv_\(aio\|co\)_preadv\(_part\)\?\s*=' | \
awk '{print $4}' | sed 's/,//' | sed 's/&//' | sort | uniq | \
while read func; do git grep "$func(" | \
grep -v "$func(BlockDriverState"; done
The only one such caller:
QEMUIOVector qiov = QEMU_IOVEC_INIT_BUF(qiov, &data, 1);
...
ret = bdrv_replace_test_co_preadv(bs, 0, 1, &qiov, 0);
in tests/unit/test-bdrv-drain.c, and it's OK obviously.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210903102807.27127-4-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: fix typos]
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 50e14f7dfc95a9bbd5330229d12c8246fdba3be9
https://github.com/qemu/qemu/commit/50e14f7dfc95a9bbd5330229d12c8246fdba3be9
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/blkdebug.c
M block/blklogwrites.c
M block/blkreplay.c
M block/blkverify.c
M block/copy-before-write.c
M block/copy-on-read.c
M block/crypto.c
M block/file-posix.c
M block/file-win32.c
M block/filter-compress.c
M block/io.c
M block/mirror.c
M block/nbd.c
M block/nfs.c
M block/null.c
M block/nvme.c
M block/preallocate.c
M block/qcow.c
M block/qcow2.c
M block/quorum.c
M block/raw-format.c
M block/rbd.c
M block/throttle.c
M block/trace-events
M block/vdi.c
M block/vmdk.c
M block/vpc.c
M block/vvfat.c
M include/block/block_int.h
M tests/unit/test-block-iothread.c
Log Message:
-----------
block: use int64_t instead of uint64_t in driver write handlers
We are generally moving to int64_t for both the offset and bytes
parameters on all I/O paths.
The main motivation is the realization of a 64-bit write_zeroes
operation, for fast zeroing of large disk chunks, up to the whole disk.
We chose a signed type to be consistent with off_t (which is signed) and
to allow a signed return type (where a negative value means an
error).
So, convert the driver write handlers' parameters that are already 64
bits wide to the signed type.
While here, also convert the flags parameter to BdrvRequestFlags.
Now let's consider all callers. A simple
git grep '\->bdrv_\(aio\|co\)_pwritev\(_part\)\?'
shows that there are three callers of the driver function:
bdrv_driver_pwritev() and bdrv_driver_pwritev_compressed() in
block/io.c, both pass int64_t, checked by bdrv_check_qiov_request() to
be non-negative.
qcow2_save_vmstate() does bdrv_check_qiov_request().
Still, the functions may be called directly, not only by drv->...
Let's check:
git grep '\.bdrv_\(aio\|co\)_pwritev\(_part\)\?\s*=' | \
awk '{print $4}' | sed 's/,//' | sed 's/&//' | sort | uniq | \
while read func; do git grep "$func(" | \
grep -v "$func(BlockDriverState"; done
shows several callers:
qcow2:
qcow2_co_truncate() writes at most up to @offset, which is checked in
the generic qcow2_co_truncate() by bdrv_check_request().
qcow2_co_pwritev_compressed_task() passes the request (or part of the
request) that already went through the normal write path, so it
should be OK
qcow:
qcow_co_pwritev_compressed() passes int64_t; it's updated by this patch
quorum:
quorum_co_pwrite_zeroes() passes int64_t and int - OK
throttle:
throttle_co_pwritev_compressed() passes int64_t; it's updated by this
patch
vmdk:
vmdk_co_pwritev_compressed() passes int64_t; it's updated by this
patch
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210903102807.27127-5-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: bb85890ea7988921d93af2cfa40b1717618e1b59
https://github.com/qemu/qemu/commit/bb85890ea7988921d93af2cfa40b1717618e1b59
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/file-posix.c
M block/iscsi.c
M block/qcow2.c
M block/raw-format.c
M include/block/block_int.h
Log Message:
-----------
block: use int64_t instead of uint64_t in copy_range driver handlers
We are generally moving to int64_t for both the offset and bytes
parameters on all I/O paths.
The main motivation is the realization of a 64-bit write_zeroes
operation, for fast zeroing of large disk chunks, up to the whole disk.
We chose a signed type to be consistent with off_t (which is signed) and
to allow a signed return type (where a negative value means an
error).
So, convert the driver copy_range handlers' parameters that are already
64 bits wide to the signed type.
Now let's consider all callers. A simple
git grep '\->bdrv_co_copy_range'
shows the only caller:
bdrv_co_copy_range_internal(), which does bdrv_check_request32(),
so everything is OK.
Still, the functions may be called directly, not only by drv->...
Let's check:
git grep '\.bdrv_co_copy_range_\(from\|to\)\s*=' | \
awk '{print $4}' | sed 's/,//' | sed 's/&//' | sort | uniq | \
while read func; do git grep "$func(" | \
grep -v "$func(BlockDriverState"; done
shows no more callers. So, we are done.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210903102807.27127-6-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 1d9ffd51e054965799f7832f4cbb16daab4a139c
https://github.com/qemu/qemu/commit/1d9ffd51e054965799f7832f4cbb16daab4a139c
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/io.c
M include/block/block_int.h
Log Message:
-----------
block: make BlockLimits::max_pwrite_zeroes 64bit
We are going to support 64 bit write-zeroes requests. Now update the
limit variable. It's absolutely safe: the variable is set in some
drivers, and used in bdrv_co_do_pwrite_zeroes().
Also update the max_write_zeroes variable in bdrv_co_do_pwrite_zeroes(),
so that bdrv_co_do_pwrite_zeroes() is now prepared for 64bit requests.
The remaining logic, including the num, offset and bytes variables,
already supports 64bit requests.
So the only thing that prevents 64 bit requests is limiting
max_write_zeroes variable to INT_MAX in bdrv_co_do_pwrite_zeroes().
We'll drop this limitation after updating all block drivers.
Ah, we also have bdrv_check_request32() in bdrv_co_pwritev_part(). It
will be modified to do bdrv_check_request() for write-zeroes path.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210903102807.27127-7-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: a47df0fb74625c33c4bdc858781983d90e5861f3
https://github.com/qemu/qemu/commit/a47df0fb74625c33c4bdc858781983d90e5861f3
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/blkdebug.c
M block/blklogwrites.c
M block/blkreplay.c
M block/copy-before-write.c
M block/copy-on-read.c
M block/file-posix.c
M block/filter-compress.c
M block/gluster.c
M block/iscsi.c
M block/mirror.c
M block/nbd.c
M block/nvme.c
M block/preallocate.c
M block/qcow2.c
M block/qed.c
M block/quorum.c
M block/raw-format.c
M block/rbd.c
M block/throttle.c
M block/trace-events
M block/vmdk.c
M include/block/block_int.h
Log Message:
-----------
block: use int64_t instead of int in driver write_zeroes handlers
We are generally moving to int64_t for both the offset and bytes
parameters on all I/O paths.
The main motivation is the realization of a 64-bit write_zeroes
operation, for fast zeroing of large disk chunks, up to the whole disk.
We chose a signed type to be consistent with off_t (which is signed) and
to allow a signed return type (where a negative value means an
error).
So, convert the driver write_zeroes handlers' bytes parameter to int64_t.
The only caller of all the updated functions is
bdrv_co_do_pwrite_zeroes(). bdrv_co_do_pwrite_zeroes() itself is of
course OK with widening the callee parameter type. Also,
bdrv_co_do_pwrite_zeroes()'s max_write_zeroes is limited to INT_MAX. So
the updated functions are all safe; they will not get a "bytes" value
larger than before.
Still, let's look through all the updated functions, and add assertions
to the ones which are actually unprepared for values larger than
INT_MAX. For these drivers also set an explicit max_pwrite_zeroes limit.
Let's go:
blkdebug: calculations can't overflow, thanks to
bdrv_check_qiov_request() in generic layer. rule_check() and
bdrv_co_pwrite_zeroes() both have 64bit argument.
blklogwrites: pass to blk_log_writes_co_log() with 64bit argument.
blkreplay, copy-on-read, filter-compress: pass to
bdrv_co_pwrite_zeroes() which is OK
copy-before-write: Calls cbw_do_copy_before_write() and
bdrv_co_pwrite_zeroes, both have 64bit argument.
file-posix: both handler calls raw_do_pwrite_zeroes, which is updated.
In raw_do_pwrite_zeroes() calculations are OK due to
bdrv_check_qiov_request(), bytes go to RawPosixAIOData::aio_nbytes
which is uint64_t.
Check also where that uint64_t gets handed:
handle_aiocb_write_zeroes_block() passes a uint64_t[2] to
ioctl(BLKZEROOUT), handle_aiocb_write_zeroes() calls do_fallocate()
which takes off_t (and we compile to always have 64-bit off_t), as
does handle_aiocb_write_zeroes_unmap. All look safe.
gluster: bytes go to GlusterAIOCB::size which is int64_t and to
glfs_zerofill_async works with off_t.
iscsi: Aha, here we deal with iscsi_writesame16_task(), which has a
uint32_t num_blocks argument, and iscsi_writesame10_task(), which has
a uint16_t argument. Make comments, add assertions and clarify the
max_pwrite_zeroes calculation.
The iscsi_allocmap_() functions already have an int64_t argument.
is_byte_request_lun_aligned() is simple to update, so do it.
mirror_top: pass to bdrv_mirror_top_do_write which has uint64_t
argument
nbd: Aha, here we have protocol limitation, and NBDRequest::len is
uint32_t. max_pwrite_zeroes is cleanly set to 32bit value, so we are
OK for now.
nvme: Again, protocol limitation. And no inherent limit for
write-zeroes at all. But from code that calculates cdw12 it's obvious
that we do have limit and alignment. Let's clarify it. Also,
obviously the code is not prepared to handle bytes=0. Let's handle
this case too.
trace events already 64bit
preallocate: pass to handle_write() and bdrv_co_pwrite_zeroes(), both
64bit.
rbd: pass to qemu_rbd_start_co() which is 64bit.
qcow2: offset + bytes and alignment still works good (thanks to
bdrv_check_qiov_request()), so tail calculation is OK
qcow2_subcluster_zeroize() has 64bit argument, should be OK
trace events updated
qed: qed_co_request wants int nb_sectors. Also in code we have size_t
used for request length which may be 32bit. So, let's just keep
INT_MAX as a limit (aligning it down to pwrite_zeroes_alignment) and
don't care.
raw-format: Is OK. raw_adjust_offset and bdrv_co_pwrite_zeroes are both
64bit.
throttle: Both throttle_group_co_io_limits_intercept() and
bdrv_co_pwrite_zeroes() are 64bit.
vmdk: pass to vmdk_pwritev which is 64bit
quorum: pass to quorum_co_pwritev() which is 64bit
Hooray!
At this point all block drivers are prepared to support 64bit
write-zero requests, or have explicitly set max_pwrite_zeroes.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210903102807.27127-8-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: use <= rather than < in assertions relying on max_pwrite_zeroes]
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 46836772a967e8af3eafea2058f96e9ad0663921
https://github.com/qemu/qemu/commit/46836772a967e8af3eafea2058f96e9ad0663921
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/io.c
Log Message:
-----------
block/io: allow 64bit write-zeroes requests
Now that all drivers are updated by the previous commit, we can drop the
last two limiters on the write-zeroes path: INT_MAX in
bdrv_co_do_pwrite_zeroes() and bdrv_check_request32() in
bdrv_co_pwritev_part().
Now everything is prepared for implementing incredibly cool and fast
big-write-zeroes in NBD and qcow2. And any other driver which wants it
of course.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210903102807.27127-9-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: b5c5996f052b2bfebc297a065377d9aa4c82f3cc
https://github.com/qemu/qemu/commit/b5c5996f052b2bfebc297a065377d9aa4c82f3cc
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/io.c
M include/block/block_int.h
Log Message:
-----------
block: make BlockLimits::max_pdiscard 64bit
We are going to support 64 bit discard requests. Now update the
limit variable. It's absolutely safe: the variable is set in some
drivers, and used in bdrv_co_pdiscard().
Also update the max_pdiscard variable in bdrv_co_pdiscard(), so that
bdrv_co_pdiscard() is now prepared for 64bit requests. The remaining
logic, including the num, offset and bytes variables, already
supports 64bit requests.
So the only thing that prevents 64 bit requests is limiting
max_pdiscard variable to INT_MAX in bdrv_co_pdiscard().
We'll drop this limitation after updating all block drivers.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210903102807.27127-10-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 00701df818ff90f5b7b8bf5e7b773177e2d1c8e7
https://github.com/qemu/qemu/commit/00701df818ff90f5b7b8bf5e7b773177e2d1c8e7
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/blkdebug.c
M block/blklogwrites.c
M block/blkreplay.c
M block/copy-before-write.c
M block/copy-on-read.c
M block/file-posix.c
M block/filter-compress.c
M block/gluster.c
M block/iscsi.c
M block/mirror.c
M block/nbd.c
M block/nvme.c
M block/preallocate.c
M block/qcow2.c
M block/raw-format.c
M block/rbd.c
M block/throttle.c
M block/trace-events
M include/block/block_int.h
M tests/unit/test-block-iothread.c
Log Message:
-----------
block: use int64_t instead of int in driver discard handlers
We are generally moving to int64_t for both the offset and bytes
parameters on all I/O paths.
The main motivation is the realization of a 64-bit write_zeroes
operation, for fast zeroing of large disk chunks, up to the whole disk.
We chose a signed type to be consistent with off_t (which is signed) and
to allow a signed return type (where a negative value means an
error).
So, convert the driver discard handlers' bytes parameter to int64_t.
The only caller of all the updated functions is bdrv_co_pdiscard() in
block/io.c. It is already prepared to work with 64bit requests, but
passes at most max(bs->bl.max_pdiscard, INT_MAX) to the driver.
Let's look at all updated functions:
blkdebug: all calculations are still OK, thanks to
bdrv_check_qiov_request().
both rule_check and bdrv_co_pdiscard are 64bit
blklogwrites: pass to blk_log_writes_co_log which is 64bit
blkreplay, copy-on-read, filter-compress: pass to bdrv_co_pdiscard, OK
copy-before-write: pass to bdrv_co_pdiscard which is 64bit and to
cbw_do_copy_before_write which is 64bit
file-posix: one handler calls raw_account_discard(), which is 64bit, and
both handlers call raw_do_pdiscard(). Update raw_do_pdiscard(), which
passes bytes to RawPosixAIOData::aio_nbytes, which is 64bit (and calls
raw_account_discard())
gluster: somehow, third argument of glfs_discard_async is size_t.
Let's set max_pdiscard accordingly.
iscsi: iscsi_allocmap_set_invalid is 64bit,
!is_byte_request_lun_aligned is 64bit.
list.num is uint32_t. Let's clarify max_pdiscard and
pdiscard_alignment.
mirror_top: pass to bdrv_mirror_top_do_write() which is
64bit
nbd: protocol limitation. max_pdiscard is already set strictly enough;
keep it as is for now.
nvme: buf.nlb is uint32_t and we do shift. So, add corresponding limits
to nvme_refresh_limits().
preallocate: pass to bdrv_co_pdiscard() which is 64bit.
rbd: pass to qemu_rbd_start_co() which is 64bit.
qcow2: calculations are still OK, thanks to bdrv_check_qiov_request(),
qcow2_cluster_discard() is 64bit.
raw-format: raw_adjust_offset() is 64bit, bdrv_co_pdiscard too.
throttle: pass to bdrv_co_pdiscard() which is 64bit and to
throttle_group_co_io_limits_intercept() which is 64bit as well.
test-block-iothread: bytes argument is unused
Great! Now all drivers are prepared to handle 64bit discard requests,
or else have explicit max_pdiscard limits.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210903102807.27127-11-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: e534c978b001f749792c5b5c2c3ab9b87a7b6eb6
https://github.com/qemu/qemu/commit/e534c978b001f749792c5b5c2c3ab9b87a7b6eb6
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/io.c
Log Message:
-----------
block/io: allow 64bit discard requests
Now that all drivers are updated by the previous commit, we can drop
the last limiter on pdiscard path: INT_MAX in bdrv_co_pdiscard().
Now everything is prepared for implementing incredibly cool and fast
big-discard requests in NBD and qcow2. And any other driver which wants
it of course.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210903102807.27127-12-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: e936eb1d97c99da3bbb00f9a66eacb2621eaa4dc
https://github.com/qemu/qemu/commit/e936eb1d97c99da3bbb00f9a66eacb2621eaa4dc
Author: Eric Blake <eblake@redhat.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M nbd/server.c
Log Message:
-----------
nbd/server: Allow LIST_META_CONTEXT without STRUCTURED_REPLY
The NBD protocol just relaxed the requirements on
NBD_OPT_LIST_META_CONTEXT:
https://github.com/NetworkBlockDevice/nbd/commit/13a4e33a87
Since listing is not stateful (unlike SET_META_CONTEXT), we don't care
if a client asks for meta contexts without first requesting structured
replies. Well-behaved clients will still ask for structured reply
first (if for no other reason than for back-compat to older servers),
but that's no reason to avoid this change.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210907173505.1499709-1-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Commit: 254f9a861985a8624d717bd3da3f3fabd879dabf
https://github.com/qemu/qemu/commit/254f9a861985a8624d717bd3da3f3fabd879dabf
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M nbd/client-connection.c
Log Message:
-----------
nbd/client-connection: nbd_co_establish_connection(): fix non set errp
When we don't have a connection and blocking is false, we return NULL
but don't set errp. That's wrong.
We have two paths for calling nbd_co_establish_connection():
1. nbd_open() -> nbd_do_establish_connection() -> ...
but that will never set blocking=false
2. nbd_reconnect_attempt() -> nbd_co_do_establish_connection() -> ...
but that uses errp=NULL
So we are safe with our wrong errp policy in
nbd_co_establish_connection(). Still, let's fix it.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210906190654.183421-2-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: d4d88fe3c75a50cac23518b1aae657492170b0cd
https://github.com/qemu/qemu/commit/d4d88fe3c75a50cac23518b1aae657492170b0cd
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/nbd.c
Log Message:
-----------
block/nbd: nbd_channel_error() shutdown channel unconditionally
Don't rely on connection being totally broken in case of -EIO. Safer
and more correct is to just shut down the channel anyway, since we
change the state and plan on reconnecting.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210902103805.25686-2-vsementsov@virtuozzo.com>
[eblake: grammar tweaks]
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: e60e2c1c0db472c5f160ed1f4859ccdfa810d2dd
https://github.com/qemu/qemu/commit/e60e2c1c0db472c5f160ed1f4859ccdfa810d2dd
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/nbd.c
Log Message:
-----------
block/nbd: move nbd_recv_coroutines_wake_all() up
We are going to use it in nbd_channel_error(), so move it up. Note that
we are also going to refactor and rename
nbd_recv_coroutines_wake_all() in the future anyway, so keeping it where
it is and adding a forward declaration doesn't make real sense.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20210902103805.25686-3-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: dd371152b559cc991efbce343c223fe33d317e71
https://github.com/qemu/qemu/commit/dd371152b559cc991efbce343c223fe33d317e71
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/nbd.c
Log Message:
-----------
block/nbd: refactor nbd_recv_coroutines_wake_all()
Split out nbd_recv_coroutine_wake_one(), as it will be used
separately.
Rename the function and add the possibility to wake only the first
sleeping coroutine found.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210902103805.25686-4-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: grammar tweak]
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: b229369a7a44a0b4fcbb8f73060fe11d7cf8e241
https://github.com/qemu/qemu/commit/b229369a7a44a0b4fcbb8f73060fe11d7cf8e241
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/nbd.c
M nbd/client.c
Log Message:
-----------
block/nbd: drop connection_co
OK, that's a big rewrite of the logic.
Pre-patch we have an always-running coroutine, connection_co. It does
reply receiving and reconnecting. And it leads to a lot of difficult
and unobvious code around drained sections and context switching. We
also abuse the bs->in_flight counter, which is increased for
connection_co and temporarily decreased at points where we want to allow
a drained section to begin. One of these places is in another file: in
nbd_read_eof() in nbd/client.c.
We also cancel reconnect and requests waiting for reconnect on drained
begin, which is not correct. This patch fixes that.
Let's finally drop this always-running coroutine and go another way:
do both reconnect and receiving in the request coroutines.
The detailed list of changes below (in the sequence of diff hunks).
1. receiving coroutines are woken directly from nbd_channel_error, when
we change s->state
2. nbd_co_establish_connection_cancel(): we don't have drain_begin now,
and in nbd_teardown_connection() all requests should already be
finished (and reconnect is done from a request coroutine). So
nbd_co_establish_connection_cancel() is called from
nbd_cancel_in_flight() (to cancel the request that is doing
nbd_co_establish_connection()) and from reconnect_delay_timer_cb()
(previously we didn't need it, as the reconnect delay should only
cancel active requests, not the reconnection itself). But now
reconnection itself is done in a separate thread (we now call
nbd_client_connection_enable_retry() in nbd_open()), and we need to
cancel the requests that wait in nbd_co_establish_connection().
2A. We now receive headers in the request coroutine. But we should also
dispatch replies for other pending requests. So
nbd_connection_entry() is turned into nbd_receive_replies(), which
does reply dispatching while it receives other requests' headers, and
returns when it receives the requested header.
3. All the old stuff around drained sections and context switching is
dropped. In detail:
- we don't need to move connection_co to a new aio context, as we
don't have connection_co anymore
- we don't have a fake "request" for connection_co (the extra increase
of in_flight), so we don't need to care about it in drain_begin/end
- we don't stop reconnection during a drained section anymore. This
means that drain_begin may wait for a long time (up to
reconnect_delay). But that's an improvement and more correct
behavior, see below[*]
4. In nbd_teardown_connection() we don't have to wait for
connection_co, as it is dropped. And cleanup for s->ioc and nbd_yank
is moved here from removed connection_co.
5. In nbd_co_do_establish_connection() we now should handle
NBD_CLIENT_CONNECTING_NOWAIT: if new request comes when we are in
NBD_CLIENT_CONNECTING_NOWAIT, it still should call
nbd_co_establish_connection() (who knows, maybe the connection was
already established by another thread in the background). But we
shouldn't wait: if nbd_co_establish_connection() can't return new
channel immediately the request should fail (we are in
NBD_CLIENT_CONNECTING_NOWAIT state).
6. nbd_reconnect_attempt() is simplified: it's now easier to wait for
other requests in the caller, so here we just assert that fact.
Also delay time is now initialized here: we can easily detect first
attempt and start a timer.
7. nbd_co_reconnect_loop() is dropped, we don't need it. Reconnect
retries are fully handled by the thread (nbd/client-connection.c), the
delay timer is initialized in nbd_reconnect_attempt(), and we don't
have to bother with s->drained and friends. nbd_reconnect_attempt() is
now called from nbd_co_send_request().
8. nbd_connection_entry is dropped: reconnect is now handled by
nbd_co_send_request(), and receiving replies is now handled by
nbd_receive_replies() - all of it from request coroutines.
9. So, welcome the new nbd_receive_replies(), called from request
coroutines, which receives reply headers instead of
nbd_connection_entry(). As with sending requests, only one coroutine
may receive at a time, so we introduce receive_mutex, which is locked
around nbd_receive_reply(). It also protects some related fields.
Still, a full audit of thread safety in the NBD driver is a separate
task. The new function waits until a reply with the specified handle
has been received, and works rather simply:
Under the mutex:
- if the current handle is 0, receive by hand. If another handle is
received, switch to that request's coroutine, release the mutex and
yield. Otherwise return success
- if the current handle equals the requested handle, we are done
- otherwise, release the mutex and yield
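The three cases above can be modelled as a small decision function. Again a
hypothetical Python sketch for illustration, not the real C implementation:

```python
def receive_step(current_handle, requested_handle):
    """Decide what a coroutine waiting for `requested_handle` does while
    holding receive_mutex, mirroring the three cases listed above.
    A current_handle of 0 means no reply header is in flight yet."""
    if current_handle == 0:
        return "receive"  # nothing in flight: read the next reply ourselves
    if current_handle == requested_handle:
        return "done"     # the reply already read is ours
    return "yield"        # reply belongs to another request: release and wait
```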
10. In nbd_co_send_request() we now do nbd_reconnect_attempt() if
needed. Also, while waiting in the free_sema queue we now wait for one
of two conditions:
- connectED, with in_flight < MAX_NBD_REQUESTS (so we can start a new
request)
- connectING, with in_flight == 0, so we can call
nbd_reconnect_attempt()
This logic is protected by s->send_mutex.
Also, on failure we no longer have to care about the removed
s->connection_co.
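The two wake-up conditions can be written as one predicate. A hypothetical
Python model (the MAX_NBD_REQUESTS value is assumed for illustration):

```python
MAX_NBD_REQUESTS = 16  # assumed limit, for illustration only

def may_leave_free_sema(connected, in_flight):
    """Model of the two conditions in item 10: proceed either when connected
    with a free request slot, or when (re)connecting with no request in
    flight, so the caller may run the reconnect attempt itself."""
    if connected:
        return in_flight < MAX_NBD_REQUESTS
    return in_flight == 0
```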
11. nbd_co_do_receive_one_chunk(): instead of yielding and waiting for
s->connection_co, we now just call the new nbd_receive_replies().
12. nbd_co_receive_one_chunk(): the place where s->reply.handle
becomes 0, which means that handling of the whole reply is finished.
Here we need to wake one of the coroutines sleeping in
nbd_receive_replies(). If none are sleeping, do nothing. That's
another behavior change: we no longer have an endless recv() during
idle time. This may be considered a drawback; if so, it can be fixed
later.
13. nbd_reply_chunk_iter_receive(): no need to care about the removed
connection_co; just wake the in_flight waiters.
14. Don't create connection_co; enable retry in the connection thread
(we no longer have our own reconnect loop).
15. We now need to add a nbd_co_establish_connection_cancel() call in
nbd_cancel_in_flight(), to cancel the request that is doing a
connection attempt.
[*] OK, we no longer cancel reconnect on drain begin. That's correct:
the reconnect feature makes long-running requests possible (up to
reconnect_delay). Still, drain begin is not a reason to kill long
requests; we should wait for them.
This also means that we can again reproduce the deadlock described
in 8c517de24a8a1dcbeb54e7e12b5b0fda42a90ace.
Why we are OK with it:
1. This is no longer an unbreakable deadlock: the VM is unfrozen
after the reconnect delay. Actually, 8c517de24a8a1dc fixed a bug in
the NBD logic that was not described in that commit and led to a
permanent deadlock: nobody woke the free_sema queue, but drain_begin
can't finish while there is a request in the free_sema queue. Now we
have a reconnect delay timer that works well.
2. It's not a problem of the NBD driver but of the IDE code, because
it does drain_begin under the global mutex; the problem doesn't
reproduce when using SCSI instead of IDE.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210902103805.25686-5-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[eblake: grammar and comment tweaks]
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 9625a9a1f0248cc3c8143c9813ea4a63b14b7bb8
https://github.com/qemu/qemu/commit/9625a9a1f0248cc3c8143c9813ea4a63b14b7bb8
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M block/nbd.c
Log Message:
-----------
block/nbd: check that received handle is valid
If there is no active request waiting for the received handle, we
should report an error.
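The check this commit adds can be sketched as a predicate over the in-flight
request table. A hypothetical Python model, not the actual block/nbd.c code:

```python
def reply_handle_is_valid(requests, handle):
    """A received reply handle is valid only if some in-flight request is
    still waiting for it; any other handle should be treated as a server
    error rather than silently ignored. `requests` holds the handle of each
    in-flight request, or None for a free slot."""
    return any(r is not None and r == handle for r in requests)

# Example: handles 1 and 3 are in flight; a reply for handle 2 is bogus.
valid = reply_handle_is_valid([1, None, 3], 3)
bogus = reply_handle_is_valid([1, None, 3], 2)
```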
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20210902103805.25686-6-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: 3cb015ad05c7c1e07e0deb356cd20e6cd765c0ea
https://github.com/qemu/qemu/commit/3cb015ad05c7c1e07e0deb356cd20e6cd765c0ea
Author: Richard W.M. Jones <rjones@redhat.com>
Date: 2021-09-27 (Mon, 27 Sep 2021)
Changed paths:
M configure
M meson.build
M meson_options.txt
M qemu-nbd.c
M tests/docker/dockerfiles/centos8.docker
M tests/docker/dockerfiles/fedora.docker
M tests/docker/dockerfiles/opensuse-leap.docker
M tests/docker/dockerfiles/ubuntu1804.docker
M tests/docker/dockerfiles/ubuntu2004.docker
Log Message:
-----------
nbd/server: Add --selinux-label option
Under SELinux, Unix domain sockets have two labels. One is on the
disk and can be set with commands such as chcon(1). There is a
different label stored in memory (called the process label). This can
only be set by the process creating the socket. When using SELinux +
SVirt and wanting qemu to be able to connect to a qemu-nbd instance,
you must set both labels correctly first.
For qemu-nbd the options to set the second label are awkward. You can
create the socket in a wrapper program and then exec into qemu-nbd.
Or you could try something with LD_PRELOAD.
This commit adds the ability to set the label straightforwardly on the
command line, via the new --selinux-label flag. (The name of the flag
is the same as the equivalent nbdkit option.)
A worked example showing how to use the new option can be found in
this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1984938
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1984938
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Message-Id: <20210723103303.1731437-2-rjones@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
[eblake: Fail if option is used when not compiled in]
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit: e75f2b239da9ecc8dd630f8f206759ee77923dab
https://github.com/qemu/qemu/commit/e75f2b239da9ecc8dd630f8f206759ee77923dab
Author: Peter Maydell <peter.maydell@linaro.org>
Date: 2021-09-28 (Tue, 28 Sep 2021)
Changed paths:
M block/blkdebug.c
M block/blklogwrites.c
M block/blkreplay.c
M block/blkverify.c
M block/bochs.c
M block/cloop.c
M block/commit.c
M block/copy-before-write.c
M block/copy-on-read.c
M block/crypto.c
M block/curl.c
M block/dmg.c
M block/file-posix.c
M block/file-win32.c
M block/filter-compress.c
M block/gluster.c
M block/io.c
M block/iscsi.c
M block/mirror.c
M block/nbd.c
M block/nfs.c
M block/null.c
M block/nvme.c
M block/preallocate.c
M block/qcow.c
M block/qcow2-cluster.c
M block/qcow2.c
M block/qed.c
M block/quorum.c
M block/raw-format.c
M block/rbd.c
M block/throttle.c
M block/trace-events
M block/vdi.c
M block/vmdk.c
M block/vpc.c
M block/vvfat.c
M configure
M docs/tools/qemu-nbd.rst
M include/block/block_int.h
M meson.build
M meson_options.txt
M nbd/client-connection.c
M nbd/client.c
M nbd/server.c
M qemu-nbd.c
M tests/docker/dockerfiles/centos8.docker
M tests/docker/dockerfiles/fedora.docker
M tests/docker/dockerfiles/opensuse-leap.docker
M tests/docker/dockerfiles/ubuntu1804.docker
M tests/docker/dockerfiles/ubuntu2004.docker
M tests/unit/test-bdrv-drain.c
M tests/unit/test-block-iothread.c
Log Message:
-----------
Merge remote-tracking branch 'remotes/ericb/tags/pull-nbd-2021-09-27' into
staging
nbd patches for 2021-09-27
- Richard W.M. Jones: Add --selinux-label option to qemu-nbd
- Vladimir Sementsov-Ogievskiy: Rework coroutines of qemu NBD client
to improve reconnect support
- Eric Blake: Relax server in regards to NBD_OPT_LIST_META_CONTEXT
- Vladimir Sementsov-Ogievskiy: Plumb up 64-bit bulk-zeroing support
in block layer, in preparation for future NBD spec extensions
- Nir Soffer: Default to writeback cache in qemu-nbd
# gpg: Signature made Mon 27 Sep 2021 22:54:26 BST
# gpg: using RSA key 71C2CC22B1C4602927D2F3AAA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>" [full]
# gpg: aka "Eric Blake (Free Software Programmer)
<ebb9@byu.net>" [full]
# gpg: aka "[jpeg image of size 6874]" [full]
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2021-09-27:
nbd/server: Add --selinux-label option
block/nbd: check that received handle is valid
block/nbd: drop connection_co
block/nbd: refactor nbd_recv_coroutines_wake_all()
block/nbd: move nbd_recv_coroutines_wake_all() up
block/nbd: nbd_channel_error() shutdown channel unconditionally
nbd/client-connection: nbd_co_establish_connection(): fix non set errp
nbd/server: Allow LIST_META_CONTEXT without STRUCTURED_REPLY
block/io: allow 64bit discard requests
block: use int64_t instead of int in driver discard handlers
block: make BlockLimits::max_pdiscard 64bit
block/io: allow 64bit write-zeroes requests
block: use int64_t instead of int in driver write_zeroes handlers
block: make BlockLimits::max_pwrite_zeroes 64bit
block: use int64_t instead of uint64_t in copy_range driver handlers
block: use int64_t instead of uint64_t in driver write handlers
block: use int64_t instead of uint64_t in driver read handlers
qcow2: check request on vmstate save/load path
block/io: bring request check to bdrv_co_(read,write)v_vmstate
qemu-nbd: Change default cache mode to writeback
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Compare: https://github.com/qemu/qemu/compare/6b54a31bf7b4...e75f2b239da9