qemu-block

Re: [PULL 03/38] pflash: Only read non-zero parts of backend image


From: Cédric Le Goater
Subject: Re: [PULL 03/38] pflash: Only read non-zero parts of backend image
Date: Tue, 7 Mar 2023 15:15:30 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.8.0

On 3/7/23 15:00, Kevin Wolf wrote:
On 03.03.2023 at 23:51, Maciej S. Szmigiero wrote:
On 8.02.2023 12:19, Cédric Le Goater wrote:
On 2/7/23 13:48, Kevin Wolf wrote:
On 07.02.2023 at 10:19, Cédric Le Goater wrote:
On 2/7/23 09:38, Kevin Wolf wrote:
On 06.02.2023 at 16:54, Cédric Le Goater wrote:
On 1/20/23 13:25, Kevin Wolf wrote:
From: Xiang Zheng <zhengxiang9@huawei.com>

Currently we fill the VIRT_FLASH memory space with two 64MB NOR images
when using persistent UEFI variables on the virt board. In practice only
a very small (non-zero) part of that memory is used, while the rest, a
significantly larger (zero) part, is wasted.

So this patch checks the block status and only writes the non-zero parts
into memory. This requires pflash devices to use sparse files as
backends.

Signed-off-by: Xiang Zheng <zhengxiang9@huawei.com>

[ kraxel: rebased to latest master ]

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-Id: <20221220084246.1984871-1-kraxel@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
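
For context, the helper added by this patch boils down to a loop of the
following shape. This is a simplified sketch built around the QEMU
block-layer calls visible in the diff quoted later in this thread
(bdrv_block_status_above() and bdrv_pread()), not the verbatim merged
code, so details may differ:

static int blk_pread_nonzeroes(BlockBackend *blk, hwaddr size, void *buf)
{
    int ret;
    int64_t bytes;
    BlockDriverState *bs = blk_bs(blk);

    for (hwaddr offset = 0; offset < size; offset += bytes) {
        bytes = MIN(size - offset, BDRV_REQUEST_MAX_BYTES);
        /* Ask the block layer how much of [offset, offset + bytes) is
         * known to be zero; 'bytes' is updated to the length for which
         * the returned status holds. */
        ret = bdrv_block_status_above(bs, NULL, offset, bytes, &bytes,
                                      NULL, NULL);
        if (ret < 0) {
            return ret;
        }
        if (!(ret & BDRV_BLOCK_ZERO)) {
            /* Only allocated, non-zero ranges are read into the memory
             * region. Note the bs->file argument -- this is the line
             * discussed further down in this thread. */
            ret = bdrv_pread(bs->file, offset, bytes,
                             (uint8_t *) buf + offset, 0);
            if (ret < 0) {
                return ret;
            }
        }
    }
    return 0;
}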

This newly merged patch introduces a "regression" when booting an Aspeed
machine. The following extra m25p80 patch (not yet merged) is required
for the issue to show up:

     https://lore.kernel.org/qemu-devel/20221115151000.2080833-1-clg@kaod.org/

U-Boot fails to find the filesystem in that case.

It can be easily reproduced with the witherspoon-bmc machine and seems
to be related to the use of a UBI filesystem. Other Aspeed machines not
using UBI are not impacted.

Here is a tentative fix. I don't know the block layer well enough to
explain what is happening :/

I was puzzled for a moment, but...

@@ -39,7 +39,7 @@ static int blk_pread_nonzeroes(BlockBack
                return ret;
            }
            if (!(ret & BDRV_BLOCK_ZERO)) {
-            ret = bdrv_pread(bs->file, offset, bytes,

'bs->file' rather than 'bs' really looks wrong. I think replacing that
would already fix the bug you're seeing.
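
For illustration, one way the fix could look at the BlockBackend level is
to read through blk_pread(), which returns the guest-visible data of the
whole chain; this is only a sketch under that assumption, not necessarily
the patch that gets merged, and it assumes blk, offset, bytes and buf are
still in scope in the loop:

        if (!(ret & BDRV_BLOCK_ZERO)) {
            /* Read through the BlockBackend: this returns the
             * guest-visible data of the whole chain (temporary overlay
             * plus backing file), instead of the raw bytes of the top
             * node's protocol child. */
            ret = blk_pread(blk, offset, bytes,
                            (uint8_t *) buf + offset, 0);
            if (ret < 0) {
                return ret;
            }
        }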

Just to be sure, how did you configure the block backend? bs->file would
happen to work more or less with raw over file-posix (which is probably
what Gerd tested), but I think it breaks with anything else.

The command is:

    $ qemu-system-arm -M witherspoon-bmc -net user \
     -drive file=/path/to/file.mtd,format=raw,if=mtd \
     -nographic -serial mon:stdio -snapshot

If I remove '-snapshot', all works fine.

Ok, that makes sense then. -snapshot creates a temporary qcow2 overlay,
and then what your guest sees with bs->file is not the virtual disk
content of the qcow2 image, but the qcow2 file itself.
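
Roughly, the node graph for the mtd drive then looks like this (a sketch;
the overlay is a temporary file created by QEMU, and the node labels are
illustrative):

    blk (BlockBackend attached to the m25p80/pflash device)
     `-- bs            temporary qcow2 overlay created by -snapshot
          |-- file     file-posix node for the overlay file itself
          `-- backing  the original raw image (file.mtd)

Reading bs->file therefore returns the raw bytes of the qcow2 container
(header plus mostly unallocated clusters), whereas reading through bs or
through the BlockBackend returns the guest-visible disk contents.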

Yes. Same symptom with pflash devices, with both TCG and KVM: the guest
hangs with -snapshot.

C.

    qemu-system-aarch64 -M virt -smp 2 -cpu max -accel tcg,thread=multi \
     -nographic -m 2G \
     -drive if=pflash,format=raw,file=/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw,readonly=on \
     -drive if=pflash,format=raw,file=rhel9-varstore.img \
     -device virtio-net,netdev=net0,bus=pcie.0,addr=0x3 -netdev user,id=net0 \
     -drive file=rhel9-arm64.qcow2,if=none,id=disk,format=qcow2,cache=none \
     -device virtio-blk-device,drive=disk -serial mon:stdio -snapshot




+1 here for QEMU + KVM/x86: the OVMF CODE file fails to load (it is all
zeroes) with either the "-snapshot" QEMU command-line option or even with
just the "snapshot=on" setting enabled on pflash0.

Reverting this patch seems to fix the issue.

Hm, so we know the fix, but nobody has submitted it as an actual patch?

Sorry. I thought the solution was more complex and got pulled into other
tasks...

I'll send one...

Thanks,

C.



