From: Daniel P. Berrangé
Subject: Re: [RFC PATCH v3 17/36] pflash_cfi01/tdx: Introduce ram_mode of pflash for TDVF
Date: Tue, 22 Mar 2022 09:27:32 +0000
User-agent: Mutt/2.1.5 (2021-12-30)

On Mon, Mar 21, 2022 at 04:54:51PM +0800, Xiaoyao Li wrote:
> On 3/18/2022 10:07 PM, Philippe Mathieu-Daudé wrote:
> > Hi,
> > 
> > On 17/3/22 14:58, Xiaoyao Li wrote:
> > > TDX VM needs to boot with Trust Domain Virtual Firmware (TDVF). Unlike
> > > OVMF, which is mapped as a ROM device, TDVF needs to be mapped as
> > > private memory. This is because the TDX architecture doesn't provide
> > > read-only capability for the VMM, and it doesn't support instruction
> > > emulation since guest memory and registers are not accessible to the VMM.
> > > 
> > > On the other hand, OVMF can work as TDVF, and it is usually configured
> > > as a pflash device in QEMU. To keep the same usage (QEMU parameters),
> > > introduce ram_mode to pflash for TDVF. When creating a TDX VM,
> > > ram_mode is enabled automatically, mapping the firmware as RAM.
> > > 
> > > Note, this implies two things:
> > >   1. TDVF (OVMF) is not read-only (write-protected).
> > > 
> > >   2. It doesn't support non-volatile UEFI variables the way pflash
> > >      does: changes to non-volatile UEFI variables won't be synced
> > >      back to the backing vars.fd file.
> > > 
> > > Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
> > > ---
> > >   hw/block/pflash_cfi01.c | 25 ++++++++++++++++++-------
> > >   hw/i386/pc_sysfw.c      | 14 +++++++++++---
> > >   2 files changed, 29 insertions(+), 10 deletions(-)
> > 
> > If you don't need a pflash device, don't use it: simply map your nvram
> > region as ram in your machine. No need to clutter the pflash model like
> > that.
> 
> I know it's dirty to hack the pflash device. The purpose is to keep the
> user interface unchanged so that people can still use
> 
>       -drive if=pflash,format=raw,unit=0,file=/path/to/OVMF_CODE.fd
>       -drive if=pflash,format=raw,unit=1,file=/path/to/OVMF_VARS.fd

Note that in the default pflash config, libvirt will set the 'readonly=on'
flag for OVMF_CODE.fd, i.e. it will use

    -drive if=pflash,format=raw,unit=0,file=/path/to/OVMF_CODE.fd,readonly=on
    -drive if=pflash,format=raw,unit=1,file=/path/to/OVMF_VARS.fd

IOW, we're requiring that OVMF_CODE.fd is ROM, while OVMF_VARS.fd is NVRAM.

IIUC, this patch here is changing the semantics of these args:

   - OVMF_CODE.fd is mapped as RAM, but IIUC, QEMU would still be
     prevented from writing to it due to readonly=on in the
     block layer

   - OVMF_VARS.fd is mapped as RAM, but IIUC you're saying that,
     nonetheless, any writes don't propagate back into the file?



Dealing with OVMF_VARS.fd first, I really wonder why you want to have
an OVMF_VARS.fd file at all if writes aren't propagated into it?
It has no reason to exist if you're not writing to it.

IMHO the AmdSev build of OVMF gets this right by entirely dropping
the split between OVMF_CODE.fd and OVMF_VARS.fd, and just having a
single OVMF.fd file that is exposed read-only to the guest.
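
For illustration, such a stateless single-file setup would look
something like this (the path is a placeholder; readonly=on makes the
block layer reject any writes to the backing file):

    -drive if=pflash,format=raw,unit=0,readonly=on,file=/path/to/OVMF.fd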

This is further represented in $QEMU.git/docs/interop/firmware.json
by marking the firmware as 'stateless', which apps like libvirt will
use to figure out what QEMU command line to pick.
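
As a rough sketch, a descriptor for such a stateless build could be
written along these lines (the filename, feature list and exact field
values are illustrative assumptions, following the descriptor format
defined in firmware.json):

    {
        "description": "OVMF, stateless AmdSev build",
        "interface-types": [ "uefi" ],
        "mapping": {
            "device": "flash",
            "mode": "stateless",
            "executable": {
                "filename": "/path/to/OVMF.fd",
                "format": "raw"
            }
        },
        "targets": [
            { "architecture": "x86_64", "machines": [ "pc-q35-*" ] }
        ],
        "features": [ "amd-sev" ],
        "tags": []
    }

An app consuming the descriptor would then know to configure a single
read-only pflash drive, with no NVRAM store at all.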

IOW, if you don't want OVMF_VARS.fd to be written to, then follow
what AmdSev has done, and get rid of the split files.


As for exposing OVMF_CODE.fd as RAM instead of ROM: that feels a
little odd, but as long as its backing file on disk honours the
readonly=on request to -drive, that's not terrible IMHO.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



