From: Stefan Berger
Subject: Re: [PATCH 2/2] tpm_emulator: Have swtpm relock storage upon migration fall-back
Date: Fri, 26 Aug 2022 14:12:42 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.12.0



On 8/26/22 11:46, Stefan Berger wrote:
Swtpm may release the lock once the last of its state blobs has been
migrated out. In case of VM migration failure, QEMU now needs to notify
swtpm that it should take the lock again, which swtpm could otherwise
only do once it has received the first TPM command from the VM.

Only try to send the lock command if swtpm supports it. If swtpm does
not support the locking command, it will not have released the lock in
the first place (releasing it is what enables shared storage setups),
since the functionality of releasing the lock upon state blob reception
and the lock command itself were added to swtpm together.

If QEMU sends the lock command and the storage has already been locked,
no error is reported.

If swtpm does not receive the lock command (e.g. from an older version
of QEMU), it will lock the storage once the first TPM command has been
received, so sending the lock command is an optimization.

Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
---
  backends/tpm/tpm_emulator.c | 59 ++++++++++++++++++++++++++++++++++++-
  backends/tpm/trace-events   |  2 ++
  2 files changed, 60 insertions(+), 1 deletion(-)
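
For reference, a sketch of how the pieces in the diff below are meant to
interact. This is not the patch itself: the handler body, registration
point, and error handling are illustrative assumptions; only
tpm_emulator_lock_storage(), the relock_swtpm/vmstate fields, and
qemu_add_vm_change_state_handler() (from sysemu/runstate.h) come from
the patch or existing QEMU APIs.

    /*
     * Sketch only, not the committed hunk: relock swtpm's storage when
     * the VM resumes on the source after a failed outgoing migration.
     */
    static void tpm_emulator_vm_state_change(void *opaque, bool running,
                                             RunState state)
    {
        TPMEmulator *tpm_emu = opaque;

        if (!running || !tpm_emu->relock_swtpm) {
            return; /* only act when resuming after a migration fall-back */
        }

        if (tpm_emulator_lock_storage(tpm_emu) < 0) {
            error_report("tpm-emulator: could not relock storage");
        }
        tpm_emu->relock_swtpm = false;
    }

    /* registered once, e.g. when the state blobs are handed out: */
    tpm_emu->vmstate = qemu_add_vm_change_state_handler(
                           tpm_emulator_vm_state_change, tpm_emu);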

diff --git a/backends/tpm/tpm_emulator.c b/backends/tpm/tpm_emulator.c
index 87d061e9bb..debbdebd4c 100644
--- a/backends/tpm/tpm_emulator.c
+++ b/backends/tpm/tpm_emulator.c
@@ -34,6 +34,7 @@
  #include "io/channel-socket.h"
  #include "sysemu/tpm_backend.h"
  #include "sysemu/tpm_util.h"
+#include "sysemu/runstate.h"
  #include "tpm_int.h"
  #include "tpm_ioctl.h"
  #include "migration/blocker.h"
@@ -81,6 +82,9 @@ struct TPMEmulator {
     unsigned int established_flag_cached:1;
 
     TPMBlobBuffers state_blobs;
+
+    bool relock_swtpm;
+    VMChangeStateEntry *vmstate;
 };
 
 struct tpm_error {
@@ -302,6 +306,35 @@ static int tpm_emulator_stop_tpm(TPMBackend *tb)
     return 0;
 }
 
+static int tpm_emulator_lock_storage(TPMEmulator *tpm_emu)
+{
+    ptm_lockstorage pls;
+
+    if (!TPM_EMULATOR_IMPLEMENTS_ALL_CAPS(tpm_emu, PTM_CAP_LOCK_STORAGE)) {
+        trace_tpm_emulator_lock_storage_cmd_not_supt();
+        return 0;
+    }
+
+    /* give failing side 100 * 10ms time to release lock */

FYI: By inducing a migration failure in this module's post_load function on the migration destination side, I measured a worst case of 16 loop iterations (160ms, across ~20 migration attempts) before swtpm on the source side was able to grab the lock again. I am using NFS for shared storage.
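
To put those numbers in context: the lock command passes swtpm a retry
count and a per-retry timeout, per the comment above (100 * 10ms). A
sketch of how the trimmed remainder of tpm_emulator_lock_storage()
plausibly continues; field names follow swtpm's ptm_lockstorage from
tpm_ioctl.h, but the exact error path is a paraphrase, not a quote of
the hunk:

    /* sketch, not the committed hunk */
    pls.u.req.retries = cpu_to_be32(100);  /* up to 100 attempts...      */
    pls.u.req.timeoutms = cpu_to_be32(10); /* ...waiting 10ms after each */

    if (tpm_emulator_ctrlcmd(tpm_emu, CMD_LOCK_STORAGE, &pls,
                             sizeof(pls.u.req), sizeof(pls.u.resp)) < 0) {
        error_report("tpm-emulator: Could not lock storage: %s",
                     strerror(errno));
        return -1;
    }

    return 0;

So the 16 loops observed above are 16 of those up-to-100 retries at
10ms apiece, i.e. the ~160ms worst case.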



