
Re: [PATCH v13 0/7] s390x: CPU Topology


From: Janis Schoetterl-Glausch
Subject: Re: [PATCH v13 0/7] s390x: CPU Topology
Date: Tue, 13 Dec 2022 16:31:35 +0100
User-agent: Evolution 3.46.1 (3.46.1-1.fc37)

On Tue, 2022-12-13 at 16:12 +0100, Christian Borntraeger wrote:
> 
> Am 13.12.22 um 14:50 schrieb Christian Borntraeger:
> > 
> > 
> > Am 12.12.22 um 11:01 schrieb Pierre Morel:
> > > 
> > > 
> > > On 12/9/22 15:45, Cédric Le Goater wrote:
> > > > On 12/8/22 10:44, Pierre Morel wrote:
> > > > > Hi,
> > > > > 
> > > > > Implementation discussions
> > > > > ==========================
> > > > > 
> > > > > CPU models
> > > > > ----------
> > > > > 
> > > > > Since S390_FEAT_CONFIGURATION_TOPOLOGY is already present in the CPU
> > > > > model of older QEMU versions, we could not activate it from KVM in
> > > > > the usual way but needed a dedicated KVM capability:
> > > > > KVM_CAP_S390_CPU_TOPOLOGY.
> > > > > Checking and enabling this capability is what turns on
> > > > > S390_FEAT_CONFIGURATION_TOPOLOGY.
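
For illustration, a minimal sketch of how such a capability is typically
probed and enabled with QEMU's generic KVM wrappers; the function name and
the feature bitmap below are assumptions for the sketch, not the exact hunk
from this series:

    /* Probe the VM capability, enable it, and only then advertise the
     * CPU-model feature (hedged sketch, QEMU-internal context assumed). */
    static void enable_topology_sketch(KVMState *s, unsigned long *features)
    {
        if (kvm_vm_check_extension(s, KVM_CAP_S390_CPU_TOPOLOGY) &&
            kvm_vm_enable_cap(s, KVM_CAP_S390_CPU_TOPOLOGY, 0) == 0) {
            set_bit(S390_FEAT_CONFIGURATION_TOPOLOGY, features);
        }
    }
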
> > > > > 
> > > > > Migration
> > > > > ---------
> > > > > 
> > > > > Once S390_FEAT_CONFIGURATION_TOPOLOGY is enabled on the source host,
> > > > > STFL(11) is presented to the guest.
> > > > > Since the feature is already in the CPU model of older QEMU,
> > > > > a migration from a new QEMU with the topology enabled to an old QEMU
> > > > > would leave STFL(11) set, so the guest would take an illegal-operation
> > > > > exception as soon as it uses the PTF instruction.
> > > > > 
> > > > > A VMState that keeps track of S390_FEAT_CONFIGURATION_TOPOLOGY
> > > > > allows forbidding the migration in such a case.
> > > > > 
> > > > > Note that the VMState will be used to hold information on the
> > > > > topology once we implement topology change for a running guest.
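
As an illustration of that mechanism, a rough sketch of a VMState section
gated by a .needed callback; the section name and the empty payload are
assumptions, not the series' actual code. A destination QEMU that does not
know the section will refuse the incoming stream instead of running a guest
with STFL(11) set but no PTF handling:

    static bool topology_needed(void *opaque)
    {
        /* Only emit the section when the guest actually has STFL(11). */
        return s390_has_feat(S390_FEAT_CONFIGURATION_TOPOLOGY);
    }

    static const VMStateDescription vmstate_topology_sketch = {
        .name = "s390-topology",        /* illustrative name */
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = topology_needed,
        .fields = (VMStateField[]) {
            /* empty for now; a later series could hold real topology state */
            VMSTATE_END_OF_LIST()
        }
    };
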
> > > > > 
> > > > > Topology
> > > > > --------
> > > > > 
> > > > > Until we introduce books and drawers, polarization and dedication,
> > > > > the topology is kept very simple and is specified solely by
> > > > > the core_id of the vCPU, which is also the vCPU address.
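
As a worked example of that rule, assuming the cores value from -smp gives
the cores per socket (an assumption of this sketch, not a statement of the
series' exact derivation):

    /* Illustrative only: with -smp ...,cores=4, the vCPU with core_id 5
     * has vCPU address 5 and would land in guest socket 5 / 4 = 1. */
    static int socket_of_core_sketch(int core_id, int cores_per_socket)
    {
        return core_id / cores_per_socket;
    }
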
> > > > > 
> > > > > Testing
> > > > > =======
> > > > > 
> > > > > To use the QEMU patches, you will need Linux V6-rc1 or newer,
> > > > > or use the following Linux mainline patches:
> > > > > 
> > > > > f5ecfee94493 2022-07-20 KVM: s390: resetting the Topology-Change-Report
> > > > > 24fe0195bc19 2022-07-20 KVM: s390: guest support for topology function
> > > > > 0130337ec45b 2022-07-20 KVM: s390: Cleanup ipte lock access and SIIF fac..
> > > > > 
> > > > > Currently this code is for KVM only; I do not know whether it would
> > > > > be interesting to provide a TCG patch. If so, it will be done in
> > > > > another series.
> > > > > 
> > > > > Documentation
> > > > > =============
> > > > > 
> > > > > To have a better understanding of the S390x CPU Topology and its
> > > > > implementation in QEMU you can have a look at the documentation in the
> > > > > last patch of this series.
> > > > > 
> > > > > The admin will want to match the host and the guest topology, taking
> > > > > into account that the guest does not recognize multithreading.
> > > > > Consequently, two vCPUs assigned to threads of the same real CPU should
> > > > > preferably be assigned to the same socket of the guest machine.
> > > > > 
> > > > > Future developments
> > > > > ===================
> > > > > 
> > > > > Two series are actively prepared:
> > > > > - Adding drawers, book, polarization and dedication to the vCPU.
> > > > > - changing the topology with a running guest
> > > > 
> > > > Since we have time before the next QEMU 8.0 release, could you please
> > > > send the whole patchset? Having the full picture would also help in
> > > > taking decisions for downstream.
> > > > 
> > > > I am still uncertain about the usefulness of the S390Topology object
> > > > because, as of now, the state can be computed on the fly, the reset can
> > > > be handled at the machine level directly under s390_machine_reset(),
> > > > and so could migration if the machine had a vmstate (not the case today
> > > > but quite easy to add). So before merging anything that could be
> > > > difficult to maintain and/or backport, I would prefer to see it all!
> > > > 
> > > 
> > > The goal of dedicating an object to the topology was to ease maintenance
> > > and portability by using the QEMU object framework.
> > > 
> > > If, on the contrary, it is a problem for maintenance or backports, we
> > > certainly should not use it.
> > > 
> > > Any other opinion?
> > 
> > I agree with Cedric. There is no point in a topology object.
> > The state is calculated on the fly depending on the command line. This
> > would change if we ever implement the PTF horizontal/vertical state, but
> > that can then be another CPU state.
> > 
> > So I think we could go forward with this as a simple patch set that allows
> > setting a static topology. This makes sense on its own, e.g. if you plan
> > to pin your vCPUs to given host CPUs.
> > 
> > Dynamic changes do come with CPU hotplug; I am not sure what libvirt does
> > with new CPUs in that case during migration. I assume those are
> > re-generated on the target with whatever topology was created on the
> > source. So I guess this will just work, but it would be good if we could
> > test that.
> > 
> > A more fine-grained topology (drawer, book) could be added later or
> > upfront. It does require common code and libvirt enhancements, though.
> 
> Now I have discussed this with Pierre, and there IS a state that we want to
> migrate. The topology-change state is a guest-wide bit that might still be
> set at migration time, in the following sequence:
> - the topology is changed -> the bit is set
> - the guest has not yet started
> - migration happens
> 
> 2 options:
> 1. a vmstate with that bit, and migrate the state
> 2. always set the topology change bit after migration

2. is the default behavior if you do nothing: vCPU creation on the target sets
the change bit to 1.
So 1. is only needed to prevent a spurious topology-change indication.
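
To make option 1 concrete, a rough sketch of a vmstate carrying that single
guest-wide bit; the struct, field and section names are illustrative, not
the API of this series. Option 2 needs no state at all, at the cost of one
spurious report per migration:

    typedef struct TopologyChangeState {
        uint8_t mtcr;    /* guest-wide topology-change-report bit */
    } TopologyChangeState;

    static const VMStateDescription vmstate_topology_change_sketch = {
        .name = "s390-topology-change",   /* illustrative name */
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT8(mtcr, TopologyChangeState),
            VMSTATE_END_OF_LIST()
        }
    };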



