From: Salil Mehta
Subject: RE: [PATCH V2 08/10] physmem: Add helper function to destroy CPU AddressSpace
Date: Tue, 3 Oct 2023 11:54:26 +0000

Hi Gavin,

> From: Gavin Shan <gshan@redhat.com>
> Sent: Tuesday, October 3, 2023 2:37 AM
> To: Salil Mehta <salil.mehta@huawei.com>; qemu-devel@nongnu.org; qemu-
> arm@nongnu.org
> Cc: maz@kernel.org; jean-philippe@linaro.org; Jonathan Cameron
> <jonathan.cameron@huawei.com>; lpieralisi@kernel.org;
> peter.maydell@linaro.org; richard.henderson@linaro.org;
> imammedo@redhat.com; andrew.jones@linux.dev; david@redhat.com;
> philmd@linaro.org; eric.auger@redhat.com; oliver.upton@linux.dev;
> pbonzini@redhat.com; mst@redhat.com; will@kernel.org; rafael@kernel.org;
> alex.bennee@linaro.org; linux@armlinux.org.uk;
> darren@os.amperecomputing.com; ilkka@os.amperecomputing.com;
> vishnu@os.amperecomputing.com; karl.heubaum@oracle.com;
> miguel.luis@oracle.com; salil.mehta@opnsrc.net; zhukeqian
> <zhukeqian1@huawei.com>; wangxiongfeng (C) <wangxiongfeng2@huawei.com>;
> wangyanan (Y) <wangyanan55@huawei.com>; jiakernel2@gmail.com;
> maobibo@loongson.cn; lixianglai@loongson.cn; Linuxarm <linuxarm@huawei.com>
> Subject: Re: [PATCH V2 08/10] physmem: Add helper function to destroy CPU
> AddressSpace
> 
> On 9/30/23 10:19, Salil Mehta wrote:
> > Virtual CPU Hot-unplug leads to unrealization of a CPU object. This also
> > involves destruction of the CPU AddressSpace. Add common function to help
> > destroy the CPU AddressSpace.
> >
> > Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
> > ---
> >   include/exec/cpu-common.h |  8 ++++++++
> >   include/hw/core/cpu.h     |  1 +
> >   softmmu/physmem.c         | 25 +++++++++++++++++++++++++
> >   3 files changed, 34 insertions(+)
> >
> > diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
> > index 41788c0bdd..eb56a228a2 100644
> > --- a/include/exec/cpu-common.h
> > +++ b/include/exec/cpu-common.h
> > @@ -120,6 +120,14 @@ size_t qemu_ram_pagesize_largest(void);
> >    */
> >   void cpu_address_space_init(CPUState *cpu, int asidx,
> >                               const char *prefix, MemoryRegion *mr);
> > +/**
> > + * cpu_address_space_destroy:
> > + * @cpu: CPU for which address space needs to be destroyed
> > + * @asidx: integer index of this address space
> > + *
> > + * Note that with KVM only one address space is supported.
> > + */
> > +void cpu_address_space_destroy(CPUState *cpu, int asidx);
> >
> >   void cpu_physical_memory_rw(hwaddr addr, void *buf,
> >                               hwaddr len, bool is_write);
> > diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
> > index 648b5b3586..65d2ae4581 100644
> > --- a/include/hw/core/cpu.h
> > +++ b/include/hw/core/cpu.h
> > @@ -355,6 +355,7 @@ struct CPUState {
> >       QSIMPLEQ_HEAD(, qemu_work_item) work_list;
> >
> >       CPUAddressSpace *cpu_ases;
> > +    int cpu_ases_count;
> >       int num_ases;
> >       AddressSpace *as;
> >       MemoryRegion *memory;
> 
> @num_ases and @cpu_ases_count are duplicates of each other to some extent.
> The real problem is that @cpu_ases is allocated at once and we need to make
> the allocation sparse. In that way, each CPU address space is independent
> and can be destroyed independently. The sparse allocation for the CPU
> address space can be done in cpu_address_space_init() like below:


Well, I think Phil/Richard are pointing to something else here, i.e.
destroying all of the CPU's address spaces at once.

https://lore.kernel.org/qemu-devel/594b2550-9a73-684f-6e54-29401dc6cd7a@linaro.org/


I used a reference counter because I was not sure whether it is safe to assume
that all of a CPU's AddressSpaces will always be destroyed at once.

BTW, there is no functional problem with the above patch; it is just that we
would not have to maintain the reference counter.
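
For reference, the softmmu/physmem.c hunk (not quoted above) is essentially
cpu_address_space_init() in reverse. A simplified sketch of the
reference-counted variant, with the deferred (RCU) free of the AddressSpace
allocation itself elided, looks roughly like this:

/* in softmmu/physmem.c, which already defines CPUAddressSpace */
void cpu_address_space_destroy(CPUState *cpu, int asidx)
{
    CPUAddressSpace *cpuas;

    assert(cpu->cpu_ases);
    assert(asidx >= 0 && asidx < cpu->num_ases);
    /* KVM supports only one address space per vCPU */
    assert(asidx == 0 || !kvm_enabled());

    cpuas = &cpu->cpu_ases[asidx];
    if (tcg_enabled()) {
        /* stop listening for memory map changes on this AS */
        memory_listener_unregister(&cpuas->tcg_as_listener);
    }

    /* tear down the AddressSpace created in cpu_address_space_init() */
    address_space_destroy(cpuas->as);

    if (asidx == 0) {
        /* drop the convenience alias for address space 0 */
        cpu->as = NULL;
    }

    /* free the whole @cpu_ases array only when the last AS goes away */
    if (--cpu->cpu_ases_count == 0) {
        g_free(cpu->cpu_ases);
        cpu->cpu_ases = NULL;
    }
}

If destroying all of a CPU's address spaces at once at unrealize time is
acceptable to everyone, the last hunk collapses to an unconditional g_free()
and the @cpu_ases_count field can be dropped.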

You can use the command below to check that the CPU memory address spaces are
destroyed properly and re-created again across CPU hot(un)plug:

Qemu> info mtree


Thanks
Salil.



