
Re: [PATCH v6 14/15] qmp: Include "reserve" property of memory backends


From: Markus Armbruster
Subject: Re: [PATCH v6 14/15] qmp: Include "reserve" property of memory backends
Date: Fri, 23 Apr 2021 14:13:25 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

David Hildenbrand <david@redhat.com> writes:

> On 23.04.21 13:21, Markus Armbruster wrote:
>> David Hildenbrand <david@redhat.com> writes:
>> 
>>> On 23.04.21 13:00, Markus Armbruster wrote:
>>>> David Hildenbrand <david@redhat.com> writes:
>>>>
>>>>> Let's include the new property.
>>>>>
>>>>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>> Cc: Eric Blake <eblake@redhat.com>
>>>>> Cc: Markus Armbruster <armbru@redhat.com>
>>>>> Cc: Igor Mammedov <imammedo@redhat.com>
>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>>> ---
>>>>>    hw/core/machine-qmp-cmds.c | 1 +
>>>>>    qapi/machine.json          | 4 ++++
>>>>>    2 files changed, 5 insertions(+)
>>>>>
>>>>> diff --git a/hw/core/machine-qmp-cmds.c b/hw/core/machine-qmp-cmds.c
>>>>> index d41db5b93b..2d135ecdd0 100644
>>>>> --- a/hw/core/machine-qmp-cmds.c
>>>>> +++ b/hw/core/machine-qmp-cmds.c
>>>>> @@ -175,6 +175,7 @@ static int query_memdev(Object *obj, void *opaque)
>>>>>            m->dump = object_property_get_bool(obj, "dump", &error_abort);
>>>>>            m->prealloc = object_property_get_bool(obj, "prealloc", 
>>>>> &error_abort);
>>>>>            m->share = object_property_get_bool(obj, "share", 
>>>>> &error_abort);
>>>>> +        m->reserve = object_property_get_bool(obj, "reserve", 
>>>>> &error_abort);
>>>>>            m->policy = object_property_get_enum(obj, "policy", 
>>>>> "HostMemPolicy",
>>>>>                                                 &error_abort);
>>>>>            host_nodes = object_property_get_qobject(obj,
>>>>> diff --git a/qapi/machine.json b/qapi/machine.json
>>>>> index 32650bfe9e..5932139d20 100644
>>>>> --- a/qapi/machine.json
>>>>> +++ b/qapi/machine.json
>>>>> @@ -798,6 +798,9 @@
>>>>>    #
>>>>>    # @share: whether memory is private to QEMU or shared (since 6.1)
>>>>>    #
>>>>> +# @reserve: whether swap space (or huge pages) was reserved if applicable
>>>>> +#           (since 6.1)
>>>>> +#
>>>>>    # @host-nodes: host nodes for its memory policy
>>>>>    #
>>>>>    # @policy: memory policy of memory backend
>>>>> @@ -812,6 +815,7 @@
>>>>>        'dump':       'bool',
>>>>>        'prealloc':   'bool',
>>>>>        'share':      'bool',
>>>>> +    'reserve':    'bool',
>>>>>        'host-nodes': ['uint16'],
>>>>>        'policy':     'HostMemPolicy' }}
>>>>
>>>> Double-checking: true means definitely reserved, and false means
>>>> definitely not reserved.  Correct?
>>>
>>> True means "reserved if applicable" which means "not reserved if not
>>> applicable". False means "definitely not reserved".
>>>
>>> (Any recommendations on how to rephrase this are appreciated; I tried my
>>> best -- this interface makes it especially hard; it's easier to phrase
>>> for the property itself.)
>> 
>> When is it "applicable"?
>
> When the OS supports it for the memory type and it hasn't been disabled.
>
> The Linux handling is described in
>   [PATCH v6 09/15] util/mmap-alloc: Support RAM_NORESERVE via
>   MAP_NORESERVE under Linux
> and in
>   https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
>
> Summary *without* MAP_NORESERVE:
>
> a) !Hugetlbfs with memory overcommit disabled
>    (/proc/sys/vm/overcommit_memory == 2): never
>
> b) !Hugetlbfs with memory overcommit enabled:
>
>   1) Shared mappings of files: never
>
>   2) Private mappings of files: only when writable (for us, always)
>
>   3) Shared anonymous memory: always
>
>   4) Private anonymous memory: only when writable (for us, always)
>
> c) Hugetlbfs: always
>
>
> Under Windows: always. On POSIX systems other than Linux: don't know.
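
As a minimal sketch of what this boils down to on Linux (illustrative
only, not QEMU's actual allocation path; the helper name and sizes are
made up), skipping the reservation means passing MAP_NORESERVE to mmap():

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /*
     * Illustrative only: map anonymous RAM without reserving swap space.
     * With MAP_NORESERVE, Linux skips commit accounting for the mapping
     * (subject to /proc/sys/vm/overcommit_memory), so a write can fail
     * at fault time (OOM, or SIGBUS on hugetlbfs) instead of mmap()
     * itself failing up front.
     */
    static void *alloc_unreserved(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    int main(void)
    {
        size_t size = 1024 * 1024 * 1024;   /* 1 GiB, uncommitted */
        void *p = alloc_unreserved(size);

        if (!p) {
            perror("mmap");
            return 1;
        }
        memset(p, 0, 4096);   /* only touched pages get committed */
        munmap(p, size);
        return 0;
    }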

Would working some of this into the doc comment help users of the
interface?  Up to you.
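
For the QMP interface itself, a query-memdev exchange with this patch
applied could look like the following (the id and the values are made
up for illustration; the fields match the Memdev struct in the patch):

    -> { "execute": "query-memdev" }
    <- { "return": [
           {
             "id": "mem0",
             "size": 536870912,
             "merge": false,
             "dump": true,
             "prealloc": false,
             "share": false,
             "reserve": true,
             "host-nodes": [0, 1],
             "policy": "bind"
           }
         ]
       }

Per the semantics above, "reserve": true means reserved if the backend's
memory type supports reservation at all, while "reserve": false means
reservation was explicitly disabled.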



