[PATCH V2] docs: vhost-user: Add Xen specific memory mapping support
From: Viresh Kumar
Subject: [PATCH V2] docs: vhost-user: Add Xen specific memory mapping support
Date: Mon, 6 Mar 2023 16:40:24 +0530
The current model of memory mapping at the back-end works fine where a
standard call to mmap() (for the respective file descriptor) is enough
for the back-end to start accessing the guest memory.
There are more complex cases, though, where the back-end needs additional
information and a plain mmap() isn't enough. For example Xen, a type-1
hypervisor, currently supports memory mapping via two different methods:
foreign mapping (via /dev/privcmd) and grant mapping (via /dev/gntdev). In
both cases the back-end needs to call both mmap() and ioctl(), and must
pass extra information via the ioctl(), such as the Xen domain-id of the
guest whose memory is being mapped.
Add a new protocol feature, 'VHOST_USER_PROTOCOL_F_XEN_MMAP', which lets
the back-end know about the additional memory mapping requirements.
When this feature is negotiated, the front-end can send the
'VHOST_USER_SET_XEN_MMAP' message type to provide the additional
information to the back-end.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
V1->V2:
- Make the custom mmap feature Xen specific, instead of being generic.
- Clearly define which memory regions are impacted by this change.
- Allow VHOST_USER_SET_XEN_MMAP to be called multiple times.
- Additional Bit(2) property in flags.
docs/interop/vhost-user.rst | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 3f18ab424eb0..8be5f5eae941 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -258,6 +258,24 @@ Inflight description
:queue size: a 16-bit size of virtqueues
+Xen mmap description
+^^^^^^^^^^^^^^^^^^^^
+
++-------+-------+
+| flags | domid |
++-------+-------+
+
+:flags: 64-bit bit field
+
+- Bit 0 is set for Xen foreign memory mapping.
+- Bit 1 is set for Xen grant memory mapping.
+- Bit 2 is set if the back-end can directly map additional memory (like
+  descriptor buffers or indirect descriptors, which aren't part of already
+  shared memory regions) without the front-end sending an additional
+  memory region first.
+
+:domid: a 64-bit Xen hypervisor specific domain id.
+
C structure
-----------
@@ -867,6 +885,7 @@ Protocol features
#define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
#define VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS 15
#define VHOST_USER_PROTOCOL_F_STATUS 16
+ #define VHOST_USER_PROTOCOL_F_XEN_MMAP 17
Front-end message types
-----------------------
@@ -1422,6 +1441,23 @@ Front-end message types
query the back-end for its device status as defined in the Virtio
specification.
+``VHOST_USER_SET_XEN_MMAP``
+ :id: 41
+ :equivalent ioctl: N/A
+ :request payload: Xen mmap description
+ :reply payload: N/A
+
+ When the ``VHOST_USER_PROTOCOL_F_XEN_MMAP`` protocol feature has been
+ successfully negotiated, this message is submitted by the front-end to set the
+ Xen hypervisor specific memory mapping configurations at the back-end. These
+ configurations should be used to mmap memory regions, virtqueues, descriptors
+ and descriptor buffers. The front-end must send this message before any
+ memory-regions are sent to the back-end via ``VHOST_USER_SET_MEM_TABLE`` or
+ ``VHOST_USER_ADD_MEM_REG`` message types. The front-end can send this message
+ multiple times, if different mmap configurations are required for different
+ memory regions, where the most recent ``VHOST_USER_SET_XEN_MMAP`` must be used
+ by the back-end to map any newly shared memory regions.
+
Back-end message types
----------------------
--
2.31.1.272.g89b43f80a514