From: Stefan Hajnoczi
Subject: [PATCH 2/3] memory: add memory_region_is_mapped_shared()
Date: Mon, 22 Feb 2021 16:10:16 +0000

Add a function to query whether a memory region is mmap(MAP_SHARED).
This will be used to check that vhost-user memory regions can be shared
with the device backend process in the next patch.

An inline function in "exec/memory.h" would have been nice, but RAMBlock
fields are only accessible from memory.c (see "exec/ramblock.h").
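
As a rough sketch (not the actual check added in the next patch), a
vhost-user style caller might use the new helper along these lines; the
helper name check_mem_section_shareable and the error wording are made up
purely for illustration:

/*
 * Illustrative only: a made-up helper showing how a vhost-user style
 * caller could reject memory that is not mmap(MAP_SHARED) before
 * handing it to the backend process.
 */
#include "qemu/osdep.h"
#include "exec/memory.h"
#include "qapi/error.h"

static bool check_mem_section_shareable(MemoryRegionSection *section,
                                        Error **errp)
{
    MemoryRegion *mr = section->mr;

    /* Only MAP_SHARED mappings are visible to the backend process */
    if (!memory_region_is_mapped_shared(mr)) {
        error_setg(errp, "memory region '%s' is not mmap(MAP_SHARED); "
                   "use a share=on memory backend",
                   memory_region_name(mr));
        return false;
    }
    return true;
}

A caller would run such a check on each section it intends to export,
failing device setup if any region is backed by private memory.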

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/exec/memory.h | 11 +++++++++++
 softmmu/memory.c      |  6 ++++++
 2 files changed, 17 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index c6fb714e49..7b7dbe9fd0 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -2457,6 +2457,17 @@ static inline bool memory_access_is_direct(MemoryRegion *mr, bool is_write)
     }
 }
 
+/**
+ * memory_region_is_mapped_shared: check whether a memory region is
+ * mmap(MAP_SHARED)
+ *
+ * Returns %true if a memory region is mmap(MAP_SHARED). This is always false
+ * on memory regions that do not support memory_region_get_ram_ptr().
+ *
+ * @mr: the memory region being queried
+ */
+bool memory_region_is_mapped_shared(MemoryRegion *mr);
+
 /**
  * address_space_read: read from an address space.
  *
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 874a8fccde..e6631e5d4c 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -1809,6 +1809,12 @@ bool memory_region_is_ram_device(MemoryRegion *mr)
     return mr->ram_device;
 }
 
+bool memory_region_is_mapped_shared(MemoryRegion *mr)
+{
+    return memory_access_is_direct(mr, false) &&
+           (mr->ram_block->flags & RAM_SHARED);
+}
+
 uint8_t memory_region_get_dirty_log_mask(MemoryRegion *mr)
 {
     uint8_t mask = mr->dirty_log_mask;
-- 
2.29.2

