From: Vladimir Sementsov-Ogievskiy
Subject: [PATCH 2/5] parallels.txt: fix bitmap L1 table description
Date: Tue, 16 Feb 2021 19:45:24 +0300

Actually, the offset in an L1 table entry is in 512-byte sectors, not in
bytes. Fix the spec.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 docs/interop/parallels.txt | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/docs/interop/parallels.txt b/docs/interop/parallels.txt
index f15bf35bd1..ebbdd1b25b 100644
--- a/docs/interop/parallels.txt
+++ b/docs/interop/parallels.txt
@@ -209,15 +209,14 @@ of its data area are:
               The number of entries in the L1 table of the bitmap.
 
   variable:   l1_table (8 * l1_size bytes)
-              L1 offset table (in bytes)
 
 A dirty bitmap is stored using a one-level structure for the mapping to host
-clusters - an L1 table.
+clusters - an L1 table. Each L1 table entry is a 64-bit integer, described
+below:
 
-Given an offset in bytes into the bitmap data, the offset in bytes into the
-image file can be obtained as follows:
+Given an offset in bytes into the bitmap data, the corresponding L1 entry is
 
-    offset = l1_table[offset / cluster_size] + (offset % cluster_size)
+    l1_table[offset / cluster_size]
 
 If an L1 table entry is 0, the corresponding cluster of the bitmap is assumed
 to be zero.
@@ -225,4 +224,8 @@ to be zero.
 If an L1 table entry is 1, the corresponding cluster of the bitmap is assumed
 to have all bits set.
 
-If an L1 table entry is not 0 or 1, it allocates a cluster from the data area.
+If an L1 table entry is not 0 or 1, it contains the corresponding cluster
+offset (in 512-byte sectors). Given an offset in bytes into the bitmap data,
+the offset in bytes into the image file can be obtained as follows:
+
+    offset = l1_table[offset / cluster_size] * 512 + (offset % cluster_size)
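
For illustration, the corrected mapping as a minimal C sketch; the helper
name bitmap_data_off_to_file_off() and the example values are hypothetical,
not taken from the QEMU source:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE 512

    /* Special L1 entry values defined by the spec. */
    #define L1_ENTRY_ALL_ZEROES 0
    #define L1_ENTRY_ALL_ONES   1

    /*
     * Map an offset (in bytes) into the bitmap data to an offset (in
     * bytes) into the image file. Returns -1 for the special entries 0
     * and 1, which describe clusters with no data area allocation.
     */
    static int64_t bitmap_data_off_to_file_off(const uint64_t *l1_table,
                                               uint64_t cluster_size,
                                               uint64_t offset)
    {
        uint64_t entry = l1_table[offset / cluster_size];

        if (entry == L1_ENTRY_ALL_ZEROES || entry == L1_ENTRY_ALL_ONES) {
            return -1; /* all-zero or all-one cluster, nothing stored */
        }

        /* The entry is a cluster offset in 512-byte sectors, not bytes. */
        return entry * SECTOR_SIZE + offset % cluster_size;
    }

    int main(void)
    {
        /* Hypothetical table: cluster 0 is all-zero, cluster 1 is stored
         * starting at sector 100 of the image file. */
        uint64_t l1_table[] = { 0, 100 };
        uint64_t cluster_size = 4096; /* example cluster size in bytes */

        /* 100 * 512 + (5000 % 4096) = 51200 + 904 = 52104 */
        printf("%" PRId64 "\n",
               bitmap_data_off_to_file_off(l1_table, cluster_size, 5000));
        return 0;
    }
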
-- 
2.29.2