
Re: [PATCH v4 2/2] mm: Better handling of adding new regions


From: Zhang Boyang
Subject: Re: [PATCH v4 2/2] mm: Better handling of adding new regions
Date: Mon, 7 Nov 2022 16:47:04 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.4.1

Hi,

On 2022/10/20 20:50, Daniel Kiper wrote:
On Sat, Oct 15, 2022 at 10:15:12PM +0800, Zhang Boyang wrote:
Previously, there are two problems with the handling of adding new

Please drop "Previously". It is not needed...

regions. First, it doesn't take the region management cost into account.
Adding a memory area of `x` bytes will result in a heap region with less
than `x` bytes available. Thus, the new region might not be adequate for
the current allocation request. Second, it doesn't pre-allocate a big
region for small allocations, so it might be slow for many small
allocations.

This patch introduces GRUB_MM_MAX_COST to account for the cost of region
management. The value of this constant should make
`grub_mm_init_region (addr, size + GRUB_MM_MAX_COST)` initialize a
region with at least `size` free bytes available. The size of heap growth
is now adjusted according to this value before calling
grub_mm_add_region_fn ().

This should be separate patch...

This patch also introduces GRUB_MM_HEAP_GROW to set the minimal heap
growth granularity. When a small allocation runs into heap space
exhaustion, the size of heap growth is now expanded to this value, so a
bigger region is added. Thus, subsequent allocations can use the
remaining space of that new region.

... and this should be the second one, to be precise the third patch in
the series...


Will be fixed in next version.
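
To make the combined effect of the two new constants concrete, here is a
small standalone sketch (not part of the patch; the 64-byte request size
and 16-byte alignment are made-up examples) that mirrors the growth
calculation described in the commit message above:

/* Arithmetic sketch of the heap-growth adjustment described above.
 * The two constants are copied from the patch; the request size and
 * alignment are illustrative assumptions only. */
#include <stdio.h>

#define GRUB_MM_MAX_COST  0x1000    /* upper bound of region management cost */
#define GRUB_MM_HEAP_GROW 0x100000  /* minimal heap growth granularity */

int
main (void)
{
  size_t size = 64, align = 16;     /* a small grub_memalign () request */
  size_t guarantee = size + align;  /* enough for any placement of the block */
  size_t grow = guarantee > GRUB_MM_HEAP_GROW ? guarantee : GRUB_MM_HEAP_GROW;

  grow += GRUB_MM_MAX_COST;         /* cover the new region's own bookkeeping */
  printf ("bytes requested from grub_mm_add_region_fn: 0x%zx\n", grow);
  return 0;                         /* prints 0x101000 for this request */
}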

Signed-off-by: Zhang Boyang <zhangboyang.id@gmail.com>
---
  grub-core/kern/mm.c       | 18 +++++++++++++++---
  include/grub/mm_private.h | 12 ++++++++++++
  2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/grub-core/kern/mm.c b/grub-core/kern/mm.c
index ae2279133..fe630f061 100644
--- a/grub-core/kern/mm.c
+++ b/grub-core/kern/mm.c
@@ -410,7 +410,9 @@ grub_memalign (grub_size_t align, grub_size_t size)
  {
    grub_mm_region_t r;
    grub_size_t n = ((size + GRUB_MM_ALIGN - 1) >> GRUB_MM_ALIGN_LOG2) + 1;
+  grub_size_t guarantee;
    int count = 0;
+  grub_size_t grow;

    if (!grub_mm_base)
      goto fail;
@@ -418,10 +420,13 @@ grub_memalign (grub_size_t align, grub_size_t size)
    if (size > ~(grub_size_t) align)
      goto fail;

+  /* Allocation can always succeed if max contiguous free chunk >= guarantee. */
+  guarantee = size + align;
+
    /* We currently assume at least a 32-bit grub_size_t,
       so limiting allocations to <adress space size> - 1MiB
       in name of sanity is beneficial. */
-  if ((size + align) > ~(grub_size_t) 0x100000)
+  if (guarantee > ~(grub_size_t) 0x100000)
      goto fail;

    align = (align >> GRUB_MM_ALIGN_LOG2);
@@ -443,11 +448,18 @@ grub_memalign (grub_size_t align, grub_size_t size)
    switch (count)
      {
      case 0:
+      /* Calculate optimal size of heap growth. */
+      grow = grub_max(guarantee, GRUB_MM_HEAP_GROW);

Missing space before "(".

+
+      /* Adjust heap growth to address the cost of region management. */
+      if (grub_add(grow, GRUB_MM_MAX_COST, &grow))

Ditto...

+       goto fail;
+
        /* Request additional pages, contiguous */
        count++;

        if (grub_mm_add_region_fn != NULL &&
-          grub_mm_add_region_fn (size, GRUB_MM_ADD_REGION_CONSECUTIVE) == GRUB_ERR_NONE)
+          grub_mm_add_region_fn (grow, GRUB_MM_ADD_REGION_CONSECUTIVE) == GRUB_ERR_NONE)
        goto again;

        /* fallthrough  */
@@ -462,7 +474,7 @@ grub_memalign (grub_size_t align, grub_size_t size)
             * Try again even if this fails, in case it was able to partially
             * satisfy the request
             */
-          grub_mm_add_region_fn (size, GRUB_MM_ADD_REGION_NONE);
+          grub_mm_add_region_fn (grow, GRUB_MM_ADD_REGION_NONE);
            goto again;
          }

diff --git a/include/grub/mm_private.h b/include/grub/mm_private.h
index 96c2d816b..a01ba2507 100644
--- a/include/grub/mm_private.h
+++ b/include/grub/mm_private.h
@@ -95,6 +95,18 @@ typedef struct grub_mm_region
  }
  *grub_mm_region_t;

+/*
+ * Set an upper bound on the management cost of each region,
+ * with sizeof(struct grub_mm_region), sizeof(struct grub_mm_header) and
+ * any possible alignment or padding taken into account.

OK...

+ * The value should make `grub_mm_init_region (addr, size + GRUB_MM_MAX_COST)`
+ * initialize a region with at least `size` free bytes available.
+ */
+#define GRUB_MM_MAX_COST 0x1000

... but why do not you use sizeof() here then?

There are complex alignments which are hard to predict (although it's indeed possible). These alignments are determined not only by "size" but also by "addr". Calculating a tight upper bound would be complex and error-prone, so I think it's better to use a sufficiently large value and avoid those calculations.
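
As a rough illustration of the documented guarantee, here is a hypothetical
check (not part of the patch; the function name is made up, and it assumes a
fresh heap where `buf` spans at least size + GRUB_MM_MAX_COST bytes):

#include <grub/mm.h>          /* grub_malloc, grub_free, grub_mm_init_region */
#include <grub/misc.h>        /* grub_fatal */
#include <grub/mm_private.h>  /* GRUB_MM_MAX_COST */

/* Hypothetical sanity check: initializing a region of
 * size + GRUB_MM_MAX_COST bytes should leave enough room to satisfy an
 * allocation of `size` bytes, whatever the alignment of buf. Assumes a
 * fresh heap so the allocation cannot be served from another region. */
static void
check_mm_max_cost (void *buf, grub_size_t size)
{
  grub_mm_init_region (buf, size + GRUB_MM_MAX_COST);

  void *p = grub_malloc (size);
  if (p == NULL)
    grub_fatal ("GRUB_MM_MAX_COST too small for this addr/size");
  grub_free (p);
}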


+
+/* Minimal heap growth granularity, when existing heap space is exhausted. */
+#define GRUB_MM_HEAP_GROW 0x100000

It would be nice if you briefly explained in the comment above why you
came up with this value.


It was chosen almost arbitrarily. However, if this value is too small, small memory allocations become expensive (more calls to grub_mm_add_region_fn(), which might be slow). If it is too large, a lot of memory is wasted and it might cause out-of-memory errors on machines with a small amount of RAM.
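
As a rough back-of-the-envelope sketch of that trade-off (the 32-byte
per-allocation footprint below is an assumption, roughly a small payload
rounded to GRUB_MM_ALIGN plus one header cell on a typical build):

/* Back-of-the-envelope sketch of the GRUB_MM_HEAP_GROW trade-off.
 * The per-allocation footprint is an illustrative assumption, not a
 * value taken from GRUB. */
#include <stdio.h>

int
main (void)
{
  unsigned long heap_grow = 0x100000;  /* GRUB_MM_HEAP_GROW: 1 MiB per growth step */
  unsigned long per_alloc = 32;        /* assumed footprint of one small allocation */

  /* One growth step then serves roughly 32768 such allocations without
     another call to grub_mm_add_region_fn, while the worst-case waste
     stays bounded by about 1 MiB. */
  printf ("small allocations per growth step: %lu\n", heap_grow / per_alloc);
  return 0;
}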

Daniel

Best Regards,
Zhang Boyang


