bug-gnulib

Re: hash_rehash allocation failure


From: Eric Blake
Subject: Re: hash_rehash allocation failure
Date: Thu, 18 Jun 2009 23:22:49 +0000 (UTC)
User-agent: Loom/3.14 (http://gmane.org/)

Jim Meyering <jim <at> meyering.net> writes:

> > My first attempt used a full two-pass algorithm for the cleanup case.
> > With one pass, the allocations and frees are intermingled; with two
> > passes, all frees occur before any allocations.  However, I have been
> > unable (so far) to contrive any such hasher that would show a
> > difference in worst-case pressure with the intermingled operation.
> > So, for better cache locality, I'd rather avoid two passes in the
> > common case.  But we definitely want to code things to move the
> > overflow first and bucket head last within each bucket, as that is a
> > trivial rewrite with obvious benefits.
>
> I agree.  Two-pass sounds expensive.

> > I think I've convinced myself that recovery is always possible without
> > further allocation, although I'm still unsure on whether there is ever
> > anything where the cleanup would require a two-pass algorithm because
> > the one-pass approach has higher pressure.

Unfortunately, I've answered my own question, since test-hash.c gave me a
test case where the one-pass algorithm required more memory on recovery
than on the initial failure path.  That rand() stuff in test-hash.c sure
makes for some fun testing.  Thankfully, because we (intentionally) don't
pre-seed rand() in test-hash.c, I am getting reliably reproducible
behavior on my machine.  On the other hand, rand() probably behaves
differently on other systems, so I don't know how easy it would be for
others to reproduce my state.
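
As an aside, the reproducibility itself comes straight from ISO C: calling
rand() before any srand() behaves as if srand(1) had been called, so the
sequence is fixed for a given C library, even though different libraries
generate different sequences.  A tiny standalone illustration (not part of
the patch):

  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    /* No srand() call, just as in test-hash.c: every run against the
       same C library prints the same three numbers, while another
       libc may well print something different.  */
    int i;
    for (i = 0; i < 3; i++)
      printf ("%d\n", rand ());
    return 0;
  }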

> > Do you want me to resurrect the bits of my first patch, and submit something
> > that recovers from failure rather than preventing it?
> 
> Sure!  Thanks.

This patch gets things to the state where I'm happy that the memory
allocation failure recovery works correctly.  But as-is, this patch uses a
one-pass algorithm, so my test case means the failure recovery can require
more memory than the condition that put us into the failure situation in
the first place.  I have not yet had time to code up a two-pass algorithm,
although I claim that it should eliminate the extra memory pressure.
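
To make the intended contract concrete, here is a hypothetical caller-side
sketch (the helper name is made up, not part of the patch): on allocation
failure, hash_rehash now returns false and, modulo the recovery-pressure
caveat just mentioned, leaves the table consistent and still holding every
entry, so a caller that was only resizing opportunistically can carry on.

  #include <stddef.h>
  #include "hash.h"

  /* Hypothetical caller, illustrating only the failure contract.  */
  static void
  shrink_if_possible (Hash_table *table, size_t candidate)
  {
    if (!hash_rehash (table, candidate))
      {
        /* The partial transfer has been undone: TABLE is still valid
           and still contains all of its entries, so failing to shrink
           costs us nothing but the memory we could not save.  */
      }
    /* On success, TABLE now uses a bucket count derived from CANDIDATE.  */
  }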


From: Eric Blake <address@hidden>
Date: Mon, 8 Jun 2009 05:56:37 -0600
Subject: [PATCH] hash: avoid memory leak on allocation failure

* lib/hash.c: Document USE_OBSTACK memory management flaws.
(hash_rehash): Avoid memory leak on allocation failure.  Factor
repeated algorithm...
(transfer_entries): ...into new helper routine.
(hash_delete): React to hash_rehash return value.

Signed-off-by: Eric Blake <address@hidden>
---
 ChangeLog  |    7 ++
 lib/hash.c |  218 ++++++++++++++++++++++++++++++++++++++++++------------------
 2 files changed, 161 insertions(+), 64 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index 70de83e..41bdee4 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,12 @@
 2009-06-18  Eric Blake  <address@hidden>

+       hash: avoid memory leak on allocation failure
+       * lib/hash.c: Document USE_OBSTACK memory management flaws.
+       (hash_rehash): Avoid memory leak on allocation failure.  Factor
+       repeated algorithm...
+       (transfer_entries): ...into new helper routine.
+       (hash_delete): React to hash_rehash return value.
+
        hash: make rotation more obvious
        * modules/hash (Depends-on): Add bitrotate and stdint.
        * lib/bitrotate.h (rotl_sz, rotr_sz): New functions.
diff --git a/lib/hash.c b/lib/hash.c
index a3cb07d..6443415 100644
--- a/lib/hash.c
+++ b/lib/hash.c
@@ -21,7 +21,8 @@
 /* A generic hash table package.  */

 /* Define USE_OBSTACK to 1 if you want the allocator to use obstacks instead
-   of malloc.  If you change USE_OBSTACK, you have to recompile!  */
+   of malloc.  If you change USE_OBSTACK, you have to recompile!  Also,
+   memory allocation failures in the obstack are not reliably tracked.  */

 #include <config.h>

@@ -836,6 +837,89 @@ hash_find_entry (Hash_table *table, const void *entry,
   return NULL;
 }

+/* Internal helper, to move entries from SRC to DST.  Both tables must
+   share the same free entry list.  Return false if the free entry
+   list is exhausted and an allocation fails.  */
+
+static bool
+transfer_entries (Hash_table *src, Hash_table *dst)
+{
+  struct hash_entry *bucket;
+  struct hash_entry *cursor;
+  struct hash_entry *next;
+  for (bucket = src->bucket; bucket < src->bucket_limit; bucket++)
+    if (bucket->data)
+      {
+       void *data;
+       struct hash_entry *new_bucket;
+
+       /* Within each bucket, transfer overflow entries first and
+          then the bucket head, to minimize memory pressure.  After
+          all, the only time we might allocate is when moving the
+          bucket head, but moving overflow entries first may create
+          free entries that can be recycled by the time we finally
+          get to the bucket head.  */
+       for (cursor = bucket->next; cursor; cursor = next)
+         {
+           data = cursor->data;
+           new_bucket = (dst->bucket + dst->hasher (data, dst->n_buckets));
+
+           if (! (new_bucket < dst->bucket_limit))
+             abort ();
+
+           next = cursor->next;
+
+           if (new_bucket->data)
+             {
+               /* Merely relink an existing entry, when moving from a
+                  bucket overflow into a bucket overflow.  */
+               cursor->next = new_bucket->next;
+               new_bucket->next = cursor;
+             }
+           else
+             {
+               /* Free an existing entry, when moving from a bucket
+                  overflow into a bucket header.  */
+               new_bucket->data = data;
+               dst->n_buckets_used++;
+               free_entry (dst, cursor);
+             }
+         }
+       /* Now move the bucket head.  Be sure that if we fail due to
+           allocation failure that the src table is in a consistent
+           state.  */
+       data = bucket->data;
+       bucket->next = NULL;
+       new_bucket = (dst->bucket + dst->hasher (data, dst->n_buckets));
+
+       if (! (new_bucket < dst->bucket_limit))
+         abort ();
+
+       if (new_bucket->data)
+         {
+           /* Allocate or recycle an entry, when moving from a bucket
+              header into a bucket overflow.  */
+           struct hash_entry *new_entry = allocate_entry (dst);
+
+           if (new_entry == NULL)
+             return false;
+
+           new_entry->data = data;
+           new_entry->next = new_bucket->next;
+           new_bucket->next = new_entry;
+         }
+       else
+         {
+           /* Move from one bucket header to another.  */
+           new_bucket->data = data;
+           dst->n_buckets_used++;
+         }
+       bucket->data = NULL;
+       src->n_buckets_used--;
+      }
+  return true;
+}
+
 /* For an already existing hash table, change the number of buckets through
    specifying CANDIDATE.  The contents of the hash table are preserved.  The
    new number of buckets is automatically selected so as to _guarantee_ that
@@ -848,9 +932,6 @@ bool
 hash_rehash (Hash_table *table, size_t candidate)
 {
   Hash_table *new_table;
-  struct hash_entry *bucket;
-  struct hash_entry *cursor;
-  struct hash_entry *next;

   new_table = hash_initialize (candidate, table->tuning, table->hasher,
                               table->comparator, table->data_freer);
@@ -862,6 +943,20 @@ hash_rehash (Hash_table *table, size_t candidate)
       return true;
     }

+  /* In order for the transfer to successfully complete, we need
+     additional overflow entries when distinct buckets in the old
+     table collide into a common bucket in the new table.  The worst
+     case possible is a hasher that gives a good spread with the old
+     size, but returns a constant with the new size; if we were to
+     guarantee table->n_buckets_used-1 free entries in advance, then
+     the transfer would be guaranteed to not allocate memory.
+     However, for large tables, a guarantee of no further allocation
+     introduces a lot of extra memory pressure, all for an unlikely
+     corner case (most rehashes reduce, rather than increase, the
+     number of overflow entries needed).  So, we instead ensure that
+     the transfer process can be reversed if we hit a memory
+     allocation failure mid-transfer.  */
+
   /* Merely reuse the extra old space into the new table.  */
 #if USE_OBSTACK
   obstack_free (&new_table->entry_stack, NULL);
@@ -869,69 +964,46 @@ hash_rehash (Hash_table *table, size_t candidate)
 #endif
   new_table->free_entry_list = table->free_entry_list;

-  for (bucket = table->bucket; bucket < table->bucket_limit; bucket++)
-    if (bucket->data)
-      for (cursor = bucket; cursor; cursor = next)
-       {
-         void *data = cursor->data;
-         struct hash_entry *new_bucket
-           = (new_table->bucket
-              + new_table->hasher (data, new_table->n_buckets));
-
-         if (! (new_bucket < new_table->bucket_limit))
-           abort ();
-
-         next = cursor->next;
-
-         if (new_bucket->data)
-           {
-             if (cursor == bucket)
-               {
-                 /* Allocate or recycle an entry, when moving from a bucket
-                    header into a bucket overflow.  */
-                 struct hash_entry *new_entry = allocate_entry (new_table);
-
-                 if (new_entry == NULL)
-                   return false;
+  if (transfer_entries (table, new_table))
+    {
+      /* Entries transferred successfully; tie up the loose ends.  */
+      free (table->bucket);
+      table->bucket = new_table->bucket;
+      table->bucket_limit = new_table->bucket_limit;
+      table->n_buckets = new_table->n_buckets;
+      table->n_buckets_used = new_table->n_buckets_used;
+      table->free_entry_list = new_table->free_entry_list;
+      /* table->n_entries already holds its value.  */
+#if USE_OBSTACK
+      table->entry_stack = new_table->entry_stack;
+#endif
+      free (new_table);

-                 new_entry->data = data;
-                 new_entry->next = new_bucket->next;
-                 new_bucket->next = new_entry;
-               }
-             else
-               {
-                 /* Merely relink an existing entry, when moving from a
-                    bucket overflow into a bucket overflow.  */
-                 cursor->next = new_bucket->next;
-                 new_bucket->next = cursor;
-               }
-           }
-         else
-           {
-             /* Free an existing entry, when moving from a bucket
-                overflow into a bucket header.  Also take care of the
-                simple case of moving from a bucket header into a bucket
-                header.  */
-             new_bucket->data = data;
-             new_table->n_buckets_used++;
-             if (cursor != bucket)
-               free_entry (new_table, cursor);
-           }
-       }
+      return true;
+    }

-  free (table->bucket);
-  table->bucket = new_table->bucket;
-  table->bucket_limit = new_table->bucket_limit;
-  table->n_buckets = new_table->n_buckets;
-  table->n_buckets_used = new_table->n_buckets_used;
+  /* We've allocated new_table, and possibly moved some entries, but
+     could not move the remaining entries.  We must undo the partial
+     move before returning failure.  The only way to get into this
+     situation is if new_table uses fewer buckets than the old table,
+     so we will reclaim some free entries as overflows in the new
+     table are put back into distinct buckets in the old table.
+
+     FIXME - The restoration pass visits elements in a different order
+     than the original transfer pass.  Rewriting transfer_entries to
+     perform two passes (the first pass moves only overflow entries,
+     and the second pass moves bucket heads) is provably safe, but it
+     is inefficient in cache usage.  It would be nice to have a proof
+     that intermixing allocations and frees, as in the current
+     implementation of transfer_entries will never have worse memory
+     pressure than what we started with.  */
   table->free_entry_list = new_table->free_entry_list;
+  if (! transfer_entries (new_table, table))
+    abort ();
   /* table->n_entries already holds its value.  */
-#if USE_OBSTACK
-  table->entry_stack = new_table->entry_stack;
-#endif
+  free (new_table->bucket);
   free (new_table);
-
-  return true;
+  return false;
 }

 /* If ENTRY matches an entry already in the hash table, return the pointer
@@ -1053,7 +1125,25 @@ hash_delete (Hash_table *table, const void *entry)
                 : (table->n_buckets * tuning->shrink_factor
                    * tuning->growth_threshold));

-             hash_rehash (table, candidate);
+             if (!hash_rehash (table, candidate))
+               {
+                 /* Failure to allocate memory in an attempt to
+                    shrink the table is not fatal.  But since memory
+                    is low, we can at least be kind and free any
+                    spare entries, rather than keeping them tied up
+                    in the free entry list.  */
+#if ! USE_OBSTACK
+                 struct hash_entry *cursor = table->free_entry_list;
+                 struct hash_entry *next;
+                 while (cursor)
+                   {
+                     next = cursor->next;
+                     free (cursor);
+                     cursor = next;
+                   }
+                 table->free_entry_list = NULL;
+#endif
+               }
            }
        }
     }
@@ -1077,7 +1167,7 @@ hash_print (const Hash_table *table)
       if (bucket)
        printf ("%lu:\n", (unsigned long int) (bucket - table->bucket));

-      for (cursor = bucket; cursor; cursor = cursor->next)
+      for (cursor = (struct hash_entry *) bucket; cursor; cursor = cursor->next)
        {
          char const *s = cursor->data;
          /* FIXME */
-- 
1.6.3.2





And here's the diff I was using to simulate memory failures:

diff --git i/lib/hash.c w/lib/hash.c
index 6443415..54cafb1 100644
--- i/lib/hash.c
+++ w/lib/hash.c
@@ -727,6 +727,11 @@ hash_free (Hash_table *table)

 /* Insertion and deletion.  */

+#if TESTING
+/* Change this variable in a debugger to simulate memory pressure.  */
+static bool memory_full;
+#endif
+
 /* Get a new hash entry for a bucket overflow, possibly by recycling a
    previously freed one.  If this is not possible, allocate a new one.  */

@@ -742,6 +747,10 @@ allocate_entry (Hash_table *table)
     }
   else
     {
+#if TESTING
+      if (memory_full)
+       return NULL;
+#endif
 #if USE_OBSTACK
       new = obstack_alloc (&table->entry_stack, sizeof *new);
 #else
@@ -943,6 +952,11 @@ hash_rehash (Hash_table *table, size_t candidate)
       return true;
     }

+#if TESTING
+  /* Fun with memory allocation!  */
+  memory_full = true;
+#endif
+
   /* In order for the transfer to successfully complete, we need
      additional overflow entries when distinct buckets in the old
      table collide into a common bucket in the new table.  The worst
@@ -979,6 +993,10 @@ hash_rehash (Hash_table *table, size_t candidate)
 #endif
       free (new_table);

+#if TESTING
+  /* Fun with memory allocation!  */
+  memory_full = false;
+#endif
       return true;
     }

@@ -1003,6 +1021,10 @@ hash_rehash (Hash_table *table, size_t candidate)
   /* table->n_entries already holds its value.  */
   free (new_table->bucket);
   free (new_table);
+#if TESTING
+  /* Fun with memory allocation!  */
+  memory_full = false;
+#endif
   return false;
 }

diff --git i/tests/test-hash.c w/tests/test-hash.c
index 6bb9652..e4143e7 100644
--- i/tests/test-hash.c
+++ w/tests/test-hash.c
@@ -173,7 +173,9 @@ main (void)
        case 6:
          {
            size_t n = hash_get_n_entries (ht);
-           ASSERT (hash_rehash (ht, n + rand () % 20));
+            if (i == 363)
+               i = i * (i - 362);
+            if (hash_rehash (ht, n + rand () % 20));
          }
          break;

@@ -182,7 +184,7 @@ main (void)
            size_t n = hash_get_n_entries (ht);
            size_t delta = rand () % 20;
            if (delta < n)
-             ASSERT (hash_rehash (ht, n - delta));
+              if (hash_rehash (ht, n - delta));
          }
          break;



With my memory cap in place, the initial transfer of iteration 363 fails, and 
at the start of the attempted recovery transfer, I see the following (at least, 
for my system's implementation of rand()):

(gdb) p *table
$1 = {bucket = 0x6b7940, bucket_limit = 0x6b7f38, n_buckets = 191, 
  n_buckets_used = 26, n_entries = 157, tuning = 0x405098, 
  hasher = 0x4037fc <hash_pjw>, comparator = 0x401040 <hash_compare_strings>, 
  data_freer = 0x4010b2 <hash_freer>, free_entry_list = 0x6b58c0}
(gdb) p *new_table
$2 = {bucket = 0x6b8750, bucket_limit = 0x6b8de8, n_buckets = 211, 
  n_buckets_used = 94, n_entries = 0, tuning = 0x405098, 
  hasher = 0x4037fc <hash_pjw>, comparator = 0x401040 <hash_compare_strings>, 
  data_freer = 0x4010b2 <hash_freer>, free_entry_list = 0x0}
(gdb) p hash_print(table)
0:
1:
2:
3:
4:
5:
6:
7:
8:
9:
10:
11:
12:
13:
14:
15:
16:
17:
18:
19:
20:
21:
22:
23:
24:
25:
26:
27:
28:
29:
30:
31:
32:
33:
34:
35:
36:
37:
38:
39:
40:
41:
42:
43:
44:
45:
46:
47:
48:
49:
50:
51:
52:
53:
54:
55:
56:
57:
58:
59:
60:
61:
62:
63:
64:
65:
66:
67:
68:
69:
70:
71:
72:
73:
74:
75:
76:
77:
78:
79:
80:
81:
82:
83:
84:
85:
86:
87:
88:
89:
90:
91:
92:
93:
94:
95:
96:
97:
98:
99:
100:
101:
102:
103:
104:
105:
106:
107:
108:
109:
110:
111:
112:
113:
114:
115:
116:
117:
118:
119:
120:
121:
122:
123:
124:
125:
126:
127:
128:
129:
130:
131:
132:
133:
134:
135:
136:
137:
138:
139:
140:
141:
142:
143:
144:
145:
146:
147:
148:
149:
150:
151:
152:
153:
154:
155:
  278
156:
157:
158:
159:
160:
161:
162:
  320
163:
  321
164:
  322
165:
166:
  324
167:
168:
  326
169:
  327
170:
  121
  350
171:
  122
  351
  329
172:
  352
173:
  353
174:
  354
175:
  355
176:
  356
177:
  150
178:
  358
179:
  359
180:
  153
181:
  154
182:
183:
  156
184:
  30
185:
  158
  31
186:
  159
187:
  182
188:
  183
  34
189:
190:
  36
$3 = void
(gdb) p hash_print(new_table)
0:
  223
1:
  297
  224
2:
3:
  37
  226
4:
5:
  228
6:
  131
7:
  132
8:
  133
9:
10:
  135
11:
12:
13:
  138
14:
15:
16:
17:
  360
18:
  361
19:
  362
20:
21:
22:
23:
24:
25:
  270
26:
  82
  271
27:
  272
28:
  200
  273
29:
  12
  274
30:
  275
31:
  14
  276
32:
  277
33:
  205
  16
34:
  206
35:
36:
  208
37:
38:
39:
  113
40:
  114
41:
  188
  115
42:
  116
43:
44:
45:
46:
47:
48:
  340
49:
  341
50:
51:
52:
  344
53:
54:
  346
55:
56:
  348
57:
  62
  251
58:
  252
59:
  253
60:
  65
  254
61:
  66
  255
62:
  256
63:
64:
65:
  161
66:
67:
68:
  164
69:
70:
  166
71:
72:
  168
73:
74:
75:
76:
77:
78:
79:
80:
81:
82:
83:
84:
85:
86:
  40
87:
  230
88:
  231
89:
90:
91:
92:
93:
  236
  47
94:
95:
96:
97:
  142
98:
  143
99:
  144
100:
101:
  146
102:
103:
104:
105:
106:
107:
108:
109:
110:
111:
112:
  302
113:
114:
  304
115:
  91
116:
  92
  281
117:
  93
  307
  282
118:
  94
  210
  308
  283
119:
  95
  211
120:
  23
  212
121:
122:
  25
123:
124:
  289
125:
126:
  193
  218
127:
  194
128:
  195
129:
130:
  197
131:
132:
  199
133:
134:
135:
136:
137:
138:
139:
140:
141:
142:
143:
144:
145:
146:
  260
147:
148:
149:
150:
151:
  265
  76
152:
  266
  77
153:
  267
  78
154:
  79
155:
  171
156:
  172
157:
158:
  174
159:
  102
160:
161:
  104
162:
  105
163:
164:
  107
165:
166:
167:
168:
169:
170:
171:
172:
  333
173:
  334
174:
175:
176:
  50
  337
177:
  51
178:
  52
  241
  339
179:
180:
181:
  55
182:
  245
183:
184:
  58
185:
  248
186:
  249
187:
188:
189:
190:
191:
192:
193:
194:
195:
196:
197:
198:
199:
200:
201:
  311
202:
203:
  313
204:
205:
206:
  316
207:
  292
  317
208:
  293
  220
  318
209:
  319
  221
210:
  295
  222
$4 = void
(gdb) 

And the one-pass recovery attempt fails at:

(gdb) p *table
$6 = {bucket = 0x6b7940, bucket_limit = 0x6b7f38, n_buckets = 191, 
  n_buckets_used = 37, n_entries = 157, tuning = 0x405098, 
  hasher = 0x4037fc <hash_pjw>, comparator = 0x401040 <hash_compare_strings>, 
  data_freer = 0x4010b2 <hash_freer>, free_entry_list = 0x0}
(gdb) p *new_table
$5 = {bucket = 0x6b8750, bucket_limit = 0x6b8de8, n_buckets = 211, 
  n_buckets_used = 83, n_entries = 0, tuning = 0x405098, 
  hasher = 0x4037fc <hash_pjw>, comparator = 0x401040 <hash_compare_strings>, 
  data_freer = 0x4010b2 <hash_freer>, free_entry_list = 0x0}

(gdb) p hash_print(table)
0:
  37
1:
2:
3:
4:
5:
6:
7:
8:
9:
10:
11:
12:
13:
14:
15:
16:
17:
18:
19:
20:
21:
22:
23:
24:
25:
26:
27:
28:
29:
30:
31:
32:
  297
33:
34:
35:
36:
37:
38:
39:
40:
41:
42:
43:
44:
45:
46:
47:
48:
49:
50:
51:
52:
53:
54:
55:
56:
57:
58:
59:
60:
61:
62:
63:
64:
65:
66:
67:
68:
69:
70:
71:
72:
73:
  223
74:
  224
75:
76:
  226
77:
78:
  228
79:
80:
81:
82:
83:
84:
85:
86:
87:
88:
89:
90:
91:
92:
93:
94:
95:
96:
97:
98:
99:
100:
101:
102:
103:
104:
105:
106:
107:
108:
109:
  131
  360
110:
  132
  361
111:
  133
112:
113:
  135
114:
115:
116:
  138
117:
118:
119:
120:
121:
122:
123:
124:
125:
126:
127:
128:
129:
130:
131:
132:
133:
134:
135:
136:
137:
138:
139:
140:
141:
142:
143:
144:
145:
146:
147:
148:
149:
150:
151:
152:
153:
154:
155:
  278
156:
157:
158:
159:
160:
161:
162:
  320
163:
  321
164:
  322
165:
166:
  324
167:
168:
  326
169:
  327
170:
  121
  350
171:
  122
  351
  329
172:
  352
173:
  353
174:
  354
175:
  355
176:
  356
177:
  150
178:
  358
179:
  359
180:
  153
181:
  154
182:
183:
  156
184:
  30
185:
  158
  31
186:
  159
187:
  182
188:
  183
  34
189:
190:
  36
$7 = void
(gdb) p hash_print(new_table)
0:
1:
2:
3:
4:
5:
6:
7:
8:
9:
10:
11:
12:
13:
14:
15:
16:
17:
18:
19:
  362
20:
21:
22:
23:
24:
25:
  270
26:
  82
  271
27:
  272
28:
  200
  273
29:
  12
  274
30:
  275
31:
  14
  276
32:
  277
33:
  205
  16
34:
  206
35:
36:
  208
37:
38:
39:
  113
40:
  114
41:
  188
  115
42:
  116
43:
44:
45:
46:
47:
48:
  340
49:
  341
50:
51:
52:
  344
53:
54:
  346
55:
56:
  348
57:
  62
  251
58:
  252
59:
  253
60:
  65
  254
61:
  66
  255
62:
  256
63:
64:
65:
  161
66:
67:
68:
  164
69:
70:
  166
71:
72:
  168
73:
74:
75:
76:
77:
78:
79:
80:
81:
82:
83:
84:
85:
86:
  40
87:
  230
88:
  231
89:
90:
91:
92:
93:
  236
  47
94:
95:
96:
97:
  142
98:
  143
99:
  144
100:
101:
  146
102:
103:
104:
105:
106:
107:
108:
109:
110:
111:
112:
  302
113:
114:
  304
115:
  91
116:
  92
  281
117:
  93
  307
  282
118:
  94
  210
  308
  283
119:
  95
  211
120:
  23
  212
121:
122:
  25
123:
124:
  289
125:
126:
  193
  218
127:
  194
128:
  195
129:
130:
  197
131:
132:
  199
133:
134:
135:
136:
137:
138:
139:
140:
141:
142:
143:
144:
145:
146:
  260
147:
148:
149:
150:
151:
  265
  76
152:
  266
  77
153:
  267
  78
154:
  79
155:
  171
156:
  172
157:
158:
  174
159:
  102
160:
161:
  104
162:
  105
163:
164:
  107
165:
166:
167:
168:
169:
170:
171:
172:
  333
173:
  334
174:
175:
176:
  50
  337
177:
  51
178:
  52
  241
  339
179:
180:
181:
  55
182:
  245
183:
184:
  58
185:
  248
186:
  249
187:
188:
189:
190:
191:
192:
193:
194:
195:
196:
197:
198:
199:
200:
201:
  311
202:
203:
  313
204:
205:
206:
  316
207:
  292
  317
208:
  293
  220
  318
209:
  319
  221
210:
  295
  222
$8 = void
(gdb) 



Fortunately, it should be pretty easy to rewrite transfer_entries into a
two-pass algorithm; the question is whether we want to do that, or whether
we want to risk the additional memory pressure, or whether we want a
one-pass initial transfer but a two-pass recovery.  I guess I'll start
playing with my options, maybe even by passing a bool parameter to
transfer_entries, and see if my claim that a two-pass recovery will always
work is still valid.
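
Here is a rough, untested sketch of what I mean, reusing the helpers
already in lib/hash.c; the parameter name and exact split are provisional.
When SAFE is true, only overflow entries are moved, and those only ever
relink or call free_entry, never allocate_entry, so a safe pass cannot
fail; the bucket heads are left in place for a later unsafe pass.

  static bool
  transfer_entries (Hash_table *src, Hash_table *dst, bool safe)
  {
    struct hash_entry *bucket;
    struct hash_entry *cursor;
    struct hash_entry *next;
    for (bucket = src->bucket; bucket < src->bucket_limit; bucket++)
      if (bucket->data)
        {
          void *data;
          struct hash_entry *new_bucket;

          /* Overflow entries can be moved whether or not SAFE, since
             they never require an allocation.  */
          for (cursor = bucket->next; cursor; cursor = next)
            {
              data = cursor->data;
              new_bucket = dst->bucket + dst->hasher (data, dst->n_buckets);
              next = cursor->next;
              if (new_bucket->data)
                {
                  /* Relink into the destination overflow list.  */
                  cursor->next = new_bucket->next;
                  new_bucket->next = cursor;
                }
              else
                {
                  /* The entry becomes a bucket head; recycle CURSOR.  */
                  new_bucket->data = data;
                  dst->n_buckets_used++;
                  free_entry (dst, cursor);
                }
            }
          bucket->next = NULL;
          if (safe)
            continue;       /* Leave the bucket head for a later pass.  */

          /* Moving the bucket head is the only step that can allocate,
             hence the only step that can fail.  */
          data = bucket->data;
          new_bucket = dst->bucket + dst->hasher (data, dst->n_buckets);
          if (new_bucket->data)
            {
              struct hash_entry *new_entry = allocate_entry (dst);
              if (new_entry == NULL)
                return false;
              new_entry->data = data;
              new_entry->next = new_bucket->next;
              new_bucket->next = new_entry;
            }
          else
            {
              new_bucket->data = data;
              dst->n_buckets_used++;
            }
          bucket->data = NULL;
          src->n_buckets_used--;
        }
    return true;
  }

The initial transfer keeps the current one-pass behavior by simply passing
false.  The recovery path would then become two calls, transfer_entries
(new_table, table, true) followed by transfer_entries (new_table, table,
false), so that every entry that can be freed is back on the shared free
list before any bucket head could possibly need allocate_entry; that is
the property I still need to convince myself always holds.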

-- 
Eric Blake






