


From: Daniel Colascione
Subject: bug#30626: 26.0.91; Crash when traversing a `stream-of-directory-files'
Date: Thu, 1 Mar 2018 02:44:54 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

On 02/27/2018 10:08 AM, Eli Zaretskii wrote:
>> From: Michael Heerdegen <michael_heerdegen@web.de>
>> Cc: bug-gnu-emacs@gnu.org, 30626@debbugs.gnu.org
>> Date: Tue, 27 Feb 2018 13:08:59 +0100
>>
>> #+begin_src emacs-lisp
>> (seq-doseq (_ (stream-range 1 1000000)) nil)
>> #+end_src
>>
>> Note that this is executed as a loop due to how streams are implemented,
>> although the definition of `seq-doseq' looks recursive.  But it seems
>> that GC has a problem with the large number of conses created when
>> processing that.
>
> What can we do instead in such cases?  Stack-overflow protection
> cannot work in GC, so you are shooting yourself in the foot by
> creating such large recursive structures.  By the time we get to GC,
> where the problem will happen, it's too late, because the memory was
> already allocated.
>
> Does anyone have a reasonable idea for avoiding the crash in such
> programs?

We need to fix GC being deeply recursive once and for all. Tweaking stack sizes on various platforms and trying to spot-fix GC for the occasional deeply recursive structure is annoying. Here's my proposal:

Turn garbage_collect_1 into a queue-draining loop, initializing the object queue with the GC roots before draining it. We'll make mark_object put an object on this queue, turning the existing mark_object code into a mark_queued_object function.

garbage_collect_1 will just call mark_queued_object in a loop; mark_queued_object can call mark_object, but since mark_object just enqueues an object and doesn't recurse, we can't exhaust the stack with deep object graphs. (We'll repurpose the mark bit to mean that the object is on the to-mark queue; by the time we fully drain the queue, just before we sweep, the mark bit will have the same meaning it does now.)
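
Roughly, the restructured marking core would look like this. This is a sketch only, not a patch: gc_queue_push, gc_queue_pop, mark_roots, and the predicate/flag macros are placeholder names, not existing alloc.c functions.

#+begin_src c
/* Sketch with placeholder names throughout.  */

void
mark_object (Lisp_Object obj)
{
  /* Self-representing objects (fixnums etc.) and objects already on
     the queue need no work; the mark bit now means "queued".  */
  if (OBJECT_SELF_REPRESENTING_P (obj) || OBJECT_QUEUED_P (obj))
    return;
  SET_OBJECT_QUEUED (obj);
  gc_queue_push (obj);          /* constant stack usage, no recursion */
}

static void
garbage_collect_1 (void)
{
  mark_roots ();                /* seeds the queue via mark_object */

  Lisp_Object obj;
  while (gc_queue_pop (&obj))
    /* mark_queued_object is the body of today's mark_object: it visits
       OBJ's children by calling mark_object, which only enqueues them,
       so arbitrarily deep object graphs cannot blow the C stack.  */
    mark_queued_object (obj);

  /* Queue drained: every reachable object is marked; sweep as usual.  */
  sweep ();
}
#+end_src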

We can't allocate memory to hold the queue during GC, so we'll have to pre-allocate it. We can implement the queue as a list of queue blocks, where each queue block is an array of 16k or so Lisp_Objects. During allocation, we'll just make sure we have one Lisp_Object queue-block slot for every non-self-representing Lisp object we allocate.
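
Concretely, the reserve could look something like this (again a sketch with made-up names and sizes; the real bookkeeping would live in alloc.c):

#+begin_src c
/* Sketch: a pool of pre-allocated queue blocks, grown only during
   ordinary allocation, never during GC.  */

enum { GC_QUEUE_BLOCK_NOBJS = 16 * 1024 };

struct gc_queue_block
{
  struct gc_queue_block *next;
  ptrdiff_t nused;
  Lisp_Object objs[GC_QUEUE_BLOCK_NOBJS];
};

static struct gc_queue_block *gc_queue_free_list;
static ptrdiff_t gc_queue_slots_free;    /* free slots on the free list */
static ptrdiff_t gc_markable_objects;    /* live non-self-representing objects */

/* Called from the allocation paths: maintain the invariant
   gc_queue_slots_free >= gc_markable_objects, so that in the worst case
   the queue can hold every live object during GC.  (Sweep would
   decrement gc_markable_objects as it frees objects, and could hand
   surplus blocks back to malloc.)  */
static void
gc_reserve_queue_slot (void)
{
  gc_markable_objects++;
  if (gc_queue_slots_free < gc_markable_objects)
    {
      struct gc_queue_block *b = xmalloc (sizeof *b);
      b->nused = 0;
      b->next = gc_queue_free_list;
      gc_queue_free_list = b;
      gc_queue_slots_free += GC_QUEUE_BLOCK_NOBJS;
    }
}
#+end_src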

Since we know that we'll have enough queue blocks for the worst GC case, we can have mark_object pull queue blocks from a free list, aborting if for some reason it ever runs out of queue blocks. (The previous paragraph guarantees we won't.) garbage_collect_1 will churn through these queue blocks and place each back on the free list after it has called mark_queued_object on every Lisp_Object in the block.
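
During GC itself, pushing and popping then just moves blocks between the free list and the active queue, along these lines (same placeholder names as above; this version drains in LIFO order, which is fine because marking order doesn't affect correctness):

#+begin_src c
/* Sketch: the queue proper, built from the reserved blocks.  */

static struct gc_queue_block *gc_queue_head;   /* blocks being filled/drained */

static void
gc_queue_push (Lisp_Object obj)
{
  struct gc_queue_block *b = gc_queue_head;
  if (b == NULL || b->nused == GC_QUEUE_BLOCK_NOBJS)
    {
      /* Current block is full: pull a fresh one from the free list.  */
      b = gc_queue_free_list;
      if (b == NULL)
        emacs_abort ();   /* can't happen: a slot was reserved per object */
      gc_queue_free_list = b->next;
      b->next = gc_queue_head;
      b->nused = 0;
      gc_queue_head = b;
    }
  b->objs[b->nused++] = obj;
}

static bool
gc_queue_pop (Lisp_Object *out)
{
  struct gc_queue_block *b = gc_queue_head;
  if (b == NULL)
    return false;
  *out = b->objs[--b->nused];
  if (b->nused == 0)
    {
      /* Block fully drained: recycle it, so non-pathological GCs keep
         reusing the same few blocks.  */
      gc_queue_head = b->next;
      b->next = gc_queue_free_list;
      gc_queue_free_list = b;
    }
  return true;
}
#+end_src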

In this way, in non-pathological cases of GC, we'll end up using the same few queue blocks over and over. That's a nice optimization, because we can MADV_DONTNEED unused queue blocks so the OS doesn't actually have to remember their contents.
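
The MADV_DONTNEED part could be as simple as the following after a collection finishes (a hypothetical helper, reusing the block definition from the sketch above; madvise wants page-aligned ranges, so only the whole pages inside a block are released):

#+begin_src c
#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>

/* Sketch: after GC, let the OS drop the pages backing an idle free-list
   block; its contents are dead and will be rewritten before reuse.  */
static void
gc_queue_release_pages (struct gc_queue_block *b)
{
  uintptr_t pagesz = sysconf (_SC_PAGESIZE);
  uintptr_t start = ((uintptr_t) b->objs + pagesz - 1) & ~(pagesz - 1);
  uintptr_t end = ((uintptr_t) (b->objs + GC_QUEUE_BLOCK_NOBJS)) & ~(pagesz - 1);
  if (end > start)
    madvise ((void *) start, end - start, MADV_DONTNEED);
}
#+end_src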

With this scheme, I think we can make the current GC model recursion-proof without drastically changing how we allocate Lisp objects. The additional memory requirements should be modest: it's basically one Lisp_Object per Lisp object allocated.

The naive version of this scheme needs about 4.6MB of overhead on my current 20MB Emacs heap, but it should be possible to reduce the overhead significantly by taking advantage of the block allocation we do for conses and other types --- we can put whole blocks on the queue instead of pointers to individual block parts, so we can get away with a much smaller queue. Under this approach, the reserved-queue-block scheme would impose an overhead of somewhere around 1MB on the same heap. This amount of overhead seems reasonable. We may end up actually using less memory than we would for recursive mark_object stack invocations.
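
For the record, the naive 4.6MB figure is just one queue slot per live object, assuming an 8-byte Lisp_Object on a 64-bit build without wide ints:

  4.6 MB / 8 bytes per slot  =  roughly 600k queueable objects,
  i.e. an average of about 35 bytes per object in a 20 MB heap.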





