Re: Unreclaimed swap space upon process termination?


From: Thomas Schwinge
Subject: Re: Unreclaimed swap space upon process termination?
Date: Mon, 28 Nov 2016 17:10:26 +0100
User-agent: Notmuch/0.9-125-g4686d11 (http://notmuchmail.org) Emacs/24.5.1 (x86_64-pc-linux-gnu)

Hi!

On Mon, 28 Nov 2016 16:03:44 +0100, I wrote:
> Updating a Debian GNU/Hurd virtual machine to recent packages after many
> months, and then running the GCC testsuite, I observe the following
> behavior, which should be reproducible with the executable in the
> attached tarball:
> 
>     $ vmstat | grep swap\ free
>     swap free:         4096M
>     $ ./1.exe 
>     $ vmstat | grep swap\ free
>     swap free:         3288M
>     $ ./1.exe 
>     $ vmstat | grep swap\ free
>     swap free:         2495M
>     $ ./1.exe 
>     $ vmstat | grep swap\ free
>     swap free:         1726M
>     $ ./1.exe 
>     $ vmstat | grep swap\ free
>     swap free:          931M
>     $ ./1.exe 
>     $ vmstat | grep swap\ free
>     swap free:          164M
>     $ ./1.exe 
>     Bus error
>     $ vmstat | grep swap\ free
>     swap free:            0 
> 
> At this point, the system doesn't recover from this low memory situation.
> 
> For each invocation of the executable, there are three "no more room in
> [...]  (./1.exe([...])" messages on the Mach console.
> 
> The executable is compiled from
> [gcc]/libstdc++-v3/testsuite/21_strings/basic_string/modifiers/insert/char/1.cc
> from commit a050099a416f013bda35832b878d9a57b0cbb231 (gcc-6-branch branch
> point; 2016-04-15), which doesn't look very spectacular -- apart from
> maybe the __gnu_test::set_memory_limits call, whose effect I'll try to
> figure out.

That function uses setrlimit for RLIMIT_DATA, RLIMIT_RSS, RLIMIT_VMEM, and
RLIMIT_AS, but the behavior does not change with that call removed.
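
For reference, here is a rough C sketch of what such a setrlimit-based
helper amounts to; this is not the actual libstdc++ testsuite code, and the
limit value passed in below is just an arbitrary placeholder:

    #include <stdio.h>
    #include <sys/resource.h>   /* setrlimit, struct rlimit, RLIMIT_*  */

    /* Hypothetical stand-in for __gnu_test::set_memory_limits: clamp the
       process's memory-related limits to LIMIT bytes.  RLIMIT_VMEM is a BSD
       alias for RLIMIT_AS and is not defined everywhere, hence the #ifdef.  */
    static void
    set_memory_limits (rlim_t limit)
    {
      struct rlimit r = { limit, limit };

      if (setrlimit (RLIMIT_DATA, &r) != 0)
        perror ("setrlimit (RLIMIT_DATA)");
      if (setrlimit (RLIMIT_RSS, &r) != 0)
        perror ("setrlimit (RLIMIT_RSS)");
    #ifdef RLIMIT_VMEM
      if (setrlimit (RLIMIT_VMEM, &r) != 0)
        perror ("setrlimit (RLIMIT_VMEM)");
    #endif
      if (setrlimit (RLIMIT_AS, &r) != 0)
        perror ("setrlimit (RLIMIT_AS)");
    }

    int
    main (void)
    {
      /* Arbitrary example value; the real testsuite helper computes its own.  */
      set_memory_limits (16 * 1024 * 1024);
      return 0;
    }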

> But nevertheless, unreclaimed swap space upon process
> termination sounds like a bug?
> 
> Unless this is a known issue, or somebody can quickly pinpoint the problem,
> I'll try to bisect core system packages, between the version of the
> "good" and "bad" disk images.

Running the executable under rpctrace, I see that on the old ("good")
system, the end of the process looks like this:

    [...]
    task52(pid2198)->vm_deallocate (16973824 16) = 0 
    task52(pid2198)->vm_allocate (0 -2147479552 1) = 0x3 ((os/kern) no space available)
    task52(pid2198)->vm_allocate (0 -2147348480 1) = 0x3 ((os/kern) no space available)
    task52(pid2198)->vm_map (0 2097152 0 1  (null) 0 1 0 7 1) = 0 21405696
    task52(pid2198)->vm_deallocate (21405696 614400) = 0 
    task52(pid2198)->vm_deallocate (23068672 434176) = 0 
    task52(pid2198)->vm_protect (22020096 135168 0 3) = 0 
    task52(pid2198)->vm_allocate (0 -2147479552 1) = 0x3 ((os/kern) no space available)
      61<--68(pid2198)->proc_mark_exit_request (0 0) = 0 
    task52(pid2198)->task_terminate () = 0 
    Child 2198 exited with 0

..., but on the new ("bad") system, the first non-sensical (huge;
-2147479552 is 0x80001000) vm_allocate call actually succeeds:

    [...]
    task154(pid1080)->vm_deallocate (16973824 16) = 0 
    task154(pid1080)->vm_allocate (0 -2147479552 1) = 0 268742656
    task154(pid1080)->vm_allocate (0 -2147479552 1) = 0x3 ((os/kern) no space available)
    task154(pid1080)->vm_allocate (0 -2147348480 1) = 0x3 ((os/kern) no space available)
    task154(pid1080)->vm_map (0 2097152 0 1  (null) 0 1 0 7 1) = 0 21655552
    task154(pid1080)->vm_deallocate (21655552 364544) = 0 
    task154(pid1080)->vm_deallocate (23068672 684032) = 0 
    task154(pid1080)->vm_protect (22020096 135168 0 3) = 0 
    task154(pid1080)->vm_allocate (0 -2147479552 1) = 0x3 ((os/kern) no space available)
    task154(pid1080)->vm_deallocate (268742656 -2147479552) = 0 
      163<--170(pid1080)->proc_mark_exit_request (0 0) = 0 
    task154(pid1080)->task_terminate () = 0 
    Child 1080 exited with 0

I have not yet figured out where these vm_allocate calls and/or their
huge size parameters are coming from.
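
For illustration, a minimal stand-alone probe along these lines (just a
sketch, not part of the test case) can issue the same kind of oversized
anywhere-allocation via the Mach vm_allocate RPC and report whether it fails
with KERN_NO_SPACE (the 0x3 seen above) or, as on the new system, succeeds:

    #include <mach.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* The oversized request seen in the trace: -2147479552, taken as an
         unsigned 32-bit size, is 0x80001000 bytes (2 GiB + 4 KiB).  */
      vm_size_t size = (vm_size_t) 0x80001000UL;
      vm_address_t addr = 0;

      /* anywhere = TRUE, as in the traced calls.  On the old system such a
         request fails with KERN_NO_SPACE (3).  */
      kern_return_t kr = vm_allocate (mach_task_self (), &addr, size, 1);
      printf ("vm_allocate of 0x%lx bytes = %d%s\n",
              (unsigned long) size, kr,
              kr == KERN_SUCCESS ? " (succeeded)" : " (failed)");

      if (kr == KERN_SUCCESS)
        vm_deallocate (mach_task_self (), addr, size);

      return 0;
    }

(On GNU/Hurd the Mach user-space RPC stubs are provided by glibc, so a plain
gcc invocation should be enough to build this; whether such a probe by itself
reproduces the unreclaimed swap is a separate question.)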


Regards
 Thomas


