bug-bash

Re: need explanation ulimit -c for limiting core dumps


From: Mike Stroyan
Subject: Re: need explanation ulimit -c for limiting core dumps
Date: Fri, 20 Oct 2006 10:29:31 -0600

I'm trying to limit the size of core dumps using 'ulimit -c'.  Can someone
please explain why a core file gets generated from the coretest program
(source is below)?

Thanks for any help or suggestions.

Shell interaction
% ulimit -H -c
unlimited
% ulimit -S -c
0
% bash --version
GNU bash, version 2.05b.0(1)-release (i386-pc-linux-gnu)
Copyright (C) 2002 Free Software Foundation, Inc.
% ulimit -c 512
% ulimit -S -c
512
% ulimit -H -c
512
% ./coretest 2048
rlim_cur,rlim_max = 524288,524288
malloced 2097152 bytes my pid is 21255
Segmentation fault (core dumped)
% ls -l core
-rw-------  1 jacr swdvt 2265088 2006-10-19 14:24 core
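
[The coretest source is not included in this excerpt.  What follows is only a
rough reconstruction sketched from the output above; the argument handling,
variable names, and the way the crash is triggered are assumptions, not the
original program.]

/* Hypothetical reconstruction of coretest.c -- not the original source.
 * Based on its output, it prints the RLIMIT_CORE values from getrlimit,
 * mallocs <arg> KiB, prints its pid, then dereferences a bad pointer to
 * force a core dump. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    struct rlimit rl;
    size_t sz;
    char *c;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <kilobytes>\n", argv[0]);
        return 1;
    }
    sz = (size_t)atol(argv[1]) * 1024;    /* 2048 -> 2097152 bytes */

    getrlimit(RLIMIT_CORE, &rl);
    printf("rlim_cur,rlim_max = %lu,%lu\n",
           (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    c = malloc(sz);                       /* pages allocated, never touched */
    printf("malloced %lu bytes my pid is %d\n",
           (unsigned long)sz, (int)getpid());

    *(volatile int *)0 = 0;               /* force SIGSEGV -> core dump */
    free(c);
    return 0;
}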

Jason,

 This is clearly not a bash bug, since your own program shows that getrlimit
reports the correct RLIMIT_CORE setting of 512K.

 This is a kernel surprise.  The RLIMIT_CORE setting does not actually limit
the size of a core file as reported by "ls -l".  It limits the space the core
file occupies on disk, as reported by "du --si core".  Your coretest program
malloced a large buffer and never touched the pages it allocated, so the core
dump created a sparse file with holes at the pages that were never touched.
If you change "c=malloc(sz);" to "c=calloc(sz,1);" you will see a core file
that is not sparse at all; both ls and du will report it as 512K bytes.  For
non-zero limits, the effect of RLIMIT_CORE is to truncate large core files
rather than prevent a core dump from happening.
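
 To see the ls-versus-du distinction outside of core dumps, here is a small
illustrative sketch (not from the thread; the file name and sizes are
arbitrary) that creates a sparse file the same way the untouched malloc pages
do, then prints the apparent size (what "ls -l" shows, st_size) next to the
space actually allocated on disk (what "du" counts, st_blocks * 512):

/* Illustrative sketch, not part of the original thread: make a sparse
 * file and compare apparent size with allocated size. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(void)
{
    const char *name = "sparse.demo";     /* arbitrary file name */
    int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* Seek 2 MiB past the start and write one byte; the skipped range
     * becomes a hole, like the never-touched pages in the core file. */
    if (lseek(fd, 2 * 1024 * 1024, SEEK_SET) == (off_t)-1) {
        perror("lseek"); return 1;
    }
    if (write(fd, "x", 1) != 1) { perror("write"); return 1; }
    close(fd);

    struct stat st;
    if (stat(name, &st) != 0) { perror("stat"); return 1; }
    printf("apparent size (ls -l): %lld bytes\n", (long long)st.st_size);
    printf("allocated     (du)   : %lld bytes\n",
           (long long)st.st_blocks * 512);
    return 0;
}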

 Sparse core files can cause trouble for the unwary.  They may become
non-sparse when copied, which takes up more disk space.
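
 A sketch of why a copy can lose the holes (again not from the thread, just
an illustration): a naive byte-for-byte copy reads each hole back as zero
bytes and writes them out as real data, so the destination occupies the full
apparent size on disk.  A tool has to detect runs of zeros explicitly to keep
the output sparse.

/* Illustrative sketch: a naive copy that turns a sparse file non-sparse,
 * because every zero byte read from a hole is written as real data. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
    char buf[65536];
    ssize_t n;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    while ((n = read(in, buf, sizeof buf)) > 0)
        if (write(out, buf, (size_t)n) != n) { perror("write"); return 1; }

    close(in);
    close(out);
    return 0;
}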

--
Mike Stroyan
stroyan@gmail.com



