bug-coreutils

memory usage in cp -a


From: KAMOSAWA Masao
Subject: memory usage in cp -a
Date: Thu, 27 Nov 2003 17:39:52 +0900

Hi,

I ran into a situation similar to the one Mr. Thomas Diesler described
in message 00876 on this list:

http://www.mail-archive.com/address@hidden/msg00876.html

I'm using a backup tool called "pdumpfs", which takes and maintains daily
snapshots of target directories by copying updated or newly
created files and making hard links for unchanged files.
(http://www.namazu.org/~satoru/pdumpfs/index.html.en)
I have enjoyed it and have happily taken snapshots of the whole tree of
my Linux web/mail/db server every day for the last 7 months.
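
(Roughly speaking, this is the classic copy-plus-hardlink snapshot idea.
This is not how pdumpfs itself is implemented, just a sketch of the idea,
with made-up paths and dates:

cp -al /backup/2003/11/26 /backup/2003/11/27    # hard-link yesterday's snapshot
rsync -a /home/ /backup/2003/11/27/home/        # replace only the changed files

so an unchanged file costs only a directory entry, not extra disk space.)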

Then the filesystem of my backup drive became corrupted.

That is, when I tried to delete some of the largest files after
backing them up to CD-R, the drive space was still not freed
even after I had removed all of the links to those files.
And fsck returned an error code and could not repair the
filesystem (ext3).

I decided to rescue "all" of the file tree, but the tools for this
purpose are very limited. tar could not handle hard links
correctly (as far as I could tell from the man pages). dump and restore
would only have dumped and restored the corrupted filesystem faithfully,
corruption included. The only tool I could use was "cp -a", because it
can handle hard links and it preserves all attributes.
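
(As far as I understand the man page, "cp -a" is just shorthand for

cp -dpR SOURCE... DEST

that is: never dereference links, preserve attributes, copy recursively.)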

The filesystem contains about 200 days' snapshots of my whole
system, including a mirrored, indexed "web cache", so thousands
(how could I get the exact number?) of files (mostly hard links)
and directory entries are in each day's directory.
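
(For the record, the counts could be taken with find; the date directory
below is just one example day's snapshot:

find /backup/2003/11/26 | wc -l                     # all entries in that day's dir
find /backup/2003/11/26 -type f -links +1 | wc -l   # files shared with other days

)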

My command was:

cp -a /backup/2003/* /backup2/2003

The box ground to a halt after about 3 hours. The kernel was still running
(virtual terminals were switchable) but I could not log in,
because a message kept repeating that an httpd process was being
killed. I judged that the kernel could no longer start new processes
and that the machine had run away (even Ctrl-Alt-Del had no effect).

After a hard reset and reboot, I found the copy had completed about 4 days'
worth of the backups. Then I reran the command and watched it with top(1).
It revealed that the process kept growing until it exceeded the memory
available on the system (386MB RAM + 128MB swap).
I added swap files a few times, and when the process reached about
3GB it printed the message "memory exhausted" and stopped.
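
(The swap files were added in the usual way, something like the following;
the size and path here are only an example:

dd if=/dev/zero of=/swapfile1 bs=1M count=512
mkswap /swapfile1
swapon /swapfile1

)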

I thought it could be a memory leak or a design flaw.
I searched Google and came across the message above.
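
(If I understand it right, this is probably not a leak but a design cost:
to recreate hard links in the destination, cp apparently has to remember
the source device and inode number of every already-copied file that has
more than one link, and with 200 days of hard-linked snapshots that table
becomes huge. A rough way to estimate how many entries it would need,
assuming GNU find:

find /backup/2003 -type f -links +1 -printf '%D %i\n' | sort -u | wc -l

)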

I think this is worth reporting, isn't it...?

KAMOSAWA Masao



