From: Noilson Caio
Subject: Re: Bash monopolizing or eating the RAM MEMORY
Date: Mon, 20 Mar 2017 15:54:37 -0300
On Mon, Mar 20, 2017 at 12:17:39PM -0300, Noilson Caio wrote:
> 1 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - ( 5 levels ) No
> problems

10 to the 5th power (100,000) strings generated. Sloppy, but viable on
today's computers. You're relying on your operating system to allow an
extraordinarily large set of arguments to processes. I'm guessing Linux.

> 2 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - ( 6 levels )
> We have a problem - "Argument list too long".

You have two problems. The first is that you are generating 10^6
(1 million) strings in memory, all at once. The second is that you are
attempting to pass all of these strings as arguments to a single mkdir
process. Apparently even your system won't permit that.

> 3 - Using mkdir -p {0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/{0..9}/ - ( 7
> levels ) - Oops, we no longer have "Argument list too long"; now we have
> "Cannot allocate memory".

That's 10 million strings, all at once. Each one is ~15 bytes (counting
the NUL and slashes), so you're looking at something like 150 megabytes.
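For reference, the ceiling the 6-level case ran into can be inspected directly. A minimal sketch, assuming a POSIX system where getconf is available (the 12-byte figure below is my back-of-the-envelope estimate, not from the original message):

```shell
# Query the kernel's combined limit on argv + environment size.
arg_max=$(getconf ARG_MAX)
echo "ARG_MAX is $arg_max bytes"

# A 6-level path such as "0/1/2/3/4/5" is 11 characters plus a NUL,
# so 10^6 of them need roughly 12 MB of argument space -- well beyond
# the 2 MiB that getconf ARG_MAX commonly reports on Linux.
echo "approximate argv bytes for 10^6 six-level paths: $((1000000 * 12))"
```

On Linux the effective limit also depends on the stack rlimit, so the exact number varies from system to system; the order of magnitude is what matters here.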
This is not a bash bug. It's a problem with your approach. You wouldn't
call it a bug in C, if you wrote a C program that tried to allocate 150
megabytes of variables and got an "out of memory" as a result. The same
applies to any other programming language.
What you need to do is actually think about how big a chunk of memory
(and argument list) you can handle in a single call to mkdir -p, and
just do that many at once. Call mkdir multiple times, in order to get
the full task done. Don't assume bash will handle that for you.
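The batching advice above can be sketched with a loop that moves the first path component out of the brace expansion, so each mkdir call sees only a fraction of the strings. This is a hypothetical illustration scaled down to 4 levels so it runs quickly; the same structure extends to 7 levels by deepening the inner expansion or nesting more loops:

```shell
# Work in a scratch directory (mktemp -d keeps the sketch self-contained).
tmp=$(mktemp -d)
cd "$tmp"

for i in {0..9}; do
    # Each call passes only 10^3 arguments instead of expanding the
    # whole tree at once, staying far below ARG_MAX.
    mkdir -p "$i"/{0..9}/{0..9}/{0..9}
done

echo "leaf directories: $(find . -mindepth 4 -maxdepth 4 -type d | wc -l)"
```

At full 7-level scale you would trade a single enormous expansion for 100 calls of 10^5 arguments each (the size the 5-level case already proved workable), at the cost of a few extra process spawns.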