
Re: [PATCH] Add-rm-rf-as-examples-loadables-rm.c


From: Chet Ramey
Subject: Re: [PATCH] Add-rm-rf-as-examples-loadables-rm.c
Date: Tue, 1 Nov 2016 12:42:09 -0400
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 11/1/16 11:41 AM, Tim Ruehsen wrote:

>>>   if ((dir = opendir(dirname)))
>>>     {
>>>       while ((dp = readdir(dir)))
>>
>> Ouch.  This version has O(n^2) complexity when dealing with deep
>> hierarchies.  GNU Coreutils intentionally prefers using
>> openat()/fdopendir() and so on in order to recurse through the tree with
>> O(n) complexity instead of O(n^2).
> 
> As Chet stated, the purpose was to speed up ./configure runs. For that, I
> can't see very deeply nested directories with millions of files in them.

The bottom line is to identify the problem you're trying to solve, figure
out the requirements and constraints of that problem, and solve it
efficiently.  In this case, the problem is speeding up configure, the
requirements are only those configure imposes, and the constraints are the
expected sets of files and directories to remove.  If the problem you're
solving doesn't include directory trees with millions of files in deeply
nested directories, don't waste time optimizing the case you're not going
to encounter.
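
For reference, the fd-relative approach Eric describes looks roughly like
the sketch below -- plain POSIX openat()/fdopendir()/unlinkat(), with an
illustrative function name, not the coreutils code, and it spends one open
file descriptor per level of recursion:

#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>

/* Remove everything inside the directory open on DFD.  Takes ownership of
   DFD (closedir() closes it).  Returns 0 on success, 1 on any error. */
static int
remove_tree_at (int dfd)
{
  DIR *dir;
  struct dirent *dp;
  int subfd, err = 0;

  if ((dir = fdopendir (dfd)) == 0)
    {
      close (dfd);
      return 1;
    }

  while ((dp = readdir (dir)))
    {
      if (*dp->d_name == '.' &&
          (dp->d_name[1] == 0 || (dp->d_name[1] == '.' && dp->d_name[2] == 0)))
        continue;

      /* Everything is relative to the enclosing directory's fd, so the
         full pathname never has to be rebuilt, however deep the tree is. */
      if (unlinkat (dirfd (dir), dp->d_name, 0) == 0)
        continue;

      /* Couldn't unlink it as a file; descend into it as a directory. */
      subfd = openat (dirfd (dir), dp->d_name, O_RDONLY | O_DIRECTORY);
      if (subfd < 0
          || remove_tree_at (subfd)
          || unlinkat (dirfd (dir), dp->d_name, AT_REMOVEDIR) < 0)
        err = 1;
    }

  closedir (dir);
  return err;
}

Whether that extra machinery pays off for the handful of files configure
leaves behind is, as noted above, a different question.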

> 
>>>         {
>>>           if (*dp->d_name == '.' && (dp->d_name[1] == 0 ||
>>>               (dp->d_name[1] == '.' && dp->d_name[2] == 0)))
>>>             continue;
>>>
>>>           fnsize = dirlen + 1 + strlen(dp->d_name) + 1;
>>>           fname = xmalloc (fnsize);
>>>           snprintf(fname, fnsize, "%s/%s", dirname, dp->d_name);
>>
>> And here is part of the HUGE speed penalty that you get in large
>> directories.  You are mallocing a LOT of memory and wasting a LOT of work,
>> since the majority of the time (while within a single directory) the
>> prefix is unchanged from the previous iteration; in a deep hierarchy,
>> doing everything directory-relative completely avoids the need to
>> construct ever-deeper names.
> 
> I agree. My implementation used stack space - which should be fine for real
> file names. If in doubt, a simple heap reallocation strategy is just a few
> lines. But the purpose argument applies here as well.

I agree: if you're not going to use this implementation in a situation
where you have large directories with large numbers of files, don't worry
about that case.  Solve that problem if it becomes a problem.

The allocate-dynamic-arrays-on-the-stack trick only works for gcc and
compilers that masquerade as gcc.  If this turns out to be important, we
can wrap the dynamic array allocation in #if __GNUC__ and go from there.
Given the constraints of this problem, it may not be a big deal.
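
For what it's worth, the "simple heap reallocation strategy" Tim mentions
really is only a few lines; a hedged sketch against the loop quoted above
(xrealloc is bash's, the slack amount is arbitrary):

  char *fname = 0;
  size_t fnalloc = 0, fnsize;

  while ((dp = readdir (dir)))
    {
      if (*dp->d_name == '.' &&
          (dp->d_name[1] == 0 || (dp->d_name[1] == '.' && dp->d_name[2] == 0)))
        continue;

      /* Reuse one buffer; grow it only when a longer name shows up. */
      fnsize = dirlen + 1 + strlen (dp->d_name) + 1;
      if (fnsize > fnalloc)
        {
          fnalloc = fnsize + 64;       /* a little slack to cut reallocations */
          fname = xrealloc (fname, fnalloc);
        }
      snprintf (fname, fnalloc, "%s/%s", dirname, dp->d_name);

      if (rm_file (fname) && force == 0)
        err = 1;
    }
  free (fname);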

> 
>>>           if (rm_file (fname) && force == 0)
>>>             err = 1;
>>>           free (fname);
>>>         }
>>
>> Another speed sink - your implementation removes names in readdir()
>> order; GNU Coreutils has proven benchmark numbers that unlink()ing files
>> that are sorted by inode number is measurably faster than unlink()ing in
>> readdir() order for directories with a large number of files.  Yes, it
>> is more complex to set up the tracking structures to sort things, but it
>> pays off in the long run.

The question is whether that long run can reasonably be expected to make a
difference with this issue and whether the effort and complexity are worth
it.  I suspect that it would take more execution time to set up the extra
data structure you need to do this than to simply remove the files in the
order readdir presents them.
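
For comparison, that setup is roughly the following (a sketch reusing dp/dir
from the quoted loop; names are illustrative): slurp the directory into an
array, qsort() it by d_ino, and only then unlink -- an extra allocation per
entry plus a sort, which is exactly the overhead in question.

/* needs <stdlib.h> for qsort() and <string.h> for strdup() */
struct dent { ino_t ino; char *name; };

static int
by_inode (const void *a, const void *b)
{
  const struct dent *x = a, *y = b;
  return (x->ino > y->ino) - (x->ino < y->ino);
}

  /* ... inside the per-directory loop ... */
  struct dent *ents = 0;
  size_t n = 0, nalloc = 0, i;

  while ((dp = readdir (dir)))
    {
      if (*dp->d_name == '.' &&
          (dp->d_name[1] == 0 || (dp->d_name[1] == '.' && dp->d_name[2] == 0)))
        continue;
      if (n == nalloc)
        ents = xrealloc (ents, (nalloc = nalloc ? 2 * nalloc : 64) * sizeof *ents);
      ents[n].ino = dp->d_ino;
      ents[n].name = strdup (dp->d_name);
      n++;
    }

  qsort (ents, n, sizeof *ents, by_inode);

  for (i = 0; i < n; i++)
    {
      /* remove ents[i].name here, as the existing loop body does */
      free (ents[i].name);
    }
  free (ents);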

>> Per POSIX, 'rm -R' and 'rm -r' are identical; a couple tweaks to this
>> patch will make support for 'rm -R' work out of the box.
> 
> Not needed for the purpose, but doesn't hurt.

Sure, that's easy enough.
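
Concretely, with the internal_getopt pattern the patch already uses, -R is
just another case label.  A sketch (assuming the patch's flags are named
`recursive' and `force'):

  reset_internal_getopt ();
  while ((opt = internal_getopt (list, "rRf")) != -1)
    {
      switch (opt)
        {
        case 'r':
        case 'R':               /* POSIX: -R and -r are identical */
          recursive = 1;
          break;
        case 'f':
          force = 1;
          break;
        default:
          builtin_usage ();
          return (EX_USAGE);
        }
    }
  list = loptend;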


>> You do NOT support the POSIX-mandated 'rm -i'.

Since configure scripts never call `rm -i', I don't see this as any kind
of problem.


> [*]
> Such a fallback on unknown options is a good thing. I simply didn't know how
> to do that in an acceptable manner (within a loadable bash builtin). This
> should be done for other loadables as well (e.g. 'cat'). It would serve its
> purpose as 'example' even better.
> 
>>
>>>     }
>>>
>>>   list = loptend;
>>>   if (list == 0)
>>>     return (EXECUTION_SUCCESS);
>>
>> This does not match the behavior of GNU coreutils.  'rm' without
>> arguments prints a usage error and exits with nonzero status.  But note
>> that POSIX requires 'rm -f' without arguments to silently exit with
>> success.  POSIX only specifies the behavior of 'rm -f' without
>> arguments; with 'rm', you can do what you want (therefore you comply
>> with POSIX), but it's still nice to match what other implementations do.

Again, easy enough to do.
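
Something along these lines would do it (a sketch reusing the patch's
`force' flag): plain `rm' with no operands prints usage and fails, while
`rm -f' with no operands stays silent and succeeds, as POSIX requires.

  list = loptend;
  if (list == 0)
    {
      if (force)
        return (EXECUTION_SUCCESS);   /* POSIX: rm -f, no operands */
      builtin_usage ();
      return (EX_USAGE);
    }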

I totally understand the ideological purity argument Eric is making: we
only want the best solution that addresses every possible case optimally.
I'm usually cool with that.  But in this case, it seems like a little more
pragmatism is called for.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
                 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    chet@case.edu    http://cnswww.cns.cwru.edu/~chet/
