
Re: fork costs...


From: Ralf Wildenhues
Subject: Re: fork costs...
Date: Sun, 22 Oct 2006 12:01:12 +0200
User-agent: Mutt/1.5.13 (2006-08-11)

* Kyle Sallee wrote on Sat, Oct 21, 2006 at 08:31:05PM CEST:
> On 10/21/06, Ralf Wildenhues <address@hidden> wrote:
> 
> >> > I understand that you are strongly against using forks
> >> > where the equivalent can be coded without it.
> >
> >Not so.  Forking in order to invoke faster tools or to reduce the
> >complexity order can speed up things significantly, as you also pointed
> >out.  We are however against any deliberate changes that do not provably
> >improve things.
> 
> That is where it is difficult.
> Three forks might be a few milliseconds slower than
> doing it in bash when the loop iteration count is tiny,
> but when the iteration count is vast,
> the three forks will be faster.

We have no problem at all with a small regression for small links if
it buys a big improvement for large links.  Complexity reductions are
worth far more than regressions with a small constant factor.

With "prove", I meant that for those changes to be acceptable, it helps
to show actual example links *where this is obvious* that things have
improved at all, and that this improvement is necessary.  Show timings.
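
To be concrete, even a rough comparison along these lines would go a
long way (a bash sketch with a made-up operation and made-up sizes,
not libtool code; printf is a builtin, so the second variant costs
roughly the three forks you mention):

  # Prefix every word of a list with -l, once in pure shell and once
  # through a single pipeline, at a small and a large list size.
  for n in 100 100000; do
    list=`seq 1 $n`
    echo "n=$n, pure shell loop:"
    time {
      out=
      for w in $list; do
        out="$out -l$w"
      done
    }
    echo "n=$n, one pipeline:"
    time {
      out=`printf '%s\n' $list | sed 's/^/-l/' | tr '\n' ' '`
    }
  done

Numbers from a run like that, ideally on a real link, are the kind of
evidence that makes a patch easy to accept.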

> >> > Here is my conjecture...
> >
> >We need code, not conjectures.  Expect to have it ripped apart or
> >rejected for portability issues, and expect to be asked for test
> >cases for any changes that do not obviously preserve semantics.  But
> >it really is not possible to judge optimization issues based on talk.
> 
> In this case an hour of discussion may be worth more than ten of coding.
> The code would look slow but be concise and execute fast.
> I would not want anyone, not even myself,
> to put forth the effort to write the code if there is no chance of
> having it accepted.

Here I think we have to agree to disagree.  You can write all you like,
and we can agree all we like.  Construing that as an OK for the next,
as yet unwritten, patch is, however, an illusion.  To me, "show me the
code" is a general theme in open and free software development.

That being said, I think I already wrote twice that provable
improvements are welcome.

> The speed gain from converting some of the complex shell code
> to piped coreutils commands is so obvious when working with
> large variables or lists that I can only assume there is strong
> opposition that has prevented it from being done already.

There may also be a couple of other reasons: first, developers simply
haven't seen the behavior you experience (because their links weren't
that long); second, they simply haven't gotten around to it yet.

For example, the long link time with many objects (libgcj) was more or
less the first time this issue had been reported to me.  I simply was
not aware that quadratic scaling in the number of objects was a
practical issue.

Likewise, until now I was not aware that quadratic scaling in the list
of deplibs was an issue of practical concern.  If it is, then let's do
something about it.
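
Just so we are talking about the same thing: by quadratic scaling I
mean roughly this pattern (a sketch with made-up variable names, not
the actual libtool code):

  deplibs=
  for lib in $all_libs; do
    case " $deplibs " in
      *" $lib "*) ;;                  # already present, skip it
      *) deplibs="$deplibs $lib" ;;   # append; the list keeps growing
    esac
  done

Every case match rescans the whole accumulated string, so the total
work grows roughly with the square of the number of deplibs.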

> For example, the shell code that identifies matching words in a string
> by using a case statement is clever, but slow compared to
> using sort | uniq -d when doing a significant number of comparisons.
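
For reference, I read that suggestion as roughly the following (a
sketch with made-up list names; it assumes whitespace-separated words
and that neither list contains internal duplicates):

  # Case-statement way: for every word of one list, scan the other.
  for word in $list1; do
    case " $list2 " in
      *" $word "*) echo "$word" ;;
    esac
  done

  # Pipeline way: a few forks, but no rescanning inside the shell.
  { printf '%s\n' $list1; printf '%s\n' $list2; } | sort | uniq -d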

You've realized, though, that it is not possible to reorder many of
the lists libtool uses?

> Obviously, the consensus until now is to keep using the shell code.

No, this is a misunderstanding.

Please, let's keep communication efficient by not repeating the same
questions over and over again.

Cheers,
Ralf



