libtool

RE: Fix for Arg list too long


From: Boehne, Robert
Subject: RE: Fix for Arg list too long
Date: Thu, 1 Feb 2001 17:00:08 -0500

 
DUH, I can just check each command to see if it is too long. <*blush*>
I have a prototype using this method; we'll see how it works on
the systems I have here.  I do have one question, though:
if 'expr' is used to check whether the argument list is too long, do we
have to worry about 'expr' being a shell built-in?
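
A rough sketch of that per-command check (hypothetical; $the_command
stands for the fully expanded link or archive command, and if expr is
a shell built-in there is no exec, so this probe would never fail in
the first place):

    # exec expr over the whole command string; if the kernel refuses
    # the exec with "Arg list too long", expr's exit status is non-zero
    if expr "X$the_command" : ".*" > /dev/null 2>&1; then
      : # the command line fits, run it as-is
    else
      : # too long: split the command into smaller pieces
    fi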
 
I tested this on OSF/1 4f, IRIX 6.5, HP 10.20 & 11.0, Linux RedHat 6.2,
Solaris/SunOS 5.6 & 5.7, and AIX 4.3.3, and it worked on all of them.
 
I've attached a simple Bourne shell script that prints out the maximum
command line length (approximately) using the method discussed here.
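
Roughly, the method boils down to something like this sketch (not the
attached script itself; it also assumes expr really is exec'ed rather
than built in, or the loop would never stop):

    # grow a test string until handing it to expr fails, then report
    # the last length that still worked as the approximate limit
    teststring=ABCD
    len=4
    max=0
    while expr "X$teststring" : ".*" > /dev/null 2>&1; do
      max=$len
      teststring="$teststring$teststring"
      len=`expr $len \* 2`
    done
    echo "maximum command line length is approximately $max"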
 
Robert
-----Original Message-----
From: Boehne, Robert
Sent: Wednesday, January 31, 2001 5:45 PM
To: 'Alexandre Oliva'
Cc: 'address@hidden'
Subject: RE: Fix for Arg list too long



-----Original Message-----
From: Alexandre Oliva [mailto:address@hidden]
Sent: Wednesday, January 31, 2001 1:27 PM
To: Boehne, Robert
Cc: 'address@hidden'
Subject: Re: Fix for Arg list too long


On Jan 31, 2001, "Boehne, Robert" <address@hidden> wrote:

> I have not had this problem with 'wc -c' but I did have it with `expr $cmd :
> ".*"`

You'd also have the problem with `$echo $cmd | wc -c' if echo is not a
shell built-in, so you must take this possibility into account.
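
Purely to show the two forms side by side (neither line is from the
actual patch; $echo and $cmd just stand for libtool's echo and the
assembled command):

    # both of these exec an external program when expr/echo is not a
    # shell built-in, so the overlong $cmd kills the measurement in
    # exactly the same way it kills the compiler or linker
    len=`expr "X$cmd" : ".*"`
    len=`$echo $cmd | wc -c`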

> SGI was one system that would not allow me to use $reload_cmds because
> $LD was set to CC.

This is wrong.  LD should be ld, not cc.  That was a kludge we'd used
in the beginning of ltcf-cxx.sh to reduce the divergence from
ltcf-c.sh, but it no longer makes sense.  We should go ahead and use
CC wherever we use LD, and keep LD with a sane value, which is the
name of the linker, not of the compiler.

I definitely agree; I was surprised to see that $reload_cmds did not
work unless it was hacked to use "ld".  I also noticed that the
expression at the top of ltcf-cxx.sh that sets $LD to MakeC++SharedLib
under AIX has no effect (I guess I should fix that...).

> I'm not sure I understand this, you're saying that some archivers can't
> add several object files to an existing archive properly?

They can.  The problem is that -r is not reversible: as soon as you
merge multiple object files into a single one, the linker will pull
them all from the archive, instead of pulling only the ones that are
actually needed.  That's why I'd rather have object files added
incrementally to static libraries, many at a time, but not by means of
reloading.

For shared libraries this doesn't matter, since a shared library is
always linked as a whole.
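
To make the difference concrete (hypothetical file names, standard
ld/ar syntax):

    # merged with -r: the archive holds one big member, so a program
    # that needs only a.o still drags in b.o and c.o with it
    ld -r -o foo-merged.o a.o b.o c.o
    ar cru libfoo-merged.a foo-merged.o

    # added individually: the linker extracts only the members that
    # resolve symbols it actually needs
    ar cru libfoo.a a.o b.o c.o
    ranlib libfoo.a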

> #1) $old_archive_cmds will allow users to add objects to an existing
>     archive.  I'm pretty sure this isn't true, but is there a way
>     to add objects to an archive for most supported archivers?

This is true in general.  There will probably be exceptions (Windows,
for one), in which case we won't be able to do incremental adding of
objects, and reloading will be needed.  We'd set some new variable to
tell libtool whether incremental archiving is possible or not.

For archive libraries I'm not creating reloadable object files;
I'm just executing $old_archive_cmds in several steps, so pulling
individual object files back out won't be a problem.
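
In other words, something along these lines (an illustration with
plain ar/ranlib and made-up batch variables, not the actual
$old_archive_cmds expansion):

    # each step hands the archiver only as many objects as fit on one
    # command line; the archive members stay individual objects
    ar cru libfoo.a $batch1
    ar ru libfoo.a $batch2
    ar ru libfoo.a $batch3
    ranlib libfoo.a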

After some checking, I find that `expr "X$arg" : ".*"` fails
when the command line is too long, much like a compiler does.
So if we test for expr failing, that will tell us whether the command
line is too long.  The only problem is that with this method
we can't calculate beforehand how many steps we need to link in;
we would have to break the object list in half, try that, then
break that in half if it fails, and so on.  How can we do this
without recursive functions?
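
For what it's worth, one non-recursive possibility is a plain loop
that keeps halving the batch size until expr stops failing, then
archives that batch and moves on.  A hypothetical sketch, not what
libtool actually does; it assumes echo is a built-in and uses plain
ar/ranlib with made-up variable names:

    objs="$all_objects"
    nobjs=`echo $objs | wc -w`
    while test $nobjs -gt 0; do
      n=$nobjs
      while :; do
        batch=`echo $objs | cut -d' ' -f1-$n`
        # expr is exec'ed over the candidate batch; a non-zero status
        # here means the kernel rejected the command line as too long
        if expr "X$batch" : ".*" > /dev/null 2>&1; then
          break
        fi
        n=`expr $n / 2`
        test $n -gt 0 || { echo "a single object name is too long" >&2; exit 1; }
      done
      ar cru libfoo.a $batch
      start=`expr $n + 1`
      objs=`echo $objs | cut -d' ' -f$start-`
      nobjs=`expr $nobjs - $n`
    done
    ranlib libfoo.a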


Attachment: arglencheck
Description: Binary data

