bug-bash

Re: Resource limitation causing erratic behaviour?


From: William L. Maltby
Subject: Re: Resource limitation causing erratic behaviour?
Date: Fri, 8 Mar 2002 21:32:01 -0500 (EST)

Paul,

Appreciate the fast response. I tore out my few remaining hairs trying to
make progress on this. I'm already familiar with most things you mentioned.
My constraint was in attempting to follow the LFS book as closely as
possible. This required "cascading" back to the initiating shell,
BatchRun01, in each chapter's activities because of the need to exit
the chroot environment.

---------------------------------------------------------------------
Summary: Do you think it's correct that I was exhausting resources?
Would the problem disappear on 256M/512M machines? One of my beta
folks is testing on such a machine (although with the "worked-around"
version).

I first did a _quick_ perusal of the FAQ to see if there were any
guidelines that I might make use of. Didn't see any jump out at me there.
Do you have any "practical" ones that might warn me when I'm getting
too aggressive with my style for a particular machine config?

Is it some heretofore untested or undocumented practical limitation
affected by available mem/swap/environment (a la usize)?
----------------------------------------------------------------------
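For what it's worth, the only practical check I know of so far is the
shell's own ulimit builtin (just a sketch, nothing specific to my scripts):

```shell
#!/bin/sh
# Show the per-process resource limits the current shell inherited.
# Nothing here is specific to my scripts -- just the shell builtin.
ulimit -a

# The data segment limit is one practical ceiling a deep stack of
# sourced scripts could run into.
datalim=$(ulimit -d)
echo "data seg limit: $datalim"
```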

I did try the exec on the two /bin/bash --login -c operations at the
end of shells #2 and #3. It had some unexpected effects (well, I didn't
think them through carefully - day 3 syndrome had me in its grip by
then). Now that I have "worked around" whatever was causing it, I have
more time and sanity, and they may prove useful yet.

On Fri, 8 Mar 2002, Paul Jarc wrote:

> "William L. Maltby" <address@hidden> wrote:
> > Have a controlling script that near its end does
> > chroot `pwd` ... /bin/bash -c "another shell" that near its end does a
> > /bin/bash --login -c "another shell" that near its end does another
> > /bin/bash --login -c "another shell".
> ...
> > Fix: In all but one case, change ". $WLD/source-shell" to
> > "$WLD/source-shell".
> 
> I assume you're using "." to avoid the cost of fork+exec.  If you use

Yes, exactly. Speed and efficiency are two of my great pleasures.

> "exec" instead of ".", you can still avoid the fork cost, and also
> avoid the accumulating memory consumption.  (With ".", each script is
> held in memory while the script it sources runs.  "exec" will discard
> the old script.  Of course, this is usable only if the "." command is
> immediately followed by (implicit or explicit) "exit".)  In general,

Exactly the situation in shells #2 and #3. So I tried there, but as I
said, I didn't give the side effects the thought they deserved.
Since I had discovered that, apparently, I was just exhausting resources,
and had tested and found the workaround of converting from sourced to
invoked execution (it served two purposes: a workaround, and it soothed
my ego :) ), I decided to put further testing aside and pursue a truly
(but newbie) netizen activity and ask for help while I had hair left.
My 1st :)
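To make sure I understand the "." vs "exec" distinction, here's a tiny
sketch I put together (child.sh is made up, not one of my real scripts):

```shell
#!/bin/sh
# Tiny sketch of "." versus "exec"; child.sh is made up, not one of
# my real scripts.
tmp=$(mktemp -d)
printf 'echo "child ran"\n' >"$tmp/child.sh"

# Sourcing runs child.sh in the current shell, and the caller's
# script text stays in memory until child.sh returns.
src_out=$(. "$tmp/child.sh")

# exec replaces the (sub)shell image with the new command, so the
# old script is discarded -- usable only as the final command.
exec_out=$(exec sh "$tmp/child.sh")

echo "$src_out"
echo "$exec_out"
rm -rf "$tmp"
```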

> you can save memory by "exec"ing the last command in a script, even if
> it's not another script, instead of forking for it.
> 
> Another option, if you script A sourcing script B sourcing script C,
> etc., is to use iteration instead of recursion:

Recursion? You mean like in C? I hope I'm _not_ doing that. The
"cascading" was purely by either sourcing or invocation, and each invoked
unit had _only_ structural similarities; all substantive details were
different: different input files, different unique shells handled, etc.
Only the for loops and the startup/loop/terminate structure were similar.

Umm... Something like this _appeared_ to be contributing to the problem at
one lower level. Shells 1 - 4 basically do a little sourcing of
common routines; each does a couple of "exceptional" shells and then enters
a for loop along the lines of

    for N in `sed -e '/^#/d' /tmp/xx02` ; do
        # administrative stuff

        # Had to change bzcat to redirected stdin as a workaround.
        bzcat <../sources/x/thepackage_name | tar -xvf -
        cd thepackage_name

        # Had to change the following to non-sourced also.
        . "$WLD/$pkg_name_base" >"$WLD/$pkg_name_base.log" 2>&1
        # administrative overhead
    done

And then whatever the tail was.

Now, the second shell processed only glibc in the for loop. And that one
drove me crazy. Only one iteration in the for loop, only the second
level down shell, and it would bomb in such a way that I began to suspect
hardware - until I came to my senses and thought: 64MB, the biggest package
I'm working with, and other similar behaviour on slightly smaller packages???
 
In that one shell, I replaced the loop with inline non-sourced commands,
and the glibc configure, make, install began operating reliably. So I got
chicken about trying anything else that smacked of a loop.

But the below is recursion, isn't it? Doesn't this also start making
deep stacks, huge heaps, etc., unless I take great care to unset
no-longer-needed variables? Ummm... got to think about that in light of
what we're actually doing here. Any nasty side effects on a) the chroot
done by shell #1 to start shell #2 via the chroot ... -c "..." and b) the
/bin/bash --login -c "..." at the end of shells #2 and #3? I realize this
is getting rid of the /bin/bash at the end of shells #2 and #3, but that,
unfortunately, defeats the purpose. You see, we have just compiled a new
bash, glibc, etc., and the LFS book wants to continue working with the new
shell. Always a fly in the ointment, dont'cha know.

> set - initial-script ...
> while [ "$#" != 0 ]; do
>   script="$1"
>   shift
>   . "$script"
> done
> Then each script can indicate that others should be sourced by
> 'set - "$@" other-script' or 'set - other-script "$@"', depending on
> whether you want to proceed depth-first or breadth-first.
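Just so I'm sure I follow the mechanics, here it is as a tiny
self-contained run (a.sh, b.sh, and the "order" variable are invented
purely for illustration):

```shell
#!/bin/sh
# Self-contained demo of iterating instead of recursing through
# sourced scripts.  a.sh, b.sh, and "order" are invented stand-ins.
tmp=$(mktemp -d)

# a.sh queues b.sh by appending it to the positional parameters;
# $tmp expands when a.sh is sourced, since it shares this shell.
cat >"$tmp/a.sh" <<'EOF'
order="$order a"
set -- "$@" "$tmp/b.sh"
EOF

cat >"$tmp/b.sh" <<'EOF'
order="$order b"
EOF

order=""
set -- "$tmp/a.sh"
while [ "$#" != 0 ]; do
  script="$1"
  shift
  . "$script"
done

echo "ran:$order"
rm -rf "$tmp"
```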

I would have loved to do that one - tight, concise, efficient.
Hmm... maybe I'm unwittingly contributing here too. At the start
of each shell, the first thing done is to load the _same_ common routines.
I execute a couple of those functions, the same ones each time. Each reads
a .cfg file that loads approximately 50 variables, but they have short
names (all 5 or fewer characters) and all but one have fewer than 12
characters in their values. I can see where these stack up - something I
was aware of, but due to their size, I had considered them insignificant.

Or does sourcing discard some kind of temporary environment also? Maybe
there's a spot here for this. I'll have to think, depending on your
answer.

Do you think I misjudged their impact?
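One way I might try to gauge it (vars.cfg here is generated on the fly to
mimic my real config files, which aren't shown):

```shell
#!/bin/sh
# Rough gauge of how much a sourced .cfg file adds to the shell's
# variable space.  vars.cfg is generated to mimic the real config
# files (not shown): ~50 five-char names with short values.
tmp=$(mktemp -d)
awk 'BEGIN { for (i = 1; i <= 50; i++) printf "VAR%02d=value%02d\n", i, i }' \
    >"$tmp/vars.cfg"

# "set" lists all shell variables, so its byte count is a crude
# before/after measure of the footprint.
before=$(( $(set | wc -c) ))
. "$tmp/vars.cfg"
after=$(( $(set | wc -c) ))

echo "approx bytes added: $((after - before))"
rm -rf "$tmp"
```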

> 
> 
> paul
> 

Bill Maltby
address@hidden,com




