bug-bash

Re: Problem with reading file and executing other stuffs?


From: Hugh Sasse
Subject: Re: Problem with reading file and executing other stuffs?
Date: Thu, 8 Nov 2007 17:38:58 +0000 (WET)

On Thu, 8 Nov 2007, Horinius wrote:

> Hugh Sasse wrote:
> > Again, what problem are you trying to solve, if any?  
> > 
> I'm doing some processing on a big file which is well formatted.  It's sort
> of a database table (or a CSV file if you like).  Every line contains a

OK, if it is in fields, like /etc/passwd, then awk is probably more
suited to this problem than reading it directly with a shell script.

If it has some delimited keyword, but each line has a variable structure,
then you'd be better off using sed.

Both of these operate linewise on their input, and can use regular
expressions and actions in braces to produce some textual output.
You can pass that output to `xargs -n 1` or something similar.
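
For instance, a rough, untested sketch of that shape; the ':' separator,
the "DELETE" marker in field 3, and the echo are all placeholders for
whatever your real data and command are:

    # print field 1 of every line whose third field is "DELETE",
    # then hand each value, one at a time, to a command via xargs
    awk -F: '$3 == "DELETE" { print $1 }' bigfile | xargs -n 1 echo would-remove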

> unique element that determines what should be done.  Of course, I could run a
> grep on the file to find out the elements, but this would give a complexity
> of O(n^2).

Not sure how you get the O(n^2) from that unless you don't know what
the unique elements are, but I still make that "one pass to read them
all, one pass to execute them" [with apologies to Tolkien :-)]
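
Roughly like this (an untested sketch, assuming the key is the first
whitespace-separated field of a file I'm calling bigfile):

    # pass 1: collect the unique keys
    awk '{ print $1 }' bigfile | sort -u > keys
    # pass 2: act on each key once; echo stands in for the real command
    while IFS= read -r key; do
        echo "processing key: $key"
    done < keys
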
> 
> I know that every line is processed only once, and the pointer to the
> current line will never go back.  So I figure out that I could read every
> line in an array element and I could process line by line.  This would give
> a O(n) which is much faster.

Yes, agreed.  Throw us a few example lines, fictionalised, and then we may
be able to give you an example of a simpler approach.
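
In the meantime, the single-pass shape would look something like this
(untested; I'm assuming colon-delimited lines whose first field is the
unique element, and the "foo"/"bar" cases are invented):

    # one pass over the file, dispatching on the key in field 1
    while IFS=: read -r key rest; do
        case $key in
            foo) echo "handle foo: $rest" ;;
            bar) echo "handle bar: $rest" ;;
            *)   echo "unknown key: $key" >&2 ;;
        esac
    done < bigfile

That reads the file once and never goes back, which matches the O(n)
you're after.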

        Hugh



