Re: [minor] "precision" of $SECONDS

From: Linda Walsh
Subject: Re: [minor] "precision" of $SECONDS
Date: Thu, 25 Feb 2016 13:33:38 -0800
User-agent: Thunderbird

Stephane Chazelas wrote:
2016-02-25 03:03:41 -0800, Linda Walsh:

Stephane Chazelas wrote:
$ time bash -c 'while ((SECONDS < 1)); do :; done'
bash -c 'while ((SECONDS < 1)); do :; done'  0.39s user 0.00s system 99% cpu 0.387 total
Sorry, I took "cpu xxx total" to be the total CPU time. Silly me. (I do believe you; just the display format could be clearer.)


TIMEFORMAT='%2Rsec %2Uusr %2Ssys (%P%% cpu)'

That would be for bash. In any case, bash does already include
the elapsed time in its default time output like zsh.
but not as clearly, IMO... ;-)

But the problem here is not about the time keyword, but about the
$SECONDS variable.
   I realize that.
   With linux, one can read /proc/uptime to 100th's of a sec, or
use date to get more digits.  A middle of the road I used for
trace timing was something like:

function __age { declare ns=$(date +"%N"); declare -i ms=10#$ns/1000000
 printf "%4d.%03d\n" $SECONDS $ms
}
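On bash 5.0 and later there is a cleaner route than mixing $SECONDS with date: $EPOCHREALTIME expands to the wall-clock time with microsecond resolution, so both readings come from the same clock. A minimal sketch (assuming bash >= 5.0; the variable is simply empty on older versions):

```shell
#!/bin/bash
# Assumes bash >= 5.0, where EPOCHREALTIME expands to "seconds.microseconds".
start=$EPOCHREALTIME

: # ... work being timed goes here ...

end=$EPOCHREALTIME

# EPOCHREALTIME always carries exactly six fractional digits, so deleting
# the decimal point turns each reading into an integer microsecond count.
elapsed_us=$(( ${end/./} - ${start/./} ))
printf 'elapsed: %d.%06d s\n' $(( elapsed_us / 1000000 )) $(( elapsed_us % 1000000 ))
```

Since both values come from one clock, this avoids the half-second bias discussed below entirely.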

I'm not sure how that gives you the time since startup.
   The time since the bash script startup.
I was guessing that SECONDS records the integer value of "seconds"
at the start of the script, so later readings show the later
recorded seconds minus the original seconds -- or at least that
would match current behavior: the initial "seconds" field from the
gettime call (ignoring the nano- or centiseconds, depending on the call).

going from an integer value of the time at start.

Currently, if bash is started at 00:00:00.7,

   Well, ok, but time calls usually give back seconds since 1970,
with some (like gettimeofday) returning a second field for
microseconds, and more modern calls (like clock_gettime) returning
a second field for nanoseconds.
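At the shell level the same two flavors are available from date, which exposes both the whole-second field and the nanosecond field (assuming GNU coreutils date; %N is not POSIX):

```shell
#!/bin/sh
# Whole seconds since the 1970 epoch, like time(2):
secs=$(date +%s)

# Seconds plus nanoseconds, like a clock_gettime() reading
# (GNU date only; %N is a GNU extension):
full=$(date +%s.%N)

echo "$secs"
echo "$full"
```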

Theoretically, bash could never start when seconds=0, unless
it was started in 1970...  But I'm guessing you are using clock
time, whereas I was using the time from the start of the script.

I.e., at the start of the script, SECONDS gets the number of seconds
since 1970, and (if done at the same time) the date call for
nanosecs would give the number of nanoseconds past that whole second.

After 0.4 seconds (at 00:00:01.1), $SECONDS will be 1 (the "bug"
I'm raising here). "ms" will be 100, so you'll print 1.100
instead of 0.600. And with my suggested fix, you'd print 0.100.
At bash startup, I'll see 0 seconds and 700,000,000 nanosecs;
after .4 secs, I'll see 1 sec & 100,000,000 nanosecs.

So the diff would be 1.1 - 0.7 = .4 secs = the correct answer, no?
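That subtraction checks out with the fixed numbers from the example, working in integer milliseconds (a sketch; the variable names are mine):

```shell
#!/bin/bash
# Fixed numbers from the example above: the script starts 0.7s past a
# whole second, and is read again 0.4s later, at 1.1s.
start_s=0; start_ns=700000000   # SECONDS=0, date +%N -> 700000000
later_s=1; later_ns=100000000   # SECONDS=1, date +%N -> 100000000

# Work in integer milliseconds to avoid floating point entirely.
# (10# forces base 10, since %N output can have leading zeros.)
start_ms=$(( start_s * 1000 + 10#$start_ns / 1000000 ))
later_ms=$(( later_s * 1000 + 10#$later_ns / 1000000 ))
elapsed_ms=$(( later_ms - start_ms ))

printf 'elapsed: %d.%03d s\n' $(( elapsed_ms / 1000 )) $(( elapsed_ms % 1000 ))
# -> elapsed: 0.400 s
```

Note this only works because both readings are taken from the same clock; the bias Stephane describes comes from $SECONDS and the date call not sharing a time base.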

Note that all of zsh, ksh93 and mksh have builtin support to get
elapsed time information with subsecond granularity.
That's not a POSIX requirement, and bash is hardly an ideal tool
for anything that needs to rely on sub-second granularity,
especially since it doesn't process signals in real time, but only
upon a keypress in readline.
