From: Dan Sebald
Subject: [Octave-bug-tracker] [bug #36133] num2str displays more than 16 significant digits for large integer inputs
Date: Fri, 13 Jul 2018 03:05:21 -0400 (EDT)
User-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0

Follow-up Comment #13, bug #36133 (project octave):

I noticed this *known bug* in the buildbot results.  It doesn't seem to be
noted in any of the discussion posts, so I thought I'd mention that the Octave
result for Rik's example:


>> num2str ([2.1, 1e23, pi])
ans = 2.1  9.999999999999999e+22      3.141592653589793


is the same as what A.J. has shown for Matlab.  That happens when Octave is
attempting to find a reasonable display width and the number magnitudes span a
very large range.

Otherwise, Octave does something different when the range of magnitudes is narrow, e.g.:


>> num2str ([1e22 1e23 1e24])
ans = 10000000000000000000000    99999999999999991611392    999999999999999983222784


So it would seem the same underlying exponent display mechanism is already
present, but not necessarily active by default.  And it isn't quite the same
as displaying the values:


>> [2.1, 1e23, pi]
ans =

   2.100000000000000e+00   9.999999999999999e+22   3.141592653589793e+00



However, this probably doesn't address the "shim" issue that Rik mentioned in
Comment #8.  I don't know what one can consider valid or invalid.  Using this
rule from Wikipedia:

"
If a decimal string with at most 15 significant digits is converted to IEEE
754 double-precision representation, and then converted back to a decimal
string with the same number of digits, the final result should match the
original string. If an IEEE 754 double-precision number is converted to a
decimal string with at least 17 significant digits, and then converted back to
double-precision representation, the final result must match the original
number.[1]
"

The Matlab rounding seems to stay within that rule.  (As does the Octave
non-rounding.)  It would seem that the digits beyond the resolution of a double
are extraneous, and what those digits turn out to be is perhaps compiler
dependent, CPU dependent, or library dependent.  Rounding away everything from
the 18th significant digit onward might be a way to make all platforms produce
consistent output for num2str().
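
For what it's worth, here is a rough sketch of what that kind of
post-processing could look like.  trim_sig_digits is just a made-up name for
illustration, not something num2str does today:


function s = trim_sig_digits (s, n)
  ## Force every digit after the n-th significant digit to '0' so the
  ## extraneous trailing digits cannot differ between platforms.
  ## (Sketch only: assumes no leading zeros and no exponent field.)
  if (nargin < 2)
    n = 17;
  endif
  idx = find (isdigit (s));      # positions of the digit characters
  if (numel (idx) > n)
    s(idx(n+1:end)) = "0";
  endif
endfunction

>> trim_sig_digits (num2str (1e23))   # with the unrounded output shown above
ans = 99999999999999991000000
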

I don't see it as too big of an issue.  However, if Octave is left as is, then
this can't be the test:


ASSERT errors for:  assert (num2str (1e23),"100000000000000000000000")
  Location  |  Observed  |  Expected  |  Reason
     []      99999999999999991611392 100000000000000000000000   Strings don't match


because the result will never be 100000000000000000000000.  And changing it
to


assert (num2str (1e23),"99999999999999991611392")


would likely fail on some systems.  Any test should really only check the
first 15-17 significant digits, whether in the test itself (e.g., by
extracting the first 17 characters of the result) or by having num2str
internally force the remaining extraneous digits to something known (e.g., 0).
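
For example, a test along these lines might work (only a sketch; the second
one assumes the current unrounded Octave output shown above):


%!test
%! ## Round-trip check: the printed string must convert back to the
%! ## identical double, which only requires >= 17 correct significant digits.
%! assert (str2double (num2str (1e23)), 1e23)
%!test
%! ## Or compare just the first 17 characters and ignore the extraneous
%! ## trailing digits (assumes num2str stays unrounded, as it is now).
%! s = num2str (1e23);
%! assert (s(1:17), "99999999999999991")
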

One last note: Wikipedia mentions that with some C/C++ compilers there is a
possible double-rounding issue on 32-bit x86 systems.

    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/bugs/?36133>
