
[Octave-bug-tracker] [bug #63940] `format native-bit` gives wrong results


From: Arun Giridhar
Subject: [Octave-bug-tracker] [bug #63940] `format native-bit` gives wrong results
Date: Fri, 17 Mar 2023 20:19:13 -0400 (EDT)

URL:
  <https://savannah.gnu.org/bugs/?63940>

                 Summary: `format native-bit` gives wrong results
                   Group: GNU Octave
               Submitter: arungiridhar
               Submitted: Fri 17 Mar 2023 08:19:11 PM EDT
                Category: Octave Function
                Severity: 3 - Normal
                Priority: 5 - Normal
              Item Group: Incorrect Result
                  Status: None
             Assigned to: None
         Originator Name: 
        Originator Email: 
             Open/Closed: Open
                 Release: dev
         Discussion Lock: Any
        Operating System: Any
           Fixed Release: None
         Planned Release: None


    _______________________________________________________

Follow-up Comments:


-------------------------------------------------------
Date: Fri 17 Mar 2023 08:19:11 PM EDT By: Arun Giridhar <arungiridhar>
The documentation for `format` says:

native-bit
    Print the bit representation of numbers as stored in memory.
    For example, the value of ‘pi’ is

         01000000000010010010000111111011
         01010100010001000010110100011000

    (shown here in two 32 bit sections for typesetting purposes)
    when printed in native-bit format on a workstation which
    stores 8 byte real values in IEEE format with the least
    significant byte first.

Note: least significant **byte** first, not bit.
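
As a quick sanity check on that byte order, here is one way to dump the
bytes of `pi` in memory order (a sketch, assuming a little-endian machine
such as x86_64):

    sprintf ("%02X ", typecast (pi, "uint8"))  # memory order, LSByte first
    ## on x86_64: 18 2D 44 54 FB 21 09 40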

But running `format native-bit; pi` gives:

0001100010110100001000100010101011011111100001001001000000000010

which does not match the help text. In fact it is the exact
character-by-character reverse of what the help text says should be printed.

For comparison, `format bit` consistently prints the most significant **bit**
first, and it gives this:

0100000000001001001000011111101101010100010001000010110100011000

which is identical to the example in the help text for `native-bit`, and
therefore inconsistent with least-significant-byte-first order within a word.
(These results were obtained on x86_64, a little-endian architecture.)
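
The reversal is easy to confirm (a sketch): flipping the `format bit` string
character by character reproduces the `native-bit` output above exactly:

    fliplr ("0100000000001001001000011111101101010100010001000010110100011000")
    ## -> 0001100010110100001000100010101011011111100001001001000000000010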

I verified that the `format bit` output is correct for what it claims to
represent in MSB-first order: it represents

(1 + sum (2 .^ -find ("1001001000011111101101010100010001000010110100011000" == "1"))) * 2 ^ 1

in IEEE 754 format, and evaluates to `pi` as it should.
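
Here is a runnable version of that check (a sketch, decoding the sign,
exponent, and mantissa fields from the MSB-first string):

    bits = "0100000000001001001000011111101101010100010001000010110100011000";
    sgn  = (-1) ^ (bits(1) == "1");                      # 1 sign bit
    ex   = bin2dec (bits(2:12)) - 1023;                  # 11-bit biased exponent
    frac = 1 + sum (2 .^ -find (bits(13:end) == "1"));   # 52-bit mantissa, implicit 1
    sgn * frac * 2 ^ ex == pi                            # -> ans = 1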

So the documentation for `native-bit` looks wrong, in that its example output
for pi would appear only on big-endian machines like SPARC, never on
little-endian ones.

It looks like the code for `format native-bit` is also wrong: it should
reverse only the byte order, printing the least significant byte first and
the most significant byte last, while leaving the order of bits within each
byte unchanged. (Recall that the Unicode byte order mark FEFF shows up as
FFFE when the endianness is switched, not as the bit-reversed FF7F.)
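
Under that reading, the expected `native-bit` output for `pi` would keep the
bits of each byte MSB first while emitting the bytes in memory order. A
sketch of what that would look like, assuming a little-endian machine:

    bytes = typecast (pi, "uint8");   # memory order, LSByte first on x86_64
    printf ("%s\n", reshape (dec2bin (bytes, 8)', 1, []));
    ## expected: 0001100000101101010001000101010011111011001000010000100101000000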

What does Matlab do for `format bit; pi` and `format native-bit; pi`?

    _______________________________________________________

Reply to this item at:

  <https://savannah.gnu.org/bugs/?63940>




