Floating point binary representation on macOS

I wrote a small program to see how my computer stores floating point numbers (this is on macOS Big Sur), and the results looked like this:

Float value = 2.000000
Memory representation: 00000000 00000000 00000000 01000000

Float value = 4.000000
Memory representation: 00000000 00000000 10000000 01000000

Float value = 8.000000
Memory representation: 00000000 00000000 00000000 01000001

Float value = 16.000000
Memory representation: 00000000 00000000 10000000 01000001

Float value = 32.000000
Memory representation: 00000000 00000000 00000000 01000010

I believe I am printing out the bits correctly, so my question is: how does macOS store each part of the floating point number? That is, how should each of these bit strings be divided into the sign, exponent, and fractional part? It also looks like the sign bit is stored at the beginning of the last byte rather than at the beginning of the first byte, as I was expecting:

Float value = 2.000000
Memory representation: 00000000 00000000 00000000 01000000

Float value = -2.000000
Memory representation: 00000000 00000000 00000000 11000000
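
(A minimal sketch of the kind of program that produces this output; this is illustrative, not necessarily the original code. It copies the float's bytes and prints them in memory order, lowest address first, with the most significant bit first within each byte:)

#include <stdio.h>
#include <string.h>

/* Print each byte of f in memory order (lowest address first),
   most significant bit first within each byte. */
static void show_float(float f)
{
    unsigned char bytes[sizeof f];
    memcpy(bytes, &f, sizeof f);

    printf("Float value = %f\n", f);
    printf("Memory representation: ");
    for (size_t i = 0; i < sizeof f; i++) {
        for (int bit = 7; bit >= 0; bit--)
            putchar(bytes[i] >> bit & 1 ? '1' : '0');
        putchar(' ');
    }
    printf("\n\n");
}

int main(void)
{
    show_float(2.0f);
    show_float(-2.0f);
    return 0;
}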

CodePudding user response:

You appear to be printing the bits with a reversed byte order. Macs (both Intel and Apple silicon) are little-endian, so the byte at the lowest address, which your program prints first, is the least significant byte of the encoding. Apple tools use IEEE-754 binary32 for the float type. IEEE-754 only specifies the encoding to bit strings; it does not specify how the bits of the string are ordered in memory. Apple tools store the bit strings in memory as if the bits were interpreted as a binary numeral (most significant bit first) and an unsigned int with that value were written to memory.

The bit string that encodes 2 is 01000000 00000000 00000000 00000000.

The bit string that encodes 4 is 01000000 10000000 00000000 00000000.

The bit string that encodes 8 is 01000001 00000000 00000000 00000000.

The bit string that encodes 16 is 01000001 10000000 00000000 00000000.

The bit string that encodes 32 is 01000010 00000000 00000000 00000000.
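
These strings can be reproduced by copying the float into a 32-bit unsigned integer and printing that integer's bits from the most significant down; a minimal sketch (assuming float is 32 bits, as it is with Apple tools):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Print the binary32 encoding of f as a binary numeral,
   most significant bit first, grouped into bytes. */
static void show_encoding(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* same bytes, reinterpreted as an unsigned integer */

    printf("%9g  ", f);
    for (int i = 31; i >= 0; i--) {
        putchar(bits >> i & 1 ? '1' : '0');
        if (i % 8 == 0 && i != 0)
            putchar(' ');
    }
    putchar('\n');
}

int main(void)
{
    show_encoding(2.0f);
    show_encoding(4.0f);
    show_encoding(8.0f);
    show_encoding(16.0f);
    show_encoding(32.0f);
    return 0;
}

Because the float and the uint32_t use the same byte order, the integer's value equals the bit string read as a binary numeral, regardless of whether the machine is little- or big-endian.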

In the strings as shown above, the first bit encodes the sign. The next eight bits encode the exponent and the first bit of the significand. The last 23 bits encode the remaining bits of the significand.
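
A sketch of splitting a float into those three fields with shifts and masks (assuming the binary32 layout just described):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 32.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    unsigned sign     = bits >> 31;        /* 1 bit */
    unsigned exponent = bits >> 23 & 0xFF; /* 8 bits, biased by 127 */
    unsigned fraction = bits & 0x7FFFFF;   /* low 23 bits of the significand */

    printf("sign = %u, exponent field = %u, fraction field = 0x%06X\n",
           sign, exponent, fraction);
    /* For 32.0f this prints: sign = 0, exponent field = 132, fraction field = 0x000000 */
    return 0;
}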

For 32, the first bit is 0, meaning the sign is +, or (−1)^0 = +1.

The next eight bits are 10000100. Taken as binary, these represent a value of 132. That encodes the exponent with a bias of 127, so the exponent represented is 132 − 127 = 5, and the scale of the floating point number is 2^5. Additionally, the fact that these bits are not all zeros (nor all ones) means the leading bit of the significand is 1.

The remaining 23 bits are zeros, so the remaining bits of the significand are zeros. Thus the significand, in binary, is 1.00000000000000000000000.

So the number represented is +1 • 2^5 • 1.00000000000000000000000 = 32.
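
The same decoding can be checked numerically; a sketch that rebuilds the value from the three fields (assuming a normal number, so the implicit leading significand bit is 1):

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 32.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    int sign           = bits >> 31 ? -1 : +1;
    int exponent       = (int)(bits >> 23 & 0xFF) - 127;      /* remove the bias */
    double significand = 1 + (bits & 0x7FFFFF) / 8388608.0;   /* 1.fraction; 2^23 = 8388608 */

    /* sign * 2^exponent * significand; for 32.0f: +1 * 2^5 * 1.0 = 32 */
    printf("reconstructed value = %g\n", sign * ldexp(significand, exponent));
    return 0;
}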
