Difference between C and C# hex values for doubles


I am replacing some C code that writes a binary file with a C# (net6.0) implementation, and I'm noticing a discrepancy between the values written to the file.

If I have a double precision value equal to 0.0, C writes the bytes as:

00 00 00 00 00 00 00 00

However, C# (using BinaryWriter) writes the same value as:

00 00 00 00 00 00 00 80

If I convert 0.0 to bytes using System.BitConverter.GetBytes(), I get all 0x00 bytes. If I convert the bytes ending in 0x80 back to a double using BitConverter, it still gives me 0.0.

byte[] zero = System.BitConverter.GetBytes(0.0); // 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
bool equal = 0.0 == System.BitConverter.ToDouble(new byte[] { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80}); // true

This doesn't seem like it will cause any problems, but I'd like to understand what's going on.

CodePudding user response:

As mentioned in the comments, that trailing 0x80 byte is actually the high byte of the double value, and the most significant bit of that byte (the bit that is set here) is the sign bit. This means the number being stored is actually -0.0, which in virtually all cases compares equal to 0.0, so it shouldn't cause any problems.
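
To see this concretely, here is a small sketch using the same BitConverter calls as in the question. The byte order shown assumes a little-endian machine, which is why the sign byte ends up as the last byte in the file:

using System;

class NegativeZeroDemo
{
    static void Main()
    {
        // -0.0 differs from 0.0 only in the sign bit, which lives in the most
        // significant byte; on a little-endian machine that byte comes last.
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(-0.0))); // 00-00-00-00-00-00-00-80
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(0.0)));  // 00-00-00-00-00-00-00-00

        // IEEE 754 comparison treats the two zeros as equal.
        Console.WriteLine(-0.0 == 0.0); // True
    }
}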

In fact, for an IEEE 754 double, the sign bit is the only bit that can differ between the two representations of zero: if any exponent or fraction bit is set to 1, the value being represented is no longer (+/-) zero.
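
A small sketch that splits the raw bit pattern into the standard IEEE 754 binary64 fields (1 sign bit, 11 exponent bits, 52 fraction bits) shows that the two zeros differ only in the sign field:

using System;

class DoubleBits
{
    // Split a double into its IEEE 754 binary64 fields: sign, exponent, fraction.
    static void Dump(string label, double value)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        int sign = (int)((bits >> 63) & 0x1);
        int exponent = (int)((bits >> 52) & 0x7FF);
        long fraction = bits & 0xFFFFFFFFFFFFF;
        Console.WriteLine($"{label}: sign={sign} exponent=0x{exponent:X3} fraction=0x{fraction:X13}");
    }

    static void Main()
    {
        Dump(" 0.0", 0.0);   //  0.0: sign=0 exponent=0x000 fraction=0x0000000000000
        Dump("-0.0", -0.0);  // -0.0: sign=1 exponent=0x000 fraction=0x0000000000000
    }
}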

How did we get -0.0? Can't say without seeing the code that generates the value but, from the Wikipedia article on signed zero:

One may obtain negative zero as the result of certain computations, for instance as the result of arithmetic underflow on a negative number (other results may also be possible), or −1.0×0.0, or simply as −0.0.
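
If you want to track down where the C# port produces -0.0, expressions like the ones below are typical sources, and BinaryWriter then simply copies those eight bytes to the file, giving the trailing 0x80. The specific expressions here are only illustrations, not taken from the question's code:

using System;
using System.IO;

class NegativeZeroSources
{
    static void Main()
    {
        // A few computations that yield negative zero (illustrative only).
        double a = -1.0 * 0.0;        // a negative factor times zero
        double b = -1e-300 * 1e-300;  // arithmetic underflow of a negative product
        double c = 0.0 / -5.0;        // zero divided by a negative number

        foreach (double d in new[] { a, b, c })
        {
            // True  00-00-00-00-00-00-00-80  (on a little-endian machine)
            Console.WriteLine($"{double.IsNegative(d)}  {BitConverter.ToString(BitConverter.GetBytes(d))}");
        }

        // BinaryWriter writes those same eight bytes, hence the trailing 0x80 in the file.
        using var stream = new MemoryStream();
        using var writer = new BinaryWriter(stream);
        writer.Write(a);
        writer.Flush();
        Console.WriteLine(BitConverter.ToString(stream.ToArray())); // 00-00-00-00-00-00-00-80
    }
}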
