It appears that JavaScript's number type is exactly the same as C and C++'s double type, and both are IEEE 754-1985.
JavaScript can use IEEE 754 values as integers, but when the number becomes big or undergoes an arithmetic calculation such as division by 10 or by 3, it seems it can switch into floating-point mode. Now, C and C++ only use IEEE 754 for double and therefore only use the floating-point portion and do not use the "integer" portion. Did C and C++ therefore leave the integer representations unused? (And did C leave NaN, Infinity, −Infinity, and −0 unused? I recall never using them in C.)
CodePudding user response:
If that's the case, isn't it true that IEEE 754's representations of [integers and some special values] were all unused, as C and C++ didn't have the capability of referencing them?
This notion appears to stem from the fact that JavaScript uses the IEEE-754 binary64 format for all numbers and performs (or at least defines) bitwise operations by converting the binary64 value to an integer format for the actual operation. (For example, a bitwise AND in JavaScript is defined, via the ECMAScript specification, as the AND of the bits obtained by converting the operands to 32-bit signed integers.)
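To make that concrete, here is a rough C sketch of the ToInt32-style conversion the specification describes (truncate toward zero, reduce modulo 2^32, reinterpret as a signed 32-bit value); the helper name to_int32 is mine, not any standard API:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of ECMAScript's ToInt32: truncate toward zero, reduce
   modulo 2^32, then reinterpret the result as signed 32-bit. */
static int32_t to_int32(double d)
{
    if (isnan(d) || isinf(d))
        return 0;                             /* NaN and infinities map to 0 */
    double m = fmod(trunc(d), 4294967296.0);  /* remainder keeps d's sign */
    if (m < 0)
        m += 4294967296.0;                    /* bring into [0, 2^32) */
    uint32_t bits = (uint32_t)m;
    int32_t result;
    memcpy(&result, &bits, sizeof result);    /* two's-complement view */
    return result;
}

int main(void)
{
    /* In JavaScript, 3000000000.5 | 0 evaluates to -1294967296: the
       operand is truncated and wrapped before the bitwise operation. */
    printf("%d\n", to_int32(3000000000.5));
    return 0;
}
```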
C and C++ do not use this model. Floating-point and integer types are separate, and values are not kept in a common container. C and C++ evaluate expressions based on the types of the operands and do so differently for integer and floating-point operations. If you have some variable x with a floating-point value, it has been declared as a floating-point type, and it behaves that way. If some variable y has been declared with an integer type, it behaves as an integer type.
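A minimal illustration of that separation (names and values are arbitrary): the same source-level operation behaves differently for the two types:

```c
#include <stdio.h>

int main(void)
{
    int    y = 7;    /* integer type: integer arithmetic rules apply */
    double x = 7;    /* floating-point type, even with an integer value */

    printf("%d\n", y / 2);   /* integer division: prints 3 */
    printf("%g\n", x / 2);   /* floating-point division: prints 3.5 */
    return 0;
}
```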
C and C++ do not specify that IEEE 754 is used, except that C has an optional annex that specifies the equivalent of IEEE 754 (IEC 60559), and C and C++ implementations may choose to use IEEE-754 formats and to conform to it. The IEEE-754 binary64 format is overwhelmingly used for double by C and C++ implementations, although many do not fully conform to IEEE-754 in their implementations.
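One way to see what a particular implementation promises is to check the optional-annex macro and the double format parameters from float.h; output varies by implementation:

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
#ifdef __STDC_IEC_559__
    puts("Implementation claims IEC 60559 (IEEE-754) conformance.");
#else
    puts("No claim of IEC 60559 conformance.");
#endif
    /* binary64 has a 53-bit significand: 52 stored bits plus the
       implicit leading bit. */
    printf("DBL_MANT_DIG   = %d\n", DBL_MANT_DIG);
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}
```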
In the binary64 format, the encoding consists of a sign bit S, an 11-bit “exponent” code E, and a 52-bit “significand” code F (F for “fraction,” since S is already taken for the sign bit). The value represented is:
- If E is 2047 and F is not zero, the value represented is NaN. The bits of F may be used to convey supplemental information, and S remains an isolated sign bit.
- If E is 2047 and F is zero, the value represented is ∞ or −∞ according to whether S is 0 or 1.
- If E is neither 0 nor 2047, the value represented is (−1)^S • (1 + F/2^52) • 2^(E−1023).
- If E is zero, the value represented is (−1)^S • (0 + F/2^52) • 2^(1−1023). In particular, when S is 1 and F is 0, the value is said to be −0, which is equal to but distinguished from +0.
These representations include all the integers from −(2^53 − 1) to 2^53 − 1 (and more), both infinities, both zeros, and NaN.
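Those fields can be inspected directly; the sketch below decodes S, E, and F from a double's bits, assuming the usual case where double is binary64:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = -0.0;                 /* try 123.0, 1.0/0.0, 0.0/0.0, ... */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);  /* view the binary64 encoding */

    uint64_t S = bits >> 63;               /* 1-bit sign */
    uint64_t E = (bits >> 52) & 0x7FF;     /* 11-bit exponent code */
    uint64_t F = bits & 0xFFFFFFFFFFFFF;   /* 52-bit significand code */

    /* For -0.0 this prints S = 1, E = 0, F = 0x0. */
    printf("S = %llu, E = %llu, F = 0x%llx\n",
           (unsigned long long)S, (unsigned long long)E,
           (unsigned long long)F);
    return 0;
}
```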
If a double has some integer value, say 123, then it simply has that integer value. It does not become an int and is not treated as an integer type by C or C++.
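For instance, assigning the int constant 123 to a double converts it to the double value 123.0 at that point; afterward the variable follows floating-point rules, and C11's _Generic shows its static type is still double:

```c
#include <stdio.h>

int main(void)
{
    double d = 123;   /* the int constant converts to double 123.0 here */

    /* _Generic selects on the static type: d is a double regardless of
       what value it currently holds. */
    puts(_Generic(d, double: "double", int: "int", default: "other"));
    printf("%g\n", d / 2);   /* floating-point rules apply: prints 61.5 */
    return 0;
}
```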
But from −(2^53 − 1) to 2^53 − 1, that's a lot of numbers unused…
There are no encodings unused in the binary64 format, except that one might consider the numerous NaN encodings wasted. Indeed many implementations do waste them by making them inaccessible or hard to access by programs. However, the IEEE-754 standard leaves them available for whatever purposes users may wish to put them to, and there are people who use them for debugging information, such as recording the program counter where a NaN was created.
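A sketch of that debugging trick, constructing a quiet NaN whose payload carries a small tag and reading it back; note that whether hardware and library operations preserve the payload is implementation-specific, and the bit layout assumes binary64:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build a quiet NaN carrying `tag` in the low payload bits:
   E = 2047 (all ones), the quiet bit (top bit of F) set, and the
   remaining F bits holding the tag. */
static double nan_with_tag(uint32_t tag)
{
    uint64_t bits = (0x7FFULL << 52)  /* exponent code all ones */
                  | (1ULL << 51)      /* quiet-NaN bit */
                  | tag;              /* payload */
    double d;
    memcpy(&d, &bits, sizeof d);
    return d;
}

/* Recover the low payload bits from a NaN. */
static uint32_t tag_of(double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    return (uint32_t)(bits & 0xFFFFFFFF);
}

int main(void)
{
    double d = nan_with_tag(42);
    printf("isnan = %d, tag = %u\n", isnan(d), tag_of(d)); /* 1 and 42 */
    return 0;
}
```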
CodePudding user response:
The int number 123 is exactly the same as the double number 123.0, as you can easily see by testing 123 == 123.0. Their representations are different internally, though.
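Both halves of that can be seen at once: the comparison converts the int to double and compares equal, while the stored bytes differ. The byte values shown assume a typical little-endian machine with 32-bit int and binary64 double:

```c
#include <stdio.h>

static void dump(const void *p, size_t n)
{
    const unsigned char *b = p;
    for (size_t i = 0; i < n; i++)
        printf("%02x ", b[i]);
    putchar('\n');
}

int main(void)
{
    int    i = 123;
    double d = 123.0;

    printf("equal: %d\n", i == d);  /* 1: i converts to double first */
    dump(&i, sizeof i);             /* e.g. 7b 00 00 00 */
    dump(&d, sizeof d);             /* e.g. 00 00 00 00 00 c0 5e 40 */
    return 0;
}
```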