I've got some code I'm using to do comparisons, and I want to start with infinite values. Here's a snippet of my code.
package main

import (
	"fmt"
	"math"
)

func snippet(arr []int) {
	least := int(math.Inf(1))     // convert +Inf to int
	greatest := int(math.Inf(-1)) // convert -Inf to int
	fmt.Println("least", math.Inf(1), least)
	fmt.Println("greatest", math.Inf(-1), greatest)
}

func main() {
	snippet(nil)
}
and here's the output I get from the console
least Inf -9223372036854775808
greatest -Inf -9223372036854775808
Why is Inf coerced into a negative int?
CodePudding user response:
Infinity is not representable by int.

According to the Go spec:

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

Maybe you are looking for the largest representable int? How to get it is explained here.
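For example, a minimal sketch (math.MaxInt and math.MinInt need Go 1.17 or newer; the hand-derived constants work on older versions):

package main

import (
	"fmt"
	"math"
)

func main() {
	// Predeclared limits, available since Go 1.17.
	fmt.Println(math.MaxInt, math.MinInt)     // limits of the platform-sized int
	fmt.Println(math.MaxInt64, math.MinInt64) // explicit 64-bit limits

	// The same values derived by hand, for older Go versions.
	const maxInt = int(^uint(0) >> 1) // all bits set, shifted right once
	const minInt = -maxInt - 1
	fmt.Println(maxInt, minInt)
}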
CodePudding user response:
math.Inf() returns an IEEE double-precision float representing positive infinity if the sign of the argument is >= 0, and negative infinity if the sign is < 0, so your code is incorrect.

But the Go language specification (always good to read the specifications) says this:
Conversions between numeric types . . . In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
Two's complement integer values don't have the concept of infinity, so the result is implementation dependent.
Myself, I'd have expected to get the largest or smallest integer value for the integer type the conversion targets, but apparently that's not the case.
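If saturating behaviour (clamping to the largest or smallest value) is what you want, you have to do it yourself before converting. A rough sketch, with a hand-rolled clamp64 helper that is my own naming and policy, not anything the runtime provides:

package main

import (
	"fmt"
	"math"
)

// clamp64 converts f to int64, saturating at the int64 limits and mapping NaN to 0.
func clamp64(f float64) int64 {
	switch {
	case math.IsNaN(f):
		return 0
	case f >= 1<<63: // 2^63 or above does not fit in int64 (includes +Inf)
		return math.MaxInt64
	case f < -(1 << 63): // below -2^63 does not fit (includes -Inf)
		return math.MinInt64
	}
	return int64(f) // in range, so the conversion is well-defined
}

func main() {
	fmt.Println(clamp64(math.Inf(1)))  // 9223372036854775807
	fmt.Println(clamp64(math.Inf(-1))) // -9223372036854775808
	fmt.Println(clamp64(123.9))        // 123 (truncated toward zero)
}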
The runtime source file responsible for the conversion is https://go.dev/src/runtime/softfloat64.go; the relevant code is quoted further down.
Note that an IEEE-754 double-precision float is a 64-bit double word, consisting of:
- a sign bit, the high-order (most significant/leftmost) bit, 0 indicating positive and 1 indicating negative,
- an exponent (biased), consisting of the next 11 bits, and
- a mantissa, consisting of the remaining 52 bits, which can be denormalized.
Positive infinity is a special value with a sign bit of 0, an exponent of all 1 bits, and a mantissa of all 0 bits:

0 11111111111 0000000000000000000000000000000000000000000000000000

or 0x7FF0000000000000.
Negative infinity is the same, with the exception that the sign bit is 1:

1 11111111111 0000000000000000000000000000000000000000000000000000

or 0xFFF0000000000000.
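You can confirm those bit patterns from Go itself; a minimal sketch using math.Float64bits:

package main

import (
	"fmt"
	"math"
)

func main() {
	// Reinterpret each float64's bits as a uint64 and print them in hex.
	fmt.Printf("+Inf: %#016x\n", math.Float64bits(math.Inf(1)))  // 0x7ff0000000000000
	fmt.Printf("-Inf: %#016x\n", math.Float64bits(math.Inf(-1))) // 0xfff0000000000000
}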
Looks like funpack64() returns 5 values:
- a uint64 representing the sign (0, or the very large non-zero value 0x8000000000000000),
- a uint64 representing the normalized mantissa,
- an int representing the exponent,
- a bool indicating whether or not this is +/- infinity, and
- a bool indicating whether or not this is NaN.
From that, you should be able to figure out why it returns the value it does.
[Frankly, I'm surprised that f64toint() doesn't short-circuit when funpack64() returns fi = true.]
const mantbits64 uint = 52
const expbits64 uint = 11
const bias64 = -1<<(expbits64-1) + 1

func f64toint(f uint64) (val int64, ok bool) {
	fs, fm, fe, fi, fn := funpack64(f)

	switch {
	case fi, fn: // NaN
		return 0, false

	case fe < -1: // f < 0.5
		return 0, false

	case fe > 63: // f >= 2^63
		if fs != 0 && fm == 0 { // f == -2^63
			return -1 << 63, true
		}
		if fs != 0 {
			return 0, false
		}
		return 0, false
	}

	for fe > int(mantbits64) {
		fe--
		fm <<= 1
	}
	for fe < int(mantbits64) {
		fe++
		fm >>= 1
	}

	val = int64(fm)
	if fs != 0 {
		val = -val
	}
	return val, true
}
func funpack64(f uint64) (sign, mant uint64, exp int, inf, nan bool) {
	sign = f & (1 << (mantbits64 + expbits64))
	mant = f & (1<<mantbits64 - 1)
	exp = int(f>>mantbits64) & (1<<expbits64 - 1)

	switch exp {
	case 1<<expbits64 - 1:
		if mant != 0 {
			nan = true
			return
		}
		inf = true
		return

	case 0:
		// denormalized
		if mant != 0 {
			exp = bias64 + 1
			for mant < 1<<mantbits64 {
				mant <<= 1
				exp--
			}
		}

	default:
		// add implicit top bit
		mant |= 1 << mantbits64
		exp += bias64
	}
	return
}
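As a rough, self-contained illustration of that unpacking, here is the same masking done with math.Float64bits outside the runtime (a sketch, not runtime code):

package main

import (
	"fmt"
	"math"
)

func main() {
	const mantbits, expbits = 52, 11

	f := math.Float64bits(math.Inf(1)) // 0x7ff0000000000000

	sign := f >> (mantbits + expbits)         // 0 for +Inf, 1 for -Inf
	exp := (f >> mantbits) & (1<<expbits - 1) // 0x7ff: every exponent bit set
	mant := f & (1<<mantbits - 1)             // 0: every mantissa bit clear

	fmt.Printf("sign=%d exp=%#x mant=%#x\n", sign, exp, mant)
	fmt.Println("infinity?", exp == 1<<expbits-1 && mant == 0) // true
}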