I have some cross-platform code I'm working with. On the Mac it's compiled with Clang; on Windows it's compiled with Visual C++.
There is a calculation that can be sensitive, and there was a difference between Mac and Windows that was triggering asserts. It turns out there is a difference between acos results, but I'm not clear why.
On both platforms, the input to acos is exactly -1.0f. In Visual C++, acos(-1.0f) is 3.14159274. That's the value of pi as a float, which is what I'd expect.
But on macOS:
float value = acos(-1.0f);
...evaluates to 3.1415925. That's just enough of an accuracy difference to trigger issues in the code. I understand that acos is an operation that can be imprecise with float, and that different compilers can have different implementations of acos. I'm just unclear why Clang seems to have issues with such a simple acos result while Visual C++ doesn't. A float is capable of representing 3.14159274, but that's not the result I'm getting.
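In case it's useful, here is the minimal repro I'm testing with (calling acosf explicitly and dumping the raw bits via memcpy; both of those details are mine, not from the original project):

#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float value = acosf(-1.0f);
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);  /* reinterpret the float's raw bits */
    /* On my machines: Windows/Visual C++ prints 3.14159274 0x40490FDB,
       macOS/Clang prints 3.1415925 0x40490FDA. */
    printf("%.9g 0x%08X\n", value, (unsigned)bits);
    return 0;
}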
It is possible to get an accurate, Visual C++-aligned value out of Xcode's version of Clang with:
float value = (float)acos((double)-1.0f);
So I can fix the issue by moving to higher precision and then casting the value back down to float, which preserves the same rounding as Windows. I'm just looking for a justification for why the extra precision is necessary when the Visual C++ compiler doesn't seem to have a precision issue. It could be a difference between the Clang/Xcode and Visual C++ math libraries as well. I just assumed that acos(-1.0) might be more settled across compilers. I couldn't find any difference in rounding modes (even though rounding should not be necessary), and fresh projects in Xcode and Visual Studio show the same difference. Both machines are Intel.
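For now I've wrapped the workaround in a small helper so the call sites stay readable (the helper name is just something I made up):

#include <math.h>

/* Hypothetical helper: compute acos in double precision, then round once
   back down to float; on my machines this matches the Visual C++ result. */
static float acos_matching_msvc(float x) {
    return (float)acos((double)x);
}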
Answer:
If you look at the binary representation of these floating-point values, you can see that the Mac/Clang value A is the next representable float below the Windows/MSVC value B:

A 3.14159250 0x40490FDA
B 3.14159274 0x40490FDB
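You can check this adjacency yourself; a small sketch using nextafterf from <math.h> (the variable names are mine):

#include <math.h>
#include <stdio.h>

int main(void) {
    float b = 3.14159274f;           /* 0x40490FDB, the MSVC result */
    float a = nextafterf(b, 0.0f);   /* next representable float toward zero */
    printf("%.9g\n", a);             /* prints 3.1415925, the Clang result */
    return 0;
}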
Whilst B is closest to the true value of π, it is actually greater than π, as @njuffa points out in their comment. Reading the specification, it looks like acosf is supposed to return a value in the closed range [0, π]. Technically A meets this criterion whilst B doesn't.
In summary:
- A is the closest value to π that is still less than π
- B is the closest value to π overall
The difference between these may be the result of a deliberate decision by the respective standard library implementors.
I'd also observe that both values are true inverses of cosf, as both cosf(A) and cosf(B) equal -1.0f.
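As a quick check (a sketch; exact cosf results can in principle vary by math library, but cos(x) near π differs from -1 by far less than half a float ulp, so any reasonably accurate implementation returns exactly -1.0f for both inputs):

#include <math.h>
#include <stdio.h>

int main(void) {
    const float a = 3.1415925f;      /* 0x40490FDA, the Clang result */
    const float b = 3.14159274f;     /* 0x40490FDB, the MSVC result */
    printf("%d %d\n", cosf(a) == -1.0f, cosf(b) == -1.0f);  /* expect: 1 1 */
    return 0;
}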
Generally speaking, though, it is unwise to rely on exact bit-level accuracy with any floating point calculations. If you are not already aware of it, the document What Every Computer Scientist Should Know About Floating-Point Arithmetic explains why.