Float results difference on sin function compiled with g++ on two versions of Ubuntu


I have tested my code, developed on an Ubuntu 18.04 (bionic) Docker image, on an Ubuntu 20.04 (focal) Docker image. I saw that there was a problem with my unit tests, and I have narrowed the root cause down to a simple main.cpp:

#include <iostream>
#include <iomanip>
#include <math.h>
int main()
{
    const float DEG_TO_RAD_FLOAT = float(M_PI / 180.);
    float theta = 22.0f;
    theta = theta * DEG_TO_RAD_FLOAT;
    std::cout << std::setprecision(20) << theta << ' ' << sin(theta) << std::endl;
    return 0;
}

On the bionic Docker image, I have upgraded my version of g++ using these commands:

sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt install -y gcc-9 g++-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 --slave /usr/bin/g++ g++ /usr/bin/g++-9 --slave /usr/bin/gcov gcov /usr/bin/gcov-9
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 70 --slave /usr/bin/g++ g++ /usr/bin/g++-7 --slave /usr/bin/gcov gcov /usr/bin/gcov-7

My g++ version is the same on both: 9.4.0.

On Ubuntu 18.04, the program outputs: 0.38397243618965148926 0.37460657954216003418

On Ubuntu 20.04, the program outputs: 0.38397243618965148926 0.37460660934448242188

As you can see, the difference is in sin(theta), at the 7th decimal place. The only difference I can think of is the version of libc, which is 2.27 on Ubuntu 18.04 and 2.31 on Ubuntu 20.04.

I have tried several g++ options (-mfpmath=sse, -fPIC, -ffloat-store, -msse, -msse2) but they had no effect.

The real problem is that with the Windows build of my code, compiled with /fp:precise, I get the same results as on Ubuntu 18.04: 0.38397243618965148926 0.37460657954216003418

Is there any way to force g++ to produce the same results as my Windows compiler, please?

CodePudding user response:

Setting aside whether or not there is any guarantee that the exact results of calls to the mathematical functions stay consistent across library versions, you are also relying on unspecified behavior.

Specifically, you are including <math.h> in a C++ program. This will make sin from the C standard library available in the global namespace scope, but it is unspecified whether or not it will also make the sin overloads from the C++ standard library available in the global namespace scope.

C's sin function operates on double, while C++ adds an overload for float. So it is unspecified whether you are calling the overload operating on double or the one operating on float, and depending on that you will get a differently rounded result.

To guarantee a call to the float overload, include <cmath> instead and call std::sin instead of sin.
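
For illustration, here is a minimal sketch of my own (not from the question) contrasting the two overloads, assuming glibc's M_PI is available as in the original program:

#include <cmath>
#include <iomanip>
#include <iostream>
int main()
{
    const float theta = 22.0f * float(M_PI / 180.);

    // float overload: argument and result stay in single precision
    float s_float = std::sin(theta);

    // double overload: the float argument is widened to double first,
    // so the result carries extra significant digits and may round differently
    double s_double = std::sin(static_cast<double>(theta));

    std::cout << std::setprecision(20) << s_float << ' ' << s_double << std::endl;
    return 0;
}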

Also, depending on optimization flags, GCC may not actually call the sin function at run time but instead constant-fold the value itself. In that case the result may have a different rounding or accuracy.

CodePudding user response:

Well, investigating a slightly modified version of your test program:

#include <iostream>
#include <iomanip>
#include <cmath>
int main()
{
    const float DEG_TO_RAD_FLOAT = float(M_PI / 180.);
    float theta = 22.0f;
    theta = theta * DEG_TO_RAD_FLOAT;
    std::cout << std::setprecision(20) << theta << ' ' << std::sin(theta) 
      << ' ' << std::hexfloat << std::sin(theta) << std::endl;
    return 0;
}

The changes are: 1) use <cmath> and std::sin instead of <math.h>, and 2) also print the hex representation of the calculated sine value. I'm using GCC 11.2 on Ubuntu 22.04 here.

Without optimizations I get

$ g++ prec1.cpp
$ ./a.out 
0.38397243618965148926 0.37460660934448242188 0x1.7f98ep-2

which is the result you got on Ubuntu 20.04. With optimization enabled, however:

$ g++ -O2 prec1.cpp
$ ./a.out 
0.38397243618965148926 0.37460657954216003418 0x1.7f98dep-2

which is what you got on Ubuntu 18.04.

So why does it produce different results depending on optimization level? Investigating the generated assembler code gives a clue:

$ g++ prec1.cpp -S
$ grep sin prec1.s
    .section    .text._ZSt3sinf,"axG",@progbits,_ZSt3sinf,comdat
    .weak   _ZSt3sinf
    .type   _ZSt3sinf, @function
_ZSt3sinf:
    call    sinf@PLT
    .size   _ZSt3sinf, .-_ZSt3sinf
    call    _ZSt3sinf
    call    _ZSt3sinf

So what does this mean? Well, it calls sinf (which lives in libm, the math library part of glibc). Now, for the optimized version:

$ g++ -O2 prec1.cpp -S
$ grep sin prec1.s
$ 

Empty! What does that mean? It means that rather than calling sinf at run time, the value was computed at compile time (GCC uses the MPFR library for constant-folding floating-point expressions).
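
One way to double-check that (a sketch of my own, not part of the original program) is to make the input opaque to the optimizer, for instance with volatile, so the call cannot be constant-folded and has to go through libm's sinf even at -O2:

#include <cmath>
#include <iomanip>
#include <iostream>
int main()
{
    const float DEG_TO_RAD_FLOAT = float(M_PI / 180.);
    // The volatile read keeps the optimizer from treating theta as a
    // compile-time constant, so std::sin cannot be constant-folded and
    // libm's sinf is called at run time regardless of -O2.
    volatile float theta = 22.0f * DEG_TO_RAD_FLOAT;
    std::cout << std::setprecision(20) << std::sin(theta) << std::endl;
    return 0;
}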

So the results differ because, depending on the optimization level, two different implementations of the sine function are being used.

Now, finally, let's look at the hex values my modified test program printed. You can see the unoptimized value ends in e0 (the trailing zero not being printed) vs de for the optimized one. That is a difference of 2 in the last printed hex digit, which for a float's 24-bit significand works out to 1 ulp, and, well, you can't really expect different implementations of a trigonometric function to agree more closely than that.
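
As a quick sanity check (again a sketch of my own, reusing the two hexfloat values printed above), the distance between the two results can be computed directly from their bit patterns:

#include <cstdint>
#include <cstring>
#include <iostream>

// Distance in units in the last place between two positive finite floats,
// computed from their bit patterns (no sign/NaN/infinity handling needed here).
std::uint32_t ulp_distance(float a, float b)
{
    std::uint32_t ia, ib;
    std::memcpy(&ia, &a, sizeof ia);
    std::memcpy(&ib, &b, sizeof ib);
    return ia > ib ? ia - ib : ib - ia;
}

int main()
{
    const float runtime_result = 0x1.7f98e0p-2f; // unoptimized build, libm sinf
    const float folded_result  = 0x1.7f98dep-2f; // -O2 build, constant-folded
    std::cout << ulp_distance(runtime_result, folded_result) << " ulp" << std::endl;
    return 0;
}

This should report a distance of 1 for these two values.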
