I'm trying to understand something about `sin` and `sinf` from `math.h`.

I understand that their types differ: the former takes and returns `double`s, and the latter takes and returns `float`s. However, GCC still compiles my code if I call `sin` with `float` arguments:
```c
#include <stdio.h>
#include <math.h>

#define PI 3.14159265

int main ()
{
    float x, result;

    x = 135 / 180 * PI;
    result = sin (x);
    printf ("The sin of (x=%f) is %f\n", x, result);
    return 0;
}
```
By default, everything compiles just fine (even with `-Wall`, `-std=c99` and `-Wpedantic`; I need to work with C99). GCC won't complain about me passing `float`s to `sin`. If I enable `-Wconversion`, then GCC tells me:
```
warning: conversion to ‘float’ from ‘double’ may alter its value [-Wfloat-conversion]
     result = sin (x);
              ^~~
```
So my question is: is there a `float` input for which calling `sin` as above, and (implicitly) converting the result back to `float`, will produce a value different from the one obtained using `sinf`?
CodePudding user response:
This program finds three examples on my machine:
```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int i;
    float f, f1, f2;

    for(i = 0; i < 10000; i++) {
        f = (float)rand() / RAND_MAX;
        f1 = sinf(f);
        f2 = sin(f);
        if(f1 != f2) printf("jackpot: %.8f %.8f %.8f\n", f, f1, f2);
    }

    return 0;
}
```
I got:

```
jackpot: 0.98704159 0.83439910 0.83439904
jackpot: 0.78605396 0.70757037 0.70757031
jackpot: 0.78636044 0.70778692 0.70778686
```
CodePudding user response:
This will find all the `float` input values in the range `0.0` to `2 * M_PI` where `(float)sin(input) != sinf(input)`:
```c
#include <stdio.h>
#include <math.h>
#include <float.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    for (float in = 0.0; in < 2 * M_PI; in = nextafterf(in, FLT_MAX)) {
        float sin_result  = (float)sin(in);
        float sinf_result = sinf(in);

        if (sin_result != sinf_result) {
            printf("sin(%.*g) = %.*g, sinf(%.*g) = %.*g\n",
                   FLT_DECIMAL_DIG, in, FLT_DECIMAL_DIG, sin_result,
                   FLT_DECIMAL_DIG, in, FLT_DECIMAL_DIG, sinf_result);
        }
    }
    return 0;
}
```
There are 1020963 such inputs on my amd64 Linux system with glibc 2.32.
CodePudding user response:
`float` precision is approximately 6 significant decimal figures, while `double` is good for about 15. (These figures are approximate because the types are binary, not decimal, floating-point formats.)

So, for example, a `double` value `1.23456789` will become `1.23456xxx` as a `float`, where the digits `xxx` are unlikely to be `789` in this case.

Clearly not all (in fact very few) `double` values are exactly representable as `float`, so the value will change when down-converted.
So for:

```c
double a = 1.23456789 ;
float b = a ;

printf( "double: %.10f\n", a ) ;
printf( "float:  %.10f\n", b ) ;
```
The result in my test was:

```
double: 1.2345678900
float:  1.2345678806
```
As you can see, the `float` in fact retained 8 significant figures in this case, but that is by no means guaranteed for all possible values.
In your test you have limited the number of instances of mismatch because of the limited and finite range of `rand()`, and also because `f` itself is a `float`. Consider:
```c
#include <stdio.h>
#include <math.h>

int main()
{
    unsigned mismatch_count = 0 ;
    unsigned iterations = 0 ;

    for( double f = 0; f < 6.28318530718; f += 0.000001)
    {
        float f1 = sinf(f);
        float f2 = sin(f);

        iterations++ ;
        if(f1 != f2)
        {
            mismatch_count++ ;
        }
    }

    printf("%f%%\n", (double)mismatch_count / iterations * 100.0);
}
```
In my test about 55% of comparisons mismatched. Changing `f` to `float`, the mismatches reduced to 1.3%.
So in your test you see few mismatches because of the constraints of your method of generating `f` and its type. In the general case the issue is much more obvious.

In some cases you might see no mismatches at all: an implementation may simply implement `sinf()` by calling `sin()` with explicit casts. The compiler warning is for the general case of implicitly converting a `double` to a `float`, without reference to any operations performed prior to the conversion.