This is a very noob question, but I am curious to know the reason behind this: if I debug the following C code:
void floatreturn(float i){
    //nothing
}

int main(){
    float a = 23.976;
    floatreturn(a);
    return 0;
}
Monitoring the passed value of a, it appears to be 23.9759998 when entering floatreturn. As a result, any processing of the value in the function would require manually tweaking the precision. Is there a reason for this, and is there any way to avoid it?
CodePudding user response:
The issue happens before floatreturn(a);. It happens at float a = 23.976;. The call floatreturn(a); is irrelevant.
There are about 2^32 different values that a float can encode exactly. 23.976 is not one of them. The nearest encodable float is about 23.9759998...
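
A quick way to see this (a minimal sketch; the %.9g and %.17g widths are just chosen to expose the rounding) is to print the stored value with enough significant digits:

#include <stdio.h>

int main(void){
    float  a = 23.976f;   /* rounded to the nearest encodable float   */
    double d = 23.976;    /* double gets closer, but is still inexact */
    printf("float : %.9g\n", a);    /* prints 23.9759998             */
    printf("double: %.17g\n", d);   /* prints 23.975999999999999     */
    return 0;
}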
To avoid it, use values that can be encoded exactly as a float, or tolerate being close (within about 1 part in 2^24).
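
If you take the tolerance route, one common pattern (a sketch only; the nearly_equal name and the slack factor of 4 are arbitrary choices for illustration) is to compare against a few multiples of FLT_EPSILON scaled by the magnitude of the operands:

#include <float.h>
#include <math.h>
#include <stdio.h>

/* Treat two floats as equal if they differ by no more than a few
   units in the last place, scaled by the larger magnitude. */
static int nearly_equal(float x, float y){
    float diff  = fabsf(x - y);
    float scale = fmaxf(fabsf(x), fabsf(y));
    return diff <= 4.0f * FLT_EPSILON * scale;
}

int main(void){
    float a = 23.976f;
    printf("%d\n", nearly_equal(a, 23.976f));    /* 1: same rounding        */
    printf("%d\n", nearly_equal(a, 23.975998f)); /* 1: within tolerance     */
    printf("%d\n", nearly_equal(a, 23.98f));     /* 0: genuinely different  */
    return 0;
}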