Why does the C code below output "Difference: 0.000000"? I need to do calculations with many decimal places for one of my university tasks, and I don't understand this because I'm new to programming in C. Am I using the correct type? Thanks in advance.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <math.h>
int main() {
    long double a = 1.00000001;
    long double b = 1.00000000;
    long double difference = a - b;
    printf("Difference: %Lf", difference);
}
I have tried that code, and I was expecting to get the result: "Difference: 0.00000001".
CodePudding user response:
You see 0.000000 because %Lf prints a fixed number of decimal places, and the default is 6. In your case the difference is 1 in the 8th decimal place, so it shows as 0.000000 when printed to 6 decimal places. Either use %Le or %Lg, or specify more precision: %.8Lf.
#include <stdio.h>

int main(void)
{
    long double a = 1.00000001;
    long double b = 1.00000000;
    long double difference = a - b;

    printf("Difference: %Lf\n", difference);   /* default: 6 decimal places */
    printf("Difference: %.8Lf\n", difference); /* 8 decimal places */
    printf("Difference: %Le\n", difference);   /* scientific notation */
    printf("Difference: %Lg\n", difference);   /* %Le or %Lf style, whichever is shorter */
    return 0;
}
Note the minimal set of headers.
Output:
Difference: 0.000000
Difference: 0.00000001
Difference: 1.000000e-08
Difference: 1e-08
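One more point that matters for the "many decimals" part of your task: 1.00000001 has no exact binary representation, and without an L suffix the literal is parsed as a double, so your long double variable only receives double precision. A minimal sketch (assuming a typical x86 80-bit long double) that makes the rounding error visible:

#include <stdio.h>

int main(void)
{
    /* Without the L suffix the constant is a double; with it, the full
       precision of long double is used. Neither is exactly 1.00000001,
       because that value has no finite binary representation. */
    long double as_double = 1.00000001;
    long double as_long_double = 1.00000001L;

    printf("double literal:      %.25Lf\n", as_double);
    printf("long double literal: %.25Lf\n", as_long_double);
    return 0;
}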
CodePudding user response:
#include <stdio.h>

int main(void) {
    long double a = 1.000000001;
    long double b = 1.000000000;
    long double difference = a - b;
    printf("Difference: %.9Lf\n", difference);
    return 0;
}
Try this code. You actually need to tell printf (not the compiler) how much precision you want after the decimal point: the .9 here prints 9 digits after the decimal point. You can adjust this value to your needs; just don't ask for more digits than the type can reliably store.
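If you are unsure how many digits that is, <float.h> defines LDBL_DIG, the number of decimal digits a long double can round-trip reliably, and printf accepts the precision as a * argument. A minimal sketch:

#include <stdio.h>
#include <float.h>

int main(void) {
    /* LDBL_DIG is the number of decimal digits a long double can
       hold reliably (commonly 18 on x86); digits beyond it are noise. */
    long double difference = 1.000000001L - 1.000000000L;
    printf("Reliable digits: %d\n", LDBL_DIG);
    printf("Difference: %.*Lf\n", LDBL_DIG, difference);
    return 0;
}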