Printf a double


I have a problem printing a double in C. My code:

#include <stdio.h>

int main(void)
{
    double n;
    scanf("%lf", &n);
    printf("%f", n);
    return 0;
}

input: 446486416781684178

output: 446486416781684160

Why does the number change?

CodePudding user response:

The number you entered can't be represented exactly in a double.

Typically, a double is represented in IEEE 754 double-precision format, which holds 53 bits of precision. The value you entered requires 58 bits, so what gets stored is either the next or the previous representable value.
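You can see this for yourself. A minimal sketch (not from the original program; it assumes IEEE 754 doubles and round-to-nearest, and may need -lm to link): <float.h> reports the significand width, and nextafter from <math.h> shows the representable neighbors of the stored value.

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    double n = 446486416781684178.0;  /* rounds on conversion */

    printf("significand bits: %d\n", DBL_MANT_DIG);          /* typically 53 */
    printf("stored value:     %.0f\n", n);                   /* 446486416781684160 */
    printf("next above:       %.0f\n", nextafter(n, INFINITY));
    printf("next below:       %.0f\n", nextafter(n, -INFINITY));
    return 0;
}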

CodePudding user response:

Why does the number change?

It got rounded off, because type double has finite precision.

We're used to roundoff happening to the right of the decimal point. If we write

double d = 0.123456789012345678;
printf("%.18f\n", d);

we are not too surprised if it prints

0.123456789012345677

Type double has the equivalent of about 16 decimal digits' worth of precision (actually it's more complicated than that), so it definitely can't represent all 18 digits of that number 0.123456789012345678.
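If you want your implementation's exact figures, <float.h> spells them out. A quick sketch (DBL_DECIMAL_DIG requires C11; the comments show typical IEEE 754 values):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* decimal digits guaranteed to survive a round trip through double */
    printf("DBL_DIG         = %d\n", DBL_DIG);          /* typically 15 */
    /* decimal digits needed to print a double exactly (C11) */
    printf("DBL_DECIMAL_DIG = %d\n", DBL_DECIMAL_DIG);  /* typically 17 */
    return 0;
}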

But your number 446486416781684178 also has 18 significant digits, so we can't be too surprised that it can't be represented exactly, either. In other words, roundoff can happen to the left of the decimal point, also.

Internally, type double can represent numbers with 53 bits of precision. That means it can represent integers up to 2^53, or 9007199254740992, with perfect accuracy. But any bigger than that, and it can't represent them all. It can represent 9007199254740994, but if you try to store 9007199254740993, it gets rounded down to 9007199254740992. If we look at the binary representations of these and nearby numbers, we can see why:

Decimal Binary
9007199254740990  11111111111111111111111111111111111111111111111111110
9007199254740991  11111111111111111111111111111111111111111111111111111
9007199254740992 100000000000000000000000000000000000000000000000000000
9007199254740993 100000000000000000000000000000000000000000000000000001
9007199254740994 100000000000000000000000000000000000000000000000000010
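You can watch that rounding happen with a short program (a sketch, not from the original answer; the output assumes the usual IEEE 754 round-to-nearest-even mode):

#include <stdio.h>

int main(void)
{
    double a = 9007199254740992.0;  /* 2^53, exactly representable */
    double b = 9007199254740993.0;  /* 2^53 + 1, rounds down to 2^53 */
    double c = 9007199254740994.0;  /* 2^53 + 2, exactly representable */

    printf("%.0f\n%.0f\n%.0f\n", a, b, c);
    printf("a == b? %s\n", a == b ? "yes" : "no");  /* yes: both hold 2^53 */
    return 0;
}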

Since we only have 53 bits of significance, for a 54-bit number like 9007199254740992 or 9007199254740994 the low-order (54th) bit has to be 0, which basically means we can only represent even numbers in that range. When we get up to a 59-bit number like 446486416781684178, the last six bits have to be 0, which means we can only represent numbers which are a multiple of 2^6, or 64:

Decimal Binary
446486416781684160 11000110010001111010000011111001101010011110010110111000000
446486416781684161 11000110010001111010000011111001101010011110010110111000001
... ...
446486416781684177 11000110010001111010000011111001101010011110010110111010001
446486416781684178 11000110010001111010000011111001101010011110010110111010010
446486416781684179 11000110010001111010000011111001101010011110010110111010011
... ...
446486416781684223 11000110010001111010000011111001101010011110010110111111111
446486416781684224 11000110010001111010000011111001101010011110010111000000000
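So any integer you type in that stretch collapses to one of those multiples of 64. A quick sketch using the same "%lf" conversion as the question (output assumes IEEE 754 doubles):

#include <stdio.h>

int main(void)
{
    double x, y, z;

    sscanf("446486416781684178", "%lf", &x);  /* the questioner's input */
    sscanf("446486416781684160", "%lf", &y);  /* multiple of 64 */
    sscanf("446486416781684224", "%lf", &z);  /* next multiple of 64 */

    printf("%.0f\n", x);  /* 446486416781684160: rounded to nearest */
    printf("%.0f\n", y);  /* 446486416781684160: exactly representable */
    printf("%.0f\n", z);  /* 446486416781684224: exactly representable */
    return 0;
}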