Converting float to decimal: why is an explicit conversion needed?


I know that the decimal type is more precise than float, so I expected that converting a float to a decimal would be implicit, just like converting a float to a double. But in reality, that isn't the case.

float a = 1.1f;
double d = a;           // implicit float -> double conversion: allowed

decimal c = (decimal)d; // OK: explicit cast compiles
decimal e = (decimal)a; // OK: explicit cast compiles

decimal f = a;          // error CS0266: no implicit conversion from float to decimal

CodePudding user response:

  • A float has the approximate range ±1.5 × 10^−45 to ±3.4 × 10^38 and a precision of 6-9 digits.
  • A decimal has the approximate range ±1.0 × 10^−28 to ±7.9228 × 10^28 and a precision of 28-29 digits.

Converting a float to a decimal throws an OverflowException at runtime if the float's value is outside the decimal's range.

Therefore you get a compile error if you try to do that without casting. C# errs on the side of caution in these cases.
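To make that concrete, here is a minimal console sketch showing both sides: the compile-time error for the implicit assignment and the run-time OverflowException for an out-of-range cast. The class name and the sample value 3.4e38f are just illustrative choices.

using System;

class FloatToDecimalDemo
{
    static void Main()
    {
        float a = 1.1f;

        // decimal f = a;           // does not compile: error CS0266

        decimal ok = (decimal)a;    // explicit cast compiles and succeeds for in-range values
        Console.WriteLine(ok);      // 1.1

        float huge = 3.4e38f;       // far beyond decimal's maximum (~7.9e28)
        try
        {
            decimal overflowed = (decimal)huge;
            Console.WriteLine(overflowed);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Value was too large for a decimal.");
        }
    }
}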

Here we can see the difference between range and precision.

A float has greater range than a decimal and a decimal has greater precision than a float.
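To see the precision side in action, here is a short sketch (again with an illustrative class name); it also shows why the question's (decimal)d and (decimal)a do not necessarily produce the same decimal value:

using System;

class PrecisionDemo
{
    static void Main()
    {
        decimal m = 1.1m;   // decimal stores 1.1 exactly (it is a base-10 type)
        float a = 1.1f;     // float stores the nearest base-2 value, slightly off 1.1
        double d = a;       // widening to double keeps the float's binary error

        Console.WriteLine(m);                 // 1.1
        Console.WriteLine(a.ToString("G9"));  // 1.10000002
        Console.WriteLine((decimal)a);        // 1.1              (rounded to float's ~7 digits)
        Console.WriteLine((decimal)d);        // 1.10000002384186 (double carries more of the error)
    }
}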

(Note that you also get a compile error if you try to assign a decimal to a float without casting, but this time because it might lose data rather than because it would throw an exception.)
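The same explicit-cast rule applies in that direction too; a quick sketch (the constant is arbitrary, chosen only because it has more digits than a float can represent):

using System;

class DecimalToFloatDemo
{
    static void Main()
    {
        decimal exact = 0.1234567890123456789012345678m; // 28 significant digits

        // float f = exact;          // does not compile: error CS0266 here as well
        float f = (float)exact;      // the cast compiles, but only ~7 digits survive

        Console.WriteLine(exact);             // 0.1234567890123456789012345678
        Console.WriteLine(f.ToString("G9"));  // 0.123456791
    }
}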
