Constructing a .NET Framework Decimal from the double value 99999999999.999945999 truncates it to 99999999999.9999, yet increasing the value to 99999999999.999946999 rounds it to 100000000000. Please explain this behavior: why does one value appear to truncate while the other rounds up? And why does the Decimal not retain the full precision/scale of 99999999999.999945999 or 99999999999.999946999?
decimal d1 = new decimal(99999999999.999945999); // d1 == 99999999999.9999
decimal d2 = new decimal(99999999999.999946999); // d2 == 100000000000
.NET Framework version: 4.7.2
Edit: Not sure why this question was down-voted. Serious issues can arise if you are not aware of this Decimal type behavior, including losing resolution when persisting to a database via SqlClient.
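To illustrate the SqlClient point, here is a minimal, hypothetical sketch (the Measurements table, its DECIMAL(38, 18) column, and the Save method are my own assumptions, not from the original post): when a SqlParameter's Precision and Scale are left to be inferred from the value, digits beyond the inferred scale can be silently truncated, so setting them explicitly to match the column avoids that.

using System.Data;
using System.Data.SqlClient;

// Hypothetical schema: CREATE TABLE Measurements (Value DECIMAL(38, 18))
static void Save(SqlConnection conn, decimal value)
{
    using (var cmd = new SqlCommand(
        "INSERT INTO Measurements (Value) VALUES (@value)", conn))
    {
        var p = new SqlParameter("@value", SqlDbType.Decimal)
        {
            Precision = 38, // match the column definition explicitly,
            Scale = 18,     // otherwise SqlClient infers them from the value
            Value = value
        };
        cmd.Parameters.Add(p);
        cmd.ExecuteNonQuery();
    }
}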
CodePudding user response:
Your question is justified, but this is a documented choice Microsoft made in the Decimal constructor that takes a Double parameter. Per the documentation for Decimal(Double), the constructor rounds the value to 15 significant digits using rounding to nearest, and it does so even if the number has more than 15 digits and the less significant digits are zero.
That explains both results. The nearest Double to 99999999999.999945999 falls just below 99999999999.99995, so rounding at the 15th significant digit leaves 99999999999.9999, which merely looks like truncation. The nearest Double to 99999999999.999946999 falls just above 99999999999.99995, so the 15th digit rounds up and the carry ripples through the trailing 9s, producing 100000000000.
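If you need every digit, avoid the Double intermediate entirely. A minimal sketch (the class and variable names are illustrative): a decimal literal with the m suffix, or decimal.Parse on a string, converts directly to Decimal and retains up to its 28-29 significant digits.

using System;
using System.Globalization;

class DecimalPrecisionDemo
{
    static void Main()
    {
        // Through a Double: the constructor rounds to 15 significant digits.
        Console.WriteLine(new decimal(99999999999.999945999)); // 99999999999.9999
        Console.WriteLine(new decimal(99999999999.999946999)); // 100000000000

        // No Double intermediate: all 20 significant digits survive.
        decimal literal = 99999999999.999945999m;
        decimal parsed = decimal.Parse("99999999999.999946999",
                                       CultureInfo.InvariantCulture);
        Console.WriteLine(literal); // 99999999999.999945999
        Console.WriteLine(parsed);  // 99999999999.999946999
    }
}

The literal route only works for compile-time constants; for values that arrive at runtime as text, decimal.Parse (or Convert.ToDecimal on a string) keeps the precision that a pass through Double would discard.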