For example, 0.0000000000000000000000000001 is represented as (lo mid hi flags), flags in hex:
1 0 0 1c0000
When the above is divided by 10, the result is (lo mid hi flags):
0 0 0 0
But when it is multiplied by 0.1M, the result is (lo mid hi flags):
0 0 0 1c0000
In other words, according to Decimal, 0.0000000000000000000000000001 multiplied by 0.1 is 0.0000000000000000000000000000 (zero with a scale of 0x1c = 28), but divided by 10 it is plain 0 (zero with a scale of 0).
The following shows different results:
var o = 0.0000000000000000000000000001M;
Console.WriteLine($"{o * 0.1M}");
Console.WriteLine($"{o / 10M}");
I need to be able to replicate this behaviour, and all other Decimal arithmetic, in a virtual machine. Can someone point me to a spec or explain the rationale? System.Decimal.cs does not seem to offer any insight.
CodePudding user response:
The C# language spec says:
The result of an operation on values of type decimal is that which would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the representation. Results are rounded to the nearest representable value, and, when a result is equally close to two representable values, to the value that has an even number in the least significant digit position (this is known as “banker’s rounding”). That is, results are exact to at least the 28th decimal place. Note that rounding may produce a zero value from a non-zero value.
Decimal carries at most 28 decimal places, so the nearest representable value to the exact result (1e-29) in your example is zero.
decimal d28 = 1e-28m; // 0.0000000000000000000000000001
Console.WriteLine(d28 / 10); // prints 0
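The "banker's rounding" the spec mentions is also observable directly. As a quick sketch (the literals are chosen so the exact product falls exactly halfway between two representable values):
// 1.5e-28 and 2.5e-28 are both exact midpoints at the 28th decimal
// place; round-half-to-even sends both to the even digit, 2e-28.
Console.WriteLine(0.0000000000000000000000000015M * 0.1M); // 0.0000000000000000000000000002
Console.WriteLine(0.0000000000000000000000000025M * 0.1M); // 0.0000000000000000000000000002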
The class implementation is available in the .NET runtime sources (Decimal.cs); the math operators are implemented in a helper class, DecCalc (Decimal.DecCalc.cs).
A minor note from the source (the Decimal(int[] bits) constructor) about different representations (significant digits) being numerically equivalent:
// Note that there are several possible binary representations for the
// same numeric value. For example, the value 1 can be represented as {1,
// 0, 0, 0} (integer value 1 with a scale factor of 0) and equally well as
// {1000, 0, 0, 0x30000} (integer value 1000 with a scale factor of 3).
// The possible binary representations of a particular value are all
// equally valid, and all are numerically equivalent.
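That equivalence is easy to observe, and it is one more thing a faithful reimplementation has to preserve: equal values can carry different scales. A quick illustration:
decimal one = 1m;      // bits {1, 0, 0, 0x00000000}, scale 0
decimal onek = 1.000m; // bits {1000, 0, 0, 0x00030000}, scale 3
Console.WriteLine(one == onek);        // True
Console.WriteLine($"{one} vs {onek}"); // 1 vs 1.000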