Here is my code:
using static System.Console;

namespace ConsoleApp2
{
    internal class Program
    {
        static void Main(string[] args)
        {
            double[] doubles = new[] { 9.05, 9.15, 9.25, 9.35, 9.45, 9.55, 9.65, 9.75, 9.85, 9.95 };
            foreach (double n in doubles)
            {
                WriteLine("{0} ===> {1:F1}", n, n);
            }
        }
    }
}
Output in .NET Framework 4.7.2:
9.05 ===> 9.1
9.15 ===> 9.2
9.25 ===> 9.3
9.35 ===> 9.4
9.45 ===> 9.5
9.55 ===> 9.6
9.65 ===> 9.7
9.75 ===> 9.8
9.85 ===> 9.9
9.95 ===> 10.0
Output in .NET 6 (with the same code):
9.05 ===> 9.1
9.15 ===> 9.2
9.25 ===> 9.2
9.35 ===> 9.3
9.45 ===> 9.4
9.55 ===> 9.6
9.65 ===> 9.7
9.75 ===> 9.8
9.85 ===> 9.8
9.95 ===> 9.9
So, in .NET Framework, the numbers were rounded just like we were taught in school, which Wikipedia calls round half up.
But in .NET 6, 9.05, 9.15, 9.55, 9.65, and 9.75 were rounded up, while 9.25, 9.35, 9.45, 9.85, and 9.95 were rounded down.
I know there is a rule called round half to even: round to the nearest value, and if the number falls midway, round to the nearest value with an even least significant digit.
But this is obviously not round half to even either, since some of the numbers were rounded to an odd digit.
How can we explain the difference between .NET Framework 4.7.2 and .NET 6, and how can I round the numbers in .NET 6 the same way as in .NET Framework?
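(For reference, the two midpoint rules mentioned above can be demonstrated directly with Math.Round. This sketch uses decimal literals so the midpoints are stored exactly and no floating-point representation effects interfere:)

```csharp
using System;

class MidpointDemo
{
    static void Main()
    {
        // Round half up (for positive numbers, AwayFromZero behaves like the school rule):
        Console.WriteLine(Math.Round(2.5m, 0, MidpointRounding.AwayFromZero)); // 3

        // Round half to even (banker's rounding):
        Console.WriteLine(Math.Round(2.5m, 0, MidpointRounding.ToEven));       // 2 (2 is even)
        Console.WriteLine(Math.Round(3.5m, 0, MidpointRounding.ToEven));       // 4 (4 is even)
    }
}
```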
CodePudding user response:
Use decimal, not double; otherwise you're not starting with the exact values you think you are. With decimal you get the expected results:
9.05 ===> 9.1
9.15 ===> 9.2
9.25 ===> 9.3
9.35 ===> 9.4
9.45 ===> 9.5
9.55 ===> 9.6
9.65 ===> 9.7
9.75 ===> 9.8
9.85 ===> 9.9
9.95 ===> 10.0
CodePudding user response:
The Microsoft documentation has this info carefully hidden in the Standard numeric format strings page (it's probably elsewhere as well, but not in the Double.ToString docs).
Here's the important excerpt, for posterity:
When the precision specifier controls the number of fractional digits in the result string, the result string reflects a number that is rounded to a representable result nearest to the infinitely precise result. If there are two equally near representable results:
On .NET Framework and .NET Core up to .NET Core 2.0, the runtime selects the result with the greater least significant digit (that is, using MidpointRounding.AwayFromZero).
On .NET Core 2.1 and later, the runtime selects the result with an even least significant digit (that is, using MidpointRounding.ToEven).
Since .NET 5 and later mostly continue the Core line, despite Microsoft's confusing statements about how the two have been merged, .NET 6 pretty clearly falls under the second case.
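(An added illustration, not part of either answer: the .NET 6 results only look inconsistent because most of these doubles are not exact midpoints at all. The "G17" format shows the value a double actually stores. A workaround that appears to recover the Framework-style output is to convert through decimal before formatting, since the Decimal(Double) conversion rounds to 15 significant digits and decimal formatting rounds midpoints away from zero:)

```csharp
using System;

class RoundingExplained
{
    static void Main()
    {
        // G17 round-trips the stored double value:
        Console.WriteLine(9.05.ToString("G17")); // slightly above 9.05, so it rounds up
        Console.WriteLine(9.35.ToString("G17")); // slightly below 9.35, so it rounds down
        Console.WriteLine(9.25.ToString("G17")); // exactly 9.25 (representable in binary),
                                                 // a true midpoint, rounded to even -> 9.2

        // Workaround: convert to decimal first, then format.
        // Decimal(Double) rounds to 15 significant digits, so (decimal)9.35 == 9.35m,
        // and decimal's F1 formatting rounds that midpoint away from zero -> 9.4.
        double[] doubles = { 9.05, 9.15, 9.25, 9.35, 9.45, 9.55, 9.65, 9.75, 9.85, 9.95 };
        foreach (double n in doubles)
        {
            Console.WriteLine("{0} ===> {1:F1}", n, (decimal)n);
        }
    }
}
```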