Why is .0 dropped in division but not in +, - and *?


So, as per C# rules, dividing an int by an int produces an int, while dividing a floating-point number by an int produces a floating-point result.

But the following produces unexpected output:

int a = 45; //integer
decimal b = 5.0m; //floating point
Console.WriteLine(a/b); // "9"

Output is: 9 - the output looks like an integer. Why?

Note that +, -, and * produce the expected result:

  Console.WriteLine(a * b); // "225.0"

Dividing by a non-whole number produces the expected result:

int a = 45; //integer
decimal b = 5.5m; //floating point
Console.WriteLine(a/b); // "8.181818181818181818"

Output is: 8.181818181818181818 - here the / output is a floating-point value, which is okay.

Can anyone explain this?

The results are more consistent for float / double - no trailing zeros in the output for any operation (which makes sense, as those types don't store information about the number of digits after the decimal point).
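For comparison, here is a minimal sketch (assuming the same kind of console program) showing that double drops the trailing zero in every operation:

double x = 45;
double y = 5.0;
Console.WriteLine(x / y); // "9"   - no trailing ".0"
Console.WriteLine(x * y); // "225" - no trailing ".0" here either, unlike decimal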

CodePudding user response:

The C# standard goes into detail on this, in section 12.9.3:

The scale of the result, before any rounding, is the closest scale to the preferred scale that will preserve a result equal to the exact result. The preferred scale is the scale of x less the scale of y.

So, to apply that, we have x with a value of 45m (after an implicit conversion from int to decimal), which has a scale of 0, and y with a value of 5.0m, which has a scale of 1.

Therefore the preferred scale is -1 - which would be invalid. (The scale is always non-negative.) The closest scale that can preserve the exact result is 0, so that's the actual scale - the result is equivalent to 9m rather than 9.0m.
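A minimal sketch, assuming an ordinary console program, that exercises the scale rule quoted above:

Console.WriteLine(45m / 5.0m);    // "9"   - preferred scale 0 - 1 = -1, clamped to 0
Console.WriteLine(45.0m / 5m);    // "9.0" - preferred scale 1 - 0 = 1
Console.WriteLine(45.00m / 5.0m); // "9.0" - preferred scale 2 - 1 = 1
Console.WriteLine(1m / 3m);       // "0.3333333333333333333333333333" - no scale keeps
                                  // this exact, so decimal keeps as many digits as it can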

CodePudding user response:

I tested it and got this result:

int a = 45; //integer
decimal b = 5.0m; //floating point
var r = a/b; // r is decimal
Console.WriteLine(r);  // 9


a = 45; //integer
b = 5.5m; //decimal floating point
r = a/b;   // r is decimal
Console.WriteLine(r);  // 8.181818181818181818181818182

As I could see in a debugger, the output was decimal in both cases - not an integer in the first case either.

The compiler makes an implicit conversion if the result type is not defined explicitly in the code. There are many ways to cast types. The main rule here is: if there are several operands in an expression, the type of the result is the largest of the operand types. For example, if you divide an int by a double, the result is a double. If you multiply a byte and an int, the result is an int.
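A short sketch of those promotions (the variable names are just for illustration):

int i = 7;
double d = 2.0;
Console.WriteLine((i / d).GetType());  // System.Double - the int is promoted to double
Console.WriteLine(i / 2);              // 3 - int / int stays int, so the fraction is dropped

byte bt = 3;
Console.WriteLine((bt * i).GetType()); // System.Int32 - the byte is promoted to int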

More examples (now with multiplication):

int a = 45; //integer

decimal b = 5.0m; //floating point
var rm = a*b; // rm is decimal
Console.WriteLine(rm);  // 225.0

b = 5.5m; //decimal floating point
var r2m = a*b; // r2m is decimal
Console.WriteLine(r2m);  // 247.5

UPDATE

Since the question was changed significantly by @AlexeiLevenkov, and he asks in the comments why .0 is dropped in division, these examples show that it is not always dropped:

int a = 5; //integer

decimal b = 45m; //floating point
var r = b/a; // r is decimal
Console.WriteLine(r);  // 9

b = 45.0m; //floating point
r = b/a;   // r is decimal
Console.WriteLine(r);  // 9.0

b = 45.00m; //floating point
r = b/a;
Console.WriteLine(r);  // 9.00

b = 45.000m; //floating point
r = b/a;
Console.WriteLine(r);  // 9.000

These examples show that, by default, the quotient has as many trailing zeros as the dividend has.
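One way to see those trailing zeros directly is to read the scale that a decimal stores. The Scale helper below is a hypothetical name introduced just for this sketch, but decimal.GetBits is a real API; the scale lives in bits 16-23 of the fourth element it returns:

static int Scale(decimal d) => (decimal.GetBits(d)[3] >> 16) & 0xFF; // hypothetical helper

Console.WriteLine(Scale(45m / 5m));     // 0 -> prints as "9"
Console.WriteLine(Scale(45.0m / 5m));   // 1 -> prints as "9.0"
Console.WriteLine(Scale(45.000m / 5m)); // 3 -> prints as "9.000"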

UPDATE 2

If you try this, you will get the error "Cannot implicitly convert type 'decimal' to 'int'":

int r = a/b; // error!!!

but this is OK

int r = a/(int)b; 

and this is OK too

decimal r = a/b; 