I wrote a simple binary search to find sqrt(x).
public int mySqrt(int x) {
    if (x <= 1) return x;
    int low = 0, high = x >> 1;
    while (low <= high) {
        int mid = (low + high) >>> 1;   // unsigned shift avoids overflow in the midpoint
        long sq = (long) mid * mid;     // widen before multiplying
        if (sq > x) high = mid - 1;
        else if (sq < x) low = mid + 1;
        else return mid;
    }
    return high;
}
For this sample input:
Input:
2147395599
Output:
1073697799
Expected:
46339
My code gives the wrong output when I write:
long sq = (long) (mid * mid);
instead of
long sq = (long) mid * mid;
Why does this happen?
CodePudding user response:
For long sq = (long) (mid * mid);, you first multiply mid by itself. As mid is an int, this is evaluated as a multiplication of two ints. Since your input can be large, the product exceeds the maximum value an int can store, and the result overflows before the cast is ever applied.
Whereas for long sq = (long) mid * mid;, you first convert mid to a long and then multiply by mid. The int operand is promoted, so the multiplication is carried out in long arithmetic, and for the values of your input the product easily fits within the maximum value a long can store.
Hence, only the second form gives the right output.
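A minimal sketch of the difference, using 46341 as the multiplicand since its square (2147488281) is just above Integer.MAX_VALUE (2147483647):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int mid = 46341;                     // 46341^2 = 2147488281 > Integer.MAX_VALUE
        long wrong = (long) (mid * mid);     // int * int wraps around, then widens
        long right = (long) mid * mid;       // widens first, then multiplies as longs
        System.out.println(wrong);           // prints -2147479015 (overflowed value)
        System.out.println(right);           // prints 2147488281 (correct square)
    }
}
```

The cast in the first line is applied to a value that has already wrapped around, so it cannot recover the lost bits.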
CodePudding user response:
Integer overflow. In the first snippet, you compute the product of two ints (which may be bigger than what can fit in an int) and then convert the already-overflowed result to a long.
In the second, you first promote mid to a long and then multiply by an int (which is promoted to a long as well), so the result is computed as a long.
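A note on parsing that may help: in Java a unary cast binds more tightly than `*`, so `(long) mid * mid` means `((long) mid) * mid`, not a cast of the whole product. A quick check:

```java
public class CastPrecedence {
    public static void main(String[] args) {
        int mid = 46341;
        // The cast applies only to the left operand; these two are identical:
        long a = (long) mid * mid;
        long b = ((long) mid) * mid;
        System.out.println(a == b);   // prints true
        System.out.println(a);        // prints 2147488281
    }
}
```

That is why only the parenthesized form `(long) (mid * mid)` performs the multiplication in int arithmetic.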