I'm trying to write a Bash function that is the inverse of the function in the answer https://stackoverflow.com/a/72687565/1277576.
The purpose is to obtain the decimal representation of a number from its binary representation in two's complement.
dec() {
n=$(getconf LONG_BIT)
x=$(echo "ibase=2; $1" | bc)
echo "if ($x<2^($n-1)) $x else -$((~$x))-1" | bc
}
My issue is that it works only for negative binary integers (that is, when the most significant bit is equal to 1), while it fails for positive ones (that is, when the most significant bit is equal to 0):
$ dec 1111111111111111111111111111111111111111111111111111111111111111
-1
$ dec 1000000000000000000000000000000000000000000000000000000000000000
-9223372036854775808
$ dec 0000000000000000000000000000000000000000000000000000000000000001
(standard_in) 1: syntax error
It seems that the line echo "if ($x<2^($n-1)) $x else -$((~$x))-1" | bc
contains a syntax error, but I don't understand what it is.
CodePudding user response:
$ dec() {
printf 'n=%d; ibase=2; v=%s; v-2^n*(v/2^(n-1))\n' "$(getconf LONG_BIT)" "$1"| bc
}
$ dec 1111111111111111111111111111111111111111111111111111111111111111
-1
$ dec 0000000000000000000000000000000000000000000000000000000000000001
1
CodePudding user response:
The solution:
dec() {
n=$(getconf LONG_BIT)
x=$(echo "ibase=2; $1" | bc)
echo "if ($x<2^($n-1)) $x else -($((~$x))+1)" | bc
}
The problem was the double minus sign, as pointed out in the comments to Conditional IF statement syntax in BC.