C++ Primer Exercise 4.25: converting a binary number


I have a question regarding Exercise 4.25 in C++ Primer:

Exercise 4.25: What is the value of ~'q' << 6 on a machine with 32-bit ints and 8-bit chars, that uses the Latin-1 character set in which 'q' has the bit pattern 01110001?

I have the solution in binary, but I don't understand how it converts to an int:

#include <bitset>
#include <iostream>
using namespace std;

int main()
{
    cout << std::bitset<8 * sizeof(~'q' << 6)>(~'q' << 6) << endl;
    cout << (~'q' << 6) << endl;
    return 0;
}

After executing, the following 2 lines are printed:

11111111111111111110001110000000

-7296

The first line is what I expected, but I don't understand how it is converted to -7296. I would expect a much larger number. Online converters also give a different result.

Thanks in advance for the help.

CodePudding user response:

In order to answer the question, we need to analyze the types of the subexpressions and the precedence of the operators involved.

For this we can refer to the documentation on character constants and on operator precedence.

'q' represents an int as described in the first link:

single-byte integer character constant, e.g. 'a' or '\n' or '\13'. Such constant has type int ...

'q' is thus equivalent to the int value of its Latin-1 code (binary 01110001), widened to fit a 32-bit integer: 00000000 00000000 00000000 01110001.

The operator ~ has higher precedence than the operator <<, so the bitwise negation is performed first. The result is 11111111 11111111 11111111 10001110.

Then a left shift is performed, dropping the leftmost 6 bits of the value and padding with 0s on the right: 11111111 11111111 11100011 10000000.
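
Here is a minimal sketch (in the same spirit as your program) that prints each intermediate step, assuming 32-bit ints:

#include <bitset>
#include <iostream>

int main()
{
    std::cout << std::bitset<32>('q') << std::endl;        // 00000000000000000000000001110001
    std::cout << std::bitset<32>(~'q') << std::endl;       // 11111111111111111111111110001110
    std::cout << std::bitset<32>(~'q' << 6) << std::endl;  // 11111111111111111110001110000000
    return 0;
}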

Now, regarding the second half of your question: cout << (~'q' << 6) << endl; interprets this value as a (signed) int. As the cited documentation notes:

However, all C++ compilers use two's complement representation, and as of C++20, it is the only representation allowed by the standard, with the guaranteed range from −2^(N−1) to 2^(N−1)−1 (e.g. -128 to 127 for a signed 8-bit type).

The two's complement interpretation of 11111111 11111111 11100011 10000000 on a 32-bit machine is the decimal value -7296. The number is not as large as you might expect because, when you start from decimal -1 (binary 11111111 11111111 11111111 11111111) and count down, the binary representations all keep many leading 1s. The leftmost bit is 1 for a negative number and 0 for a non-negative one. When you widen a negative value to more bits (e.g. from 32 to 64), the new bits on the left are filled with 1s until you reach 64 bits (sign extension).
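
To illustrate the sign extension, here is a small sketch that prints the same value at 32 and at 64 bits; the decimal value stays -7296, only the number of leading 1s grows:

#include <bitset>
#include <cstdint>
#include <iostream>

int main()
{
    std::int32_t v = ~'q' << 6;                                              // -7296
    std::cout << std::bitset<32>(v) << std::endl;                            // 32-bit pattern
    std::cout << std::bitset<64>(static_cast<std::int64_t>(v)) << std::endl; // sign-extended to 64 bits
    std::cout << v << std::endl;                                             // -7296
    return 0;
}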

CodePudding user response:

I don't understand how is it converted to -7296.

It (the second value) is the decimal interpretation of the same bit pattern as a signed two's-complement number.
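
A small sketch of that idea: take the same 32-bit pattern and read it once as unsigned and once as signed (the cast below is implementation-defined before C++20, but on a two's-complement machine it wraps modulo 2^32):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t bits = 0xFFFFE380u;  // 11111111 11111111 11100011 10000000
    std::cout << bits << std::endl;                             // 4294960000 (unsigned reading)
    std::cout << static_cast<std::int32_t>(bits) << std::endl;  // -7296 (two's-complement reading)
    return 0;
}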

CodePudding user response:

~'q' << 6
= (~'q') << 6                                        (~ binds tighter than <<)
= (~113) << 6
= (~(00000000 00000000 00000000 01110001)) << 6
= (11111111 11111111 11111111 10001110) << 6         (= -114)
= -114 * 64
= -7296

When using an online converter, you may have forgotten the 0s in front of 113, which the negation turns into leading 1s.
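
As a quick check of the arithmetic above (using * 64 instead of << 6, since left-shifting a negative value is only well-defined from C++20 on):

#include <iostream>

int main()
{
    std::cout << ~113 << std::endl;       // -114, because ~x == -(x + 1) in two's complement
    std::cout << -114 * 64 << std::endl;  // -7296
    return 0;
}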
