A question about type conversion in C language

Time:12-22

How should I understand the conversion of the character '!' to int in C, i.e. the process of going from a 1-byte 33 to a 4-byte 33?

I know that 33 is the ASCII code of '!', but why is the value 33 in both cases even though they use different amounts of storage? I'm very confused.

CodePudding user response:

Binary numbers on computers have a fixed number of bits, which means they have leading zeroes. Without the leading zeroes, 33 = 1*32 + 0*16 + 0*8 + 0*4 + 0*2 + 1*1 = binary 100001 unsigned.

With an 8-bit char type, this is binary 00100001.

With a 32-bit int type, this is binary 00000000 00000000 00000000 00100001.

As long as the leading bits are 0, nothing is lost or added when you prepend or truncate leading zeroes. Only if you would have to touch 1-bits would the value change.

For a real-world base-10 analogue, think of a mechanical step counter ("clicker") with a button: all the digits are physically there. Or think of a digital clock, where at least the minutes always have leading zeroes, like 12:01 instead of 12:1. It's the same with computers: all the bits of a byte are physically there in the memory chip.


Signed integers add just a bit more complexity. Basically, negative numbers have leading ones instead of leading zeroes, so again nothing changes when you truncate or extend, as long as the leftmost bit doesn't change (it must be 1 for negative, 0 for positive).

So if you have 100001 on paper (in a real computer you never have just 6 bits; you have at least 1 byte, which is 8 bits on any computer you are ever likely to use) and you know it is signed, it is actually negative, and extending it to 8 bits gives 11100001. So you have to "know" whether a binary number is negative or positive; you cannot tell from the bits themselves.

CodePudding user response:

How do you write numbers? Just imagine that computers use fixed-length fields (like the fields on administrative paper forms) to represent numbers. Computers use base-2 representation, but it is just like writing 33 in a field of length 2, or in a field of length 4, or 8, etc.

| 3 | 3 |
| 0 | 0 | 3 | 3 |
| 0 | 0 | 0 | 0 | 0 | 0 | 3 | 3 |

Same number, different writings.

Types in programming languages are a matter of how things are represented in storage (memory) and how they are operated on (operator semantics, implemented with machine instructions).

int is basically a type that represents common (not too big) integer values, with arithmetic operators on them.

char is basically a type for very short integer values, with additional translation (e.g. ASCII) from/to character literals and on input/output.
