I have been reading some definitions on the internet that say things like: " ...Since on most computers the "int" data type is 2 bytes, or 16 bits, it can only store 2^16 numbers..."
and
" ... And since 2^16=65535, it can only hold that many numbers ... " - for unsigned int
I've also seen on some website that the maximum value an int variable can hold is "2,147,483,647". Then I started wondering about the relation between the number 65535 and the number 2,147,483,647.
I did some tests, and I saw that the maximum value I can actually store in an int variable is 2,147,483,647. So what does 65535 actually mean then?
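For reference, this kind of test can be done without trial and error by printing the limits from <limits.h> (a minimal sketch):

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* INT_MIN and INT_MAX come from <limits.h> and give the
       range of int for whatever platform this is compiled on. */
    printf("INT_MIN = %d\n", INT_MIN);
    printf("INT_MAX = %d\n", INT_MAX);
    return 0;
}
```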
CodePudding user response:
The size of an int is not necessarily the same on all implementations. The C standard dictates that the range of an int must be at least -32767 to 32767, but it can be more. On most systems you're likely to come in contact with, an int will have the range -2,147,483,648 to 2,147,483,647, i.e. a 32-bit two's complement representation.
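You can check what your own implementation uses; a minimal sketch:

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* sizeof gives the size in bytes; CHAR_BIT (bits per byte,
       8 on virtually every modern platform) converts to bits. */
    printf("int: %zu bytes, %zu bits\n",
           sizeof(int), sizeof(int) * CHAR_BIT);
    printf("range: %d to %d\n", INT_MIN, INT_MAX);
    return 0;
}
```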
CodePudding user response:
16 bits can represent a fixed number of unique combinations. To figure out that number, you just raise 2 to the power of the number of bits: 2^16 is 65536. Since counting starts at zero, the maximum value is 65535. That's for an unsigned integer, though.
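To see that concretely, here's a small sketch with uint16_t, an exact 16-bit unsigned type from <stdint.h>: one step past 65535 wraps back around to 0, because only 65536 distinct values exist.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* uint16_t is exactly 16 bits, so it holds 2^16 = 65536
       distinct values: 0 through 65535. */
    uint16_t u = 65535;
    u = u + 1;  /* wraps around modulo 2^16 */
    printf("%u\n", (unsigned)u);  /* prints 0 */
    return 0;
}
```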
A signed integer reserves the top bit for the sign (in the two's complement representation used in practice, that bit carries a negative weight rather than acting as a plain sign flag). That leaves 15 bits to express the magnitude. 2^15 is 32768, so in the positive direction the number can go from 0 to 32767, and in the negative direction the lowest it can go is -32768. The total number of combinations is still 65536; the bits just mean something different in the case of a signed 16-bit integer.
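A short sketch of that "same bits, different meaning" point: copy the all-ones 16-bit pattern into a signed variable and it reads as -1, since int16_t is required to use two's complement.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint16_t bits = 0xFFFF;  /* all 16 bits set */
    int16_t  s;

    /* Copy the raw bit pattern into a signed 16-bit variable;
       int16_t is required to be two's complement. */
    memcpy(&s, &bits, sizeof s);

    printf("as uint16_t: %u\n", (unsigned)bits);  /* 65535 */
    printf("as int16_t:  %d\n", (int)s);          /* -1 */
    return 0;
}
```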
With 32-bit integers, the logic is exactly the same. 2^32 is 4294967296, meaning the highest a 32-bit unsigned number can go is 4294967295. There are exactly 2^32 possible combinations, but because counting starts at zero, the highest value is exactly one less than 2^32. If you reserve one bit for the sign, the same logic as for 16-bit numbers applies: 2^31 is 2147483648, so a signed 32-bit integer can go up to 2^31 - 1, which is 2147483647, and down to -2147483648.
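The same arithmetic spelled out with bit shifts, done on a 64-bit type so that 2^32 itself doesn't overflow:

```c
#include <stdio.h>

int main(void)
{
    /* unsigned long long is at least 64 bits, so shifting
       past bit 31 is safe here. */
    unsigned long long two_to_31 = 1ULL << 31;
    unsigned long long two_to_32 = 1ULL << 32;

    printf("2^32 - 1 = %llu\n", two_to_32 - 1);  /* 4294967295 */
    printf("2^31 - 1 = %llu\n", two_to_31 - 1);  /* 2147483647 */
    return 0;
}
```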
As for which data types use 8, 16, 32 or 64 bits, that's platform-dependent. An int these days is almost always 32 bits, but that hasn't always been the case. If you want a data type to have a guaranteed size, you have to specifically choose one with an explicit size. In C, this can be done with the types defined in stdint.h. For instance, uint32_t is an unsigned integer with exactly 32 bits, meaning that on any platform you can guarantee the highest it can go is 4294967295.
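For example, a minimal sketch using the fixed-width types (the PRIu32 format macro comes from <inttypes.h>):

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* uint32_t is exactly 32 bits wherever it exists, so this
       maximum is the same on every platform: 2^32 - 1. */
    uint32_t big = UINT32_MAX;
    printf("largest uint32_t: %" PRIu32 "\n", big);  /* 4294967295 */
    return 0;
}
```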