If I compile the following code
size_t a = -1;
with MSVC with the /W4 option I get
warning C4245: 'initializing': conversion from 'int' to 'size_t', signed/unsigned mismatch
and I am not 100% sure that -1 is 0xFFFFFFFF on all platforms. Is the bit representation of -1 defined by the standard?
Other options are:
size_t a = std::numeric_limits<size_t>::max();
size_t a = static_cast<size_t>(-1);
Are there some other alternatives?
The type can also be uint8_t, uint16_t, etc. And in the code above I do not know the actual size of size_t; it can be 4 bytes or 8 bytes, for example.
CodePudding user response:
To set all bits and silence the warning, without a cast, for all unsigned types:
a = 1;
a = -a;
or
a = 0;
a = ~a;
If you need a const or constexpr value (and really don't want to use casts):
constexpr size_t a = [] { size_t a = 0; return ~a; }();
CodePudding user response:
size_t a = -1;
This will initialize a with the biggest value size_t can hold. That result is defined in terms of modular arithmetic, not bit patterns, so it holds regardless of whether signed integers use two's complement or something else.
Unsigned integers are required to be encoded directly as their binary representation, so the largest value will always have the 0xFF...FF bit pattern.
To silence the warning, both of your solutions work. It's just a matter of personal taste which one you use.
CodePudding user response:
Is -1 bit representation defined by the standard?
The signed bit representation doesn't matter. What matters is how the conversion from signed to unsigned is specified in the standard.
Converting -1 to an unsigned integer type always results in the largest representable value of that type, and it's safe to assume that the bit representation of the largest representable unsigned value is all ones.
An assumption that isn't portable to all systems is that std::size_t is 64 bits wide. It's entirely feasible that a == 0xFFFF on another system.