A book on C programming states,
enum corvid { magpie, raven, jay, chough, corvid_num, };

#define FLOCK_MAGPIE 1U
#define FLOCK_RAVEN  2U
#define FLOCK_JAY    4U
#define FLOCK_CHOUGH 8U
#define FLOCK_EMPTY  0U
#define FLOCK_FULL   15U

int main(void) {
  unsigned flock = FLOCK_EMPTY;

  if (something) flock |= FLOCK_JAY;
  ...
  if (flock & FLOCK_CHOUGH)
    do_something_chough_specific(flock);
Here the constants for each type of corvid are a power of two, and so they have exactly one bit set in their binary representation. Membership in a flock can then be handled through the operators: |= adds a corvid to flock, and & with one of the constants tests whether a particular corvid is present.
Question 1. What is the code doing? What's the purpose of declaring enum corvid?
Question 2. What does "Here the constants for each type of corvid are a power of two, and so they have exactly one bit set in their binary representation." mean?
CodePudding user response:
It becomes more obvious further down in the same text. The defines from your example are "hard-coded" (absolute-value) constants used for bit masking: something & 1U gives the first bit in a chunk of data, & 2U gives the 2nd bit, & 4U gives the 3rd bit, and so on.

The book writes them in decimal notation for some reason, but it is customary to write bit masks in hex notation, because that is what they really are and it is the form easiest to understand. something & 0xFU masks out the 4 lowest bits, for example.
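For illustration, here is a small self-contained sketch (my own, reusing the book's FLOCK_ constants) that adds members with |= and then tests and masks them with &:

#include <stdio.h>

#define FLOCK_MAGPIE 1U
#define FLOCK_RAVEN  2U
#define FLOCK_JAY    4U
#define FLOCK_CHOUGH 8U
#define FLOCK_EMPTY  0U

int main(void) {
    unsigned flock = FLOCK_EMPTY;

    flock |= FLOCK_RAVEN;                 /* |= adds a corvid to the flock  */
    flock |= FLOCK_JAY;

    if (flock & FLOCK_RAVEN)              /* & tests whether one is present */
        puts("raven is in the flock");

    printf("low nibble: 0x%X\n", flock & 0xFU);  /* 0xFU keeps the 4 lowest bits */
    return 0;
}

This prints "raven is in the flock" and "low nibble: 0x6", since raven and jay together set the bit pattern 0110.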
Now further down in the same chapter, the book swaps the hard-coded constants for computed ones based on the enum: #define FLOCK_MAGPIE (1U << magpie) and so on. The advantage of this is that the enum can be modified and the bit masks will then be updated accordingly. Defines using shifts like this are perhaps the most common way to define bit masks, with 1U << n giving bit number n. Note that the first bit in any binary number is called bit 0, so 1U << 0 gives the first bit.
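Written out, the computed form looks like this (a sketch following the book's pattern; the FLOCK_FULL derivation is my own illustration of the idea, not necessarily the book's exact definition):

enum corvid { magpie, raven, jay, chough, corvid_num, };

#define FLOCK_MAGPIE (1U << magpie)            /* 1U << 0 == 0x1 */
#define FLOCK_RAVEN  (1U << raven)             /* 1U << 1 == 0x2 */
#define FLOCK_JAY    (1U << jay)               /* 1U << 2 == 0x4 */
#define FLOCK_CHOUGH (1U << chough)            /* 1U << 3 == 0x8 */
#define FLOCK_FULL   ((1U << corvid_num) - 1)  /* 0xF: all corvid bits set */

Adding a new corvid before corvid_num now updates every mask automatically.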
Also note that all of these expressions are integer constant expressions, meaning they are calculated at compile time and replaced by constants in your executable, so your program doesn't need to compute them at run time.
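One way to see this is that such expressions may appear in contexts that demand compile-time constants, such as C11's _Static_assert. A minimal sketch of my own, assuming the shifted definitions above:

enum corvid { magpie, raven, jay, chough, corvid_num, };
#define FLOCK_MAGPIE (1U << magpie)
#define FLOCK_CHOUGH (1U << chough)

/* Both conditions are checked entirely at compile time (C11). */
_Static_assert(FLOCK_MAGPIE == 0x1U, "magpie is bit 0");
_Static_assert(FLOCK_CHOUGH == 0x8U, "chough is bit 3");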
CodePudding user response:
For question 2:
Since each of the FLOCK_ constants (not the enum values themselves, which are just 0, 1, 2, ...) is a power of 2, there is only one bit set in its binary representation. For example, 2 = 00000010 and 16 = 00010000.
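To make the single-bit property concrete, here is a small sketch (my own illustration, not from the book) that prints the binary form of a value and applies the classic power-of-two test x != 0 && (x & (x - 1)) == 0, which holds exactly when one bit is set:

#include <stdio.h>
#include <stdbool.h>

/* Print the low 8 bits of x, most significant first. */
static void print_bits(unsigned x) {
    for (int i = 7; i >= 0; --i)
        putchar((x >> i) & 1U ? '1' : '0');
    putchar('\n');
}

/* True when exactly one bit is set, i.e. x is a power of two. */
static bool one_bit_set(unsigned x) {
    return x != 0 && (x & (x - 1)) == 0;
}

int main(void) {
    print_bits(2U);    /* prints 00000010 */
    print_bits(16U);   /* prints 00010000 */
    printf("%d %d\n", one_bit_set(2U), one_bit_set(3U));  /* prints 1 0 */
    return 0;
}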