Macro defines for bit masks best practices


Lately I have been using these types of defines in my code:

#define FLAG_1 1U
#define FLAG_2 (1U << 1)
#define FLAG_3 (1U << 2)
...

I do this to improve readability instead of writing hex or decimal values directly, and I was wondering whether I am causing my program to actually compute bitwise operations on each read of FLAG_i, or if this is something a modern C compiler (gcc 9.4.0) takes care of and just precomputes the macro value when there are no variables involved.
Any suggestion, personal opinion or insight on the matter is appreciated.

CodePudding user response:

I was wondering whether I am causing my program to actually compute bitwise operations on each read of FLAG_i

No, these are so-called integer constant expressions. They are always evaluated at compile time. If you check the generated machine code you'll notice that they have been replaced with a fixed constant, in this case 0x01, 0x02 or 0x04.

Writing bit masks as named constants through macros like you do is fine and common practice.
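For example, here is a minimal sketch (names are illustrative) you can use to convince yourself. Compiling it with gcc -O0 -S and looking at the assembly typically shows only the immediate values 5 and 2; the shift and OR operations are already gone:

#include <stdio.h>

#define FLAG_1 1U
#define FLAG_2 (1U << 1)
#define FLAG_3 (1U << 2)

int main(void)
{
    unsigned int status = FLAG_1 | FLAG_3;  /* folded to the constant 0x5 at compile time */

    if (status & FLAG_2)                    /* FLAG_2 appears as the immediate 0x2 */
        printf("FLAG_2 is set\n");
    else
        printf("FLAG_2 is not set\n");

    return 0;
}

With optimizations enabled (-O1 and above) even the test itself is folded away, since the whole expression is known at compile time.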

CodePudding user response:

Writing macros is fine. Here is another approach with some advantages:

typedef enum {
    FLAG_1 = 1U,
    FLAG_2 = (1U << 1),
    FLAG_3 = (1U << 2)
} MyFlag;

This declares constants that are likewise evaluated at compile time, and it groups the related flags under a single name.

It also creates a new type, MyFlag, which lets the compiler check the types of variables and function parameters, e.g.:

#include <stdio.h>

void print_flag(MyFlag f)
{
    switch (f) {
        case FLAG_1: printf("FLAG_1\n"); break;
        case FLAG_2: printf("FLAG_2\n"); break;
        case FLAG_3: printf("FLAG_3\n"); break;
    }
}

print_flag(FLAG_2);

It documents that the function expects a flag value rather than an arbitrary integer. Moreover, in the switch the compiler will complain (with warnings enabled, e.g. -Wall, which turns on -Wswitch in GCC) if an enumerator is not handled and there is no default case.
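As a small illustration of that last point, here is a sketch (the exact diagnostic wording depends on the compiler version) where FLAG_3 is intentionally left out of the switch; building it with gcc -Wall produces a warning that the enumeration value is not handled:

#include <stdio.h>

typedef enum {
    FLAG_1 = 1U,
    FLAG_2 = (1U << 1),
    FLAG_3 = (1U << 2)
} MyFlag;

void print_flag(MyFlag f)
{
    switch (f) {                 /* no default case on purpose */
        case FLAG_1: printf("FLAG_1\n"); break;
        case FLAG_2: printf("FLAG_2\n"); break;
        /* FLAG_3 missing: gcc -Wall warns about the unhandled enumerator */
    }
}

int main(void)
{
    print_flag(FLAG_2);
    return 0;
}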

Tags: c