Hardcoded data storage in memory and usage in bitwise operations (C)

I am reworking a C project for ARM and DSP code (I'm relatively new to ARM, DSP and C... talk about a disaster... lol) and found a piece of code where the developers implemented bitwise operations. I'm uncertain, but it seems to me that they might not have achieved what they set out to achieve.

They created a uint32 for statuses; I assume the intent was to use only 32 bits for 32 bool values. However, they then went and created 32 constants, each holding the bit value of the flag they are trying to represent. Here is a shortened example:

// The status variable
uint32 STATUSES;
    
// The hard-coded values
const uint32 ACTIVE = 0x00000001;
const uint32 OPEN = 0x00000002;
const uint32 RUNNING = 0x00000003;
// etc.

They then proceed to check the status of the STATUSES variable by doing the following

if (STATUSES & ACTIVE)
{
    //  do something
}

I understand the bitwise part. If the bit is set in STATUSES, the expression evaluates to a non-zero value (true), because bitwise AND produces a 1 in a given bit position only when both operands have a 1 there.
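
For reference, here is how I understand the flags would be set and cleared elsewhere in the code (this is just my own sketch using the same names, not their actual code):

// Set a flag: switch the ACTIVE bit on without touching the others
STATUSES |= ACTIVE;

// Clear a flag: switch the ACTIVE bit off
STATUSES &= ~ACTIVE;

// Test a flag, as in the check above
if (STATUSES & ACTIVE)
{
    //  do something
}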

My assumption is that the purpose of using bitwise operations is to reduce the memory footprint of the program. However, doesn't the fact that they created constant variables to store the comparison values negate the whole point of doing bitwise in the first place?

By creating a status variable and then creating constants, aren't they using more memory than if they had simply used 32 bools? (bool = 1 byte, int = 2 to 4 bytes; I'm not sure what they are on this system.)
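
To make the comparison concrete, this is roughly the trade-off I have in mind (the sizes are my assumptions, not measured on this target):

#include <stdbool.h>
#include <stdint.h>

// Option A: 32 separate flags, typically one byte each
bool flags[32];        // usually 32 bytes of RAM

// Option B: one packed status word
uint32_t statuses;     // 4 bytes of RAM

// ...plus the 32 mask constants, 4 bytes each IF they actually
// occupy storage somewhere: 32 * 4 = 128 bytes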

Also, if they wanted to use bitwise operations and truly save space, would it not have been better to do something like the following:

// The status variable
uint32 STATUSES;
    
// Constants to compare against
uint32 ACTIVE() { return 0x00000001; }
uint32 OPEN() { return 0x00000002; }
uint32 RUNNING() { return 0x00000003; }
// etc.

I know in OOP-based programming memory is allocated as follows:

  • local variable (primitives) -> stack
  • new keyword (objects) -> heap
  • hard coded -> ??? not sure

The code is for a scientific instrument so we need to go as fast as possible while using as little memory as possible. Is their method faster than the method-calling option I proposed? Which memory is used to store the hard-coded comparison method I wrote?

Any insights into why they might have done their implementation the way they did would be greatly appreciated.

p.s. None of the people who were involved in creating the software are around any more.

CodePudding user response:

"which I assume the intent was to only use 32 bits for 32 bool values"

No, it rather looks like the intent was to create various bit masks. RUNNING = 0x00000003 cannot be regarded as a bool but rather as a bit mask consisting of ACTIVE | OPEN.

Notably, it's senseless to create your own local (non-)standard uint32 type when the C standard already defines uint32_t in stdint.h. Inventing your own local (non-)standard is nothing but bad practice; all you achieve is reduced readability and portability. (If your code base pre-dates 1999, that would also explain why no standard types were used.)
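
If this were rewritten today, a conventional way to express the same masks would be something along these lines (just a sketch, keeping your names):

#include <stdint.h>

// Single-bit flags written as shifts, so the bit position is obvious
#define ACTIVE  (1u << 0)          /* 0x00000001 */
#define OPEN    (1u << 1)          /* 0x00000002 */

// Composite masks built from the flags they combine
#define RUNNING (ACTIVE | OPEN)    /* 0x00000003 */

uint32_t STATUSES;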

"however doesn't the fact that they created constant variables to store the comparison values negate the whole point of doing bitwise in the first place?"

In embedded systems, numeric constants will either get stored together with the program code or in a special .rodata segment; in either case they consume flash, not RAM. If the intent is to execute from RAM, then indeed there's likely no memory saved. Also, depending on the hardware, fetching values from flash might be more expensive than fetching them from RAM.

You might find this post about where C variables end up in microcontrollers useful: What resides in the different memory types of a microcontroller? I have limited experience with DSPs too, but they generally have much more in common with microcontrollers than with a PC. You might also want to locate the "map file" generated by the linker, to see in detail where everything allocated ended up.

Also, if your DSP works best with 32-bit types, then allocating everything in 32-bit chunks is a speed-over-memory optimization.

uint32 ACTIVE() { return 0x00000001; } etc. is nonsense; it will just get optimized back to 0x00000001, as the compiler will inline the function.

Please note that typing out hex literals like 0x00000001 without a u suffix is dangerous practice. The leading zeros do not make them 32 bits wide; they have type (signed) int up to 0x7FFFFFFF, and from 0x80000000 upwards the type suddenly becomes unsigned int (assuming 32-bit int).
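
Concretely, assuming a target with 32-bit int (the names below are just made up for the example):

#include <stdint.h>

const uint32_t ACTIVE     = 0x00000001;   // literal has type (signed) int
const uint32_t ACTIVE_OK  = 0x00000001u;  // literal has type unsigned int

// The leading zeros change nothing. The value below no longer fits in a
// 32-bit int, so the literal's type silently becomes unsigned int:
const uint32_t TOP_BIT    = 0x80000000;   // unsigned int, no diagnostic
const uint32_t TOP_BIT_OK = 0x80000000u;  // explicit and unsurprising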

CodePudding user response:

When you use a global const value (initialized in the declaration and given internal linkage, e.g. declared static), the compiler will put the value directly into the code instead of referencing its storage location.

As long as you don't take the address of those variables, no storage will be allocated for them at all.
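
For example, a definition like the following (a sketch; static gives it internal linkage) will typically compile down to a test against an immediate value, with no separate load from memory:

#include <stdint.h>

static const uint32_t ACTIVE = 0x00000001u;  // internal linkage, address never taken

uint32_t STATUSES;

int is_active(void)
{
    // Usually becomes a single AND/TST against the immediate 1
    return (STATUSES & ACTIVE) != 0;
}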
