Why is there a difference between the number of bytes given for a constant and a variable?

Time:04-15

The size of a constant float is 8 bytes while a variable float is just 4 bytes.

#include <stdio.h>
#define x 3.0
int main(){
    printf("%zu", sizeof(x)); /* %zu is the correct specifier for size_t */
    return 0;
}

This also applies to a constant char, which gives 4 bytes, while a char variable gives just 1 byte.

CodePudding user response:

I think this question has already been answered in a couple of previous posts. The basic idea is:

A) Consider this program in C:

#include <stdio.h>
#define x 3.0      /* without suffix, it'll treat it as a double */
#define y 3.0f     /* suffix 'f' tells compiler we want it as a float */

int main() {
    printf("%zu\n", sizeof(x)); /* outputs 8 */
    printf("%zu\n", sizeof(y)); /* outputs 4 */
    return 0;
}

Basically, double has more precision than float, so it is the preferred type when there is ambiguity. So if you declare a constant without the 'f' suffix, the compiler treats it as a double.

B) Look into this one now:

#include <stdio.h>
#define x 'a'

int main() {
    char ch = 'b';
    printf("%zu\n", sizeof(x));  /* outputs 4 */
    printf("%zu\n", sizeof(ch)); /* outputs 1: the compiler knows the exact
                                    type once we declare and initialize the
                                    variable ch */
    return 0;
}

That constant ('a') stands for its ASCII value (97), and in C a character constant is not a char at all: it has type int, which is why sizeof reports the size of an int. Please refer to this link for more details: https://stackoverflow.com/questions/433895/why-are-c-character-literals-ints-instead-of-chars#:~:text=When the ANSI committee first,of achieving the same thing.
