Why are decimal integer constants signed while their hexadecimal counterparts can be both signed and unsigned?


I'm reading Modern C by Jens Gustedt, and the author points out the following:

  • All values have a type that is statically determined.

At the start of the chapter the author says "C programs primarily reason about values and not about their representation". So are literals, say decimal integer constants, just ways to represent a value, or do they have an intrinsic value?

  • Also stated was "We don't want the result of a computation to depend on the executable, which is platform specific, but ideally only on the program specification itself. An IMPORTANT STEP TO ACHIEVE THIS PLATFORM INDEPENDENCE IS THE CONCEPT OF TYPES". What does the text in uppercase actually mean? How do types help with platform independence, and what is the use of types?

  • Why are decimal integer constants signed while their hexadecimal counterparts can be both signed and unsigned, even though they refer to the same set of values?

I'm really confused at this point. If someone could answer each point and elaborate, I'd be grateful.

CodePudding user response:

Why are decimal integer constants signed while their hexadecimal counterparts can be both signed and unsigned, even though they refer to the same set of values?

This appears to allude to C 2018 6.4.4.1 5, which specifies the type of an integer constant. For a decimal constant with no suffix, the candidate types are all signed: int, long int, and long long int. (The choice from among these depends on the value of the constant.) For a hexadecimal constant with no suffix, the candidate types are a mix of signed and unsigned: int, unsigned int, long int, unsigned long int, long long int, and unsigned long long int.

This specification of candidate lists is not based on the values each notation can represent, since, as you point out, decimal and hexadecimal notations can both represent any integer value. It is simply based on common use: decimal constants were used largely with signed types (e.g., when doing general arithmetic), while hexadecimal constants saw more diverse use (e.g., when working with bits), and presumably the C committee felt these rules suited the existing usage.
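
To make this concrete, here is a minimal sketch (not from the book or the standard; it assumes C11 or later for _Generic, a 32-bit int and a 64-bit long) that asks the compiler which type it actually picked for the same value written in decimal and in hexadecimal:

#include <stdio.h>

/* Report the type chosen for an unsuffixed constant (C11 _Generic). */
#define TYPE_NAME(x) _Generic((x), \
    int: "int", \
    unsigned int: "unsigned int", \
    long: "long", \
    unsigned long: "unsigned long", \
    long long: "long long", \
    unsigned long long: "unsigned long long")

int main(void)
{
    /* Same value, 2^31, written two ways (assuming 32-bit int, 64-bit long). */
    printf("2147483648 -> %s\n", TYPE_NAME(2147483648)); /* "long": unsigned int is not a candidate   */
    printf("0x80000000 -> %s\n", TYPE_NAME(0x80000000)); /* "unsigned int": it is a candidate for hex */
    return 0;
}

On a platform where long is only 32 bits the decimal constant would get type long long instead, but it never becomes unsigned, whereas the hexadecimal spelling of the same value does.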

CodePudding user response:

Why are decimal integer constants signed while their hexadecimal counterparts can be both signed and unsigned, even though they refer to the same set of values?

You are somewhat misunderstanding this part. It's really about the type of the constant, not about signedness as such.

So assuming a 32-bit int (two's complement), the following applies:

0x7fffffff has type int
0x80000000 has type unsigned int

The reason is that 0x80000000 is greater than INT_MAX and therefore can't be an int.

This is important because of the way the usual arithmetic conversions work. In the case of 0x7fffffff the conversion will be towards int, while in the case of 0x80000000 the conversion will be towards unsigned int.

To put it in other words:

A hexadecimal constant 0x.... always has a value greater than or equal to zero. If its value fits into the range of int, its type will be int. If it doesn't fit into int, the next step is to see whether it fits into unsigned int, in which case its type will be unsigned int, and so on through long int, unsigned long int, long long int and unsigned long long int.
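
Here is a minimal sketch of that progression (assuming a 32-bit int and a 64-bit long; with a 32-bit long the last constant would get type long long instead). Comparing against -1 acts as a crude signedness probe, because -1 converted to an unsigned type becomes a huge positive value:

#include <stdio.h>

int main(void)
{
    /* If x has a signed type, x > -1 is 1 for these values;
       if x has an unsigned type, -1 wraps to UINT_MAX and the result is 0. */
    printf("0x7fffffff  > -1 : %d\n", 0x7fffffff  > -1); /* 1: both int, signed comparison        */
    printf("0x80000000  > -1 : %d\n", 0x80000000  > -1); /* 0: unsigned int, -1 wraps to UINT_MAX */
    printf("0x100000000 > -1 : %d\n", 0x100000000 > -1); /* 1: 64-bit signed type, signed again   */
    return 0;
}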

An example (32-bit, two's complement):

The bit pattern 0x80000000, interpreted as an int, represents the value -2147483648.

The bit pattern 0x7fffffff, interpreted as an int, represents the value 2147483647.

So from this it seems that 0x7fffffff > 0x80000000 must be true. Right?

Try:

#include <stdio.h>

int main(void)
{
    if (0x7fffffff > 0x80000000)
    {
        puts("0x7fffffff is bigger than 0x80000000");
    }
    else
    {
        puts("0x7fffffff is less than 0x80000000");
    }

    return 0;
}

So this "should" print: 0x7fffffff is bigger than 0x80000000 Right?

No, it won't. It prints 0x7fffffff is less than 0x80000000. The reason is (again) that 0x80000000 has type unsigned int, so the usual arithmetic conversions convert 0x7fffffff to unsigned int as well and the comparison is done on unsigned values. This shows why the type of integer constants can be important.
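
For contrast, here is a minimal sketch that stores the same bit patterns in int objects first, so the comparison really is done between two ints. Converting 0x80000000 to int is implementation-defined; on common two's-complement platforms it yields -2147483648:

#include <stdio.h>

int main(void)
{
    int a = 0x7fffffff; /* value 2147483647 */
    int b = 0x80000000; /* implementation-defined conversion, typically -2147483648 */

    if (a > b)
    {
        puts("as int objects, 0x7fffffff is bigger than 0x80000000");
    }
    else
    {
        puts("as int objects, 0x7fffffff is less than 0x80000000");
    }
    return 0;
}

On such platforms this prints the "bigger" message, which matches the bit-pattern intuition above; the only thing that changed is the type of the operands.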

Finally try:

#include <stdio.h>

int main(void)
{
    if (((0x7fffffff - 0x40000000) - 0x40000000) > 0)
    {
        puts("((0x7fffffff - 0x40000000) - 0x40000000) is bigger than zero");
    }
    else
    {
        puts("((0x7fffffff - 0x40000000) - 0x40000000) is less than zero");
    }
    return 0;
}

Output:

((0x7fffffff - 0x40000000) - 0x40000000) is less than zero

This shows that the expression ((0x7fffffff - 0x40000000) - 0x40000000) was calculated as int (i.e. had any operand been unsigned, the result could not have been less than zero).
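
To see that parenthetical claim in action, here is a minimal sketch where a u suffix on one operand forces the whole expression to be computed as unsigned int (assuming a 32-bit unsigned int), so the intermediate -1 wraps around to 4294967295:

#include <stdio.h>

int main(void)
{
    /* 0x7fffffffu is unsigned int, so both subtractions are done in
       unsigned arithmetic and the result wraps instead of going negative. */
    if (((0x7fffffffu - 0x40000000) - 0x40000000) > 0)
    {
        puts("computed as unsigned int: the result wraps to 4294967295, which is > 0");
    }
    else
    {
        puts("computed as unsigned int: the result is <= 0");
    }
    return 0;
}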
