#include <stdio.h>
int main(void) {
int x = 5;
int y = (int)&x; /* cast needed: converting a pointer to int is implementation-defined */
printf("Signed Value of Y: %d \n", y);
printf("Unsigned Value of Y: %u\n", y);
return 0;
}
Since y is of int type, using %d gives a possibly-signed output, whereas %u gives an unsigned output. But y is of int type, so why does %u give unsigned output? Is there an implicit type conversion?
CodePudding user response:
Effectively, a printf call is two separate things:
- All the arguments are prepared to send to the function.
- The function interprets the format string and its other arguments.
In any function call, the arguments are prepared according to rules involving the argument types and the function declaration. They do not depend on the values of the arguments, including the contents of any string passed as an argument, and this is true of printf too.
In a function call, the rules are largely (omitting some details):
- If the argument corresponds to a declared parameter type, it is converted to that type.
- Otherwise (if the argument corresponds to the ... part of a function declaration, or the called function is declared without specifying parameter types), some default promotions are applied. For integers, these are the integer promotions, which largely (omitting some details) convert types narrower than int to int. For floating-point, float is promoted to double.
printf is declared as int printf(const char * restrict format, ...); so all its arguments other than the format string correspond to the ... part.
Inside printf, the function examines its format string and attempts to perform the directives given in the format string. For example, if a directive is %g, printf expects a double argument and takes bits from the place it expects a double argument to be passed. Then it interprets those bits as a double, constructs a string according to the directive, and writes the string to standard output.
For a %d or %u directive, printf expects an int or an unsigned int argument, respectively. In either case, it takes bits from the place it expects an int or an unsigned int argument to be passed. In all C implementations I am aware of, an int and an unsigned int argument are passed in the same place. So, if you pass an int argument but use %u, printf will get the bits of an int but will treat them as if they were the bits of an unsigned int. No actual conversion has been performed; printf is merely interpreting the bits differently.
The C standard does not define the behavior when you do this, and a C implementation would be conforming to the standard if it crashed when you did this or if it processed the bits differently. You should avoid it.
CodePudding user response:
"Re: But y is of int type, so why does %u give unsigned output?"
From C11:
If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
where,
undefined — The behavior for something incorrect, on which the standard does not impose any requirements. Anything is allowed to happen, from nothing, to a warning message to program termination, to CPU meltdown, to launching nuclear missiles (assuming you have the correct hardware option installed).
— Expert C Programming.
CodePudding user response:
Is there an implicit type conversion?
Sort of. A function such as printf that accepts a variable number of arguments does not automatically know the number of variable arguments it actually receives on any call, or their types. Conversion specifications such as %d and %u collectively tell printf() how many variable arguments to expect, and individually they tell printf what type to expect for each argument. printf() will try to interpret the argument list according to these specifications.
The C language specification explicitly declines to say what happens when the types of printf arguments do not correspond properly to the conversion specifications in the accompanying format string. In practice, however, some pairs of data types have representations similar enough to each other that printf()'s attempt to interpret data of one type as if it were the other type is likely (but not guaranteed) to give the appearance of an implicit conversion from one type to the other. Corresponding signed and unsigned integer types are typically such pairs.
You should not rely on such apparent conversions actually happening. Instead, properly match argument types with conversion specifications. Correct mismatches by choosing a different conversion specification or performing an appropriate explicit type conversion (a typecast) on the argument.