I wrote some C code to play around with float values in memory, but I'm getting unexpected output from printf, compiling with "gcc (GCC) 12.1.1 20220730" and the -std=c11 option.
I have no idea why it behaves like this and would like to know what's happening, whether I'm doing something wrong, and how to print a float value as hex with printf (if that's possible) without converting it to another type first.
Here is the code used and the output from several runs.
Code:
#include <stdio.h>

int main()
{
    float f3 = 1.1;
    float f4 = 1.0;
    unsigned char *t1 = (unsigned char *)&f3;
    unsigned *t2 = (unsigned *)&f3;

    printf("P1: %x\n", t2[0]);
    printf("P2: %x %x %x %x\n", t1[0], t1[1], t1[2], t1[3]);
    printf("P3: %p\n", &f3);
    printf("P4: %lx, %lx, %llx\n", &f3, f3, f4);
    printf("T1: %f, %f, %lx, %lx\n", f3, f4, f4, f3);
    printf("T2: %x, %lx\n", f4, f3);
    printf("T3: %x, %lx\n", f3, f4);
    return 0;
}
The main problem seems to be with printing a float as hex:
printf("%x\n", f3);
Output 1:
P1: 3f8ccccd // as expected
P2: cd cc 8c 3f // as expected
P3: 0x7ffc667d4d40 // as expected
P4: 7ffc667d4d40, 3ff19999a0000000, 0 // pointer value as expected, but the second and third aren't; they stay the same across runs
T1: 1.100000, 1.000000, 5556db09b2a0, 0 // first two values as expected, but the last two aren't; they change between runs
T2: db09b2a0, 0 // this value changes on each run
T3: db09b2a0, 0 // same as above, but shouldn't it be different? Also changes on each run.
Output 2:
P1: 3f8ccccd
P2: cd cc 8c 3f
P3: 0x7ffef87ebb00
P4: 7ffef87ebb00, 3ff19999a0000000, 0
T1: 1.100000, 1.000000, 55fb6a1962a0, 0
T2: 6a1962a0, 0
T3: 6a1962a0, 0
Output 3:
P1: 3f8ccccd
P2: cd cc 8c 3f
P3: 0x7ffdb2026640
P4: 7ffdb2026640, 3ff19999a0000000, 0
T1: 1.100000, 1.000000, 564dd210c2a0, 0
T2: d210c2a0, 0
T3: d210c2a0, 0
Answer:
The main problem seems to be with printing a float as hex:
printf("%x\n", f3);
Yes, because the %x format specifier expects an unsigned int as an argument, but you're passing in a float, which is being promoted to a double.
Using the wrong format specifier triggers undefined behavior, which in this case manifests as strange output. As for what's happening under the hood: floating point arguments are typically passed to a variadic function in floating point registers, while integer arguments are passed in integer registers or on the stack, so printf looks for an unsigned int in a place the float was never stored.
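If you want to pass printf an argument that actually is an unsigned int holding the float's bits, one well-defined option is to type-pun through a union; a minimal sketch, assuming unsigned int and float are both 32 bits (as on your platform):

#include <stdio.h>

int main()
{
    float f3 = 1.1f;

    /* Reading a different union member than the one last written
       reinterprets the stored bytes; C permits this. */
    union { float f; unsigned u; } pun = { .f = f3 };

    printf("%x\n", pun.u); /* prints 3f8ccccd */
    return 0;
}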
This is also invalid:
printf("P1: %x\n", t2[0]);
as it causes a strict aliasing violation. This basically means you can't access an object of one type through a pointer to another type, unless the accessing type is char or unsigned char.
The proper way to print the byte representation of a floating point type is to have an unsigned char * point to the first byte, then loop through the bytes and print each one.
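A minimal sketch of that approach (the byte order shown is for a little-endian machine):

#include <stdio.h>

int main()
{
    float f3 = 1.1f;
    unsigned char *p = (unsigned char *)&f3;

    /* Character-type accesses are exempt from strict aliasing,
       so reading the bytes this way is well defined. */
    for (size_t i = 0; i < sizeof f3; i++)
        printf("%02x ", p[i]);
    printf("\n"); /* prints: cd cc 8c 3f */

    return 0;
}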
Answer:
In all the cases of "unexpected output", you are passing a parameter whose type does not match the format specifier. You are telling printf() to expect one thing, but passing another. That is always undefined behaviour, and it is largely pointless to speculate about how a specific output came about. Moreover, passing a float to printf() promotes it to a double, so the representation you are trying to inspect will have changed from that of the original float variable.
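To see what that promotion does to the representation, copy the bytes into fixed-width integers before printing; a sketch, assuming the usual 32-bit float and 64-bit double:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main()
{
    float f = 1.1f;
    double d = f; /* the same conversion a variadic call performs */
    uint32_t u32;
    uint64_t u64;

    memcpy(&u32, &f, sizeof u32); /* bytes of the original float */
    memcpy(&u64, &d, sizeof u64); /* bytes of the promoted double */

    printf("float:  %" PRIx32 "\n", u32); /* 3f8ccccd */
    printf("double: %" PRIx64 "\n", u64); /* 3ff19999a0000000 */
    return 0;
}

The double pattern is the same 3ff19999a0000000 that shows up in your P4 line; how it surfaced through the undefined behaviour is anyone's guess, but the value itself is simply 1.1f widened to double.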
Generally, to inspect the bits that represent a floating point value, you need to take the address of the float, cast that address to an integer pointer of the same width, then dereference it.
For example:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main()
{
    float f1 = 1.0;
    float f2 = 1.1;
    void* p1 = &f1;
    void* p2 = &f2;

    printf( "f1: %f @%p = %" PRIx32 " (%02x %02x %02x %02x)\n",
            f1, p1, *(uint32_t*)p1,
            ((uint8_t*)p1)[0],
            ((uint8_t*)p1)[1],
            ((uint8_t*)p1)[2],
            ((uint8_t*)p1)[3] ) ;

    printf( "f2: %f @%p = %" PRIx32 " (%02x %02x %02x %02x)\n",
            f2, p2, *(uint32_t*)p2,
            ((uint8_t*)p2)[0],
            ((uint8_t*)p2)[1],
            ((uint8_t*)p2)[2],
            ((uint8_t*)p2)[3] ) ;

    return 0;
}
outputs:
f1: 1.000000 @0x7ffc794e1df0 = 3f800000 (00 00 80 3f)
f2: 1.100000 @0x7ffc794e1df4 = 3f8ccccd (cd cc 8c 3f)
(clearly the addresses will vary).
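Note that the *(uint32_t*)p1 read is the strict aliasing violation the other answer mentions; if you want to avoid it, memcpy into the integer retrieves the same bytes without dereferencing an incompatible pointer. A sketch:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main()
{
    float f2 = 1.1f;
    uint32_t bits;

    /* memcpy may copy any object's bytes, so no float object is
       ever accessed through a uint32_t lvalue. */
    memcpy(&bits, &f2, sizeof bits);

    printf("f2 = %" PRIx32 "\n", bits); /* prints: f2 = 3f8ccccd */
    return 0;
}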
Of course, if all you intend is to inspect the location, internal representation and byte order of specific variables, it is far simpler and less error-prone to observe them in a symbolic debugger.