I found a nasty bug in our C++ iOS application, which I suspect is caused by a compiler bug in ARM-based Apple Clang.
I was able to reproduce it in an MRE (minimal reproducible example) on a Mac M1 machine.
#include <cstdio>

int main(int argc, const char** argv)
{
    int total = 0;
    for (double a = 1000; a < 10000; a *= 1.1)
    {
        unsigned char d = a / 0.1;
        total = d;
    }
    printf("Total: %d\n", total);
}
Compiled without optimization, the test program always produces the same output:
% ./a.out
Total: 3237
% ./a.out
Total: 3237
% ./a.out
Total: 3237
However, when compiled with optimization, the resulting number appears to be random:
% clang -O3 test.cpp
% ./a.out
Total: 74841976
% ./a.out
Total: 71057272
% ./a.out
Total: 69828472
The Apple Clang version is 13.0:
% clang --version
Apple clang version 13.0.0 (clang-1300.0.29.30)
Target: arm64-apple-darwin21.3.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
I believe the program does not have undefined behavior, so my questions are:
- Is that really a compiler bug?
- Is the behavior also wrong on original (not Apple) Clang?
- Should I file a bug report?
CodePudding user response:
Your code does have undefined behavior. When you write
unsigned char d = a / 0.1;
you are performing a floating-point-to-integer conversion, which means [conv.fpint]/1 applies. It states:
A prvalue of a floating-point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
(emphasis mine)
So once a / 0.1 exceeds the maximum value of an unsigned char, you have undefined behavior. That happens on the very first iteration: with a = 1000, a / 0.1 is about 10000, far larger than UCHAR_MAX (255).
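If you need the computation to be well defined, one option (a sketch of a possible fix, not necessarily what the original code intended) is to range-check the value before converting it:

#include <cstdio>
#include <climits>

int main()
{
    int total = 0;
    for (double a = 1000; a < 10000; a *= 1.1)
    {
        double scaled = a / 0.1;
        // Convert only when the truncated value fits in an unsigned char;
        // otherwise fall back to a deliberate, well-defined value.
        unsigned char d = (scaled >= 0.0 && scaled <= UCHAR_MAX)
                              ? static_cast<unsigned char>(scaled)
                              : 0;
        total = d;
    }
    printf("Total: %d\n", total);
}

In this particular program every iteration is out of range (a / 0.1 starts at about 10000), so the result is simply 0 on every pass; the point is only that the conversion itself no longer has undefined behavior, so the output is the same at every optimization level.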
CodePudding user response:
It's an often-forgotten rule, but the behaviour of converting a floating-point value to an integral type that cannot represent it is undefined.
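As an aside, UndefinedBehaviorSanitizer can catch this class of bug at run time. Assuming a reasonably recent Clang, building with the float-cast-overflow check enabled should make the program report the out-of-range conversion when it runs:

% clang -g -fsanitize=float-cast-overflow test.cpp
% ./a.out

Running ./a.out should then print a runtime-error diagnostic pointing at the offending conversion (the exact wording varies by Clang version). The broader -fsanitize=undefined flag enables this check along with the other UB checks.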