I have the following code that does a really basic conversion from a 16bpp image to a 1bpp image. The code works as expected until I enable compiler optimizations, at which point I just get a black image.
#define RSCALE 5014709
#define GSCALE 9848225
#define BSCALE 1912602
uint16_t _convertBufferTo1bit(uint8_t* buffer, uint16_t size)
{
    uint8_t* dst_ptr = buffer;
    uint8_t* end_ptr = buffer + size;
    uint16_t pos = 0;
    uint8_t r, g, b, i;
    uint32_t lum;
    while(buffer < end_ptr)
    {
        for(i = 8; i > 0; i--)
        {
            r = (*buffer & 0xF8);
            g = ((*buffer & 0x07) << 5);
            buffer += 1;
            g |= (*buffer & 0x03);
            b = ((*buffer & 0x1F) << 3);
            buffer += 1;
            lum = ((RSCALE * r) + (GSCALE * g) + (BSCALE * b));
            if(lum > 0x7FFFFFFF)
            {
                //White
                dst_ptr[pos] |= (1 << (i-1));
            }
            else
            {
                //black
                dst_ptr[pos] &= ~(1 << (i-1));
            }
        }
        pos++;
    }
    return pos;
}
When looking at the generated assembly, I can see that the if(lum > 0x7FFFFFFF) statement and all associated calculations have been removed by the compiler. Can someone help me understand why?
-O0 -std=c++17 -Wall -Wextra
https://godbolt.org/z/GhPezzh33
-O1 -std=c++17 -Wall -Wextra
https://godbolt.org/z/bn1M4319h
CodePudding user response:
In this code:

lum = ((RSCALE * r) + (GSCALE * g) + (BSCALE * b));
if(lum > 0x7FFFFFFF)

RSCALE, GSCALE, and BSCALE are 5014709, 9848225, and 1912602, respectively. Assuming int is 32 bits in the C implementation being used, these are all int constants.

r, g, and b are all of type uint8_t, so they are promoted to int in the multiplications. Then ((RSCALE * r) + (GSCALE * g) + (BSCALE * b)) is a calculation entirely with int operands, producing an int result. So the compiler can see that lum is assigned the value of an int result, and it is entitled to assume that result is in the range INT_MIN to INT_MAX. Further, it can see that all operands are nonnegative, so negative results are not possible, reducing the possible range to 0 to INT_MAX. This excludes the possibility that assigning a negative value to a uint32_t will cause wrapping to a high value. So the compiler may assume lum > 0x7FFFFFFF is never true.
The calculation may overflow int, and then the behavior is undefined, and the compiler is still allowed to use the assumption.
To correct this, change at least one operand of each multiplication to unsigned.
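One minimal way to apply that fix is to give the scale constants an unsigned suffix; each multiplication then has an unsigned operand, so the whole expression is computed in unsigned arithmetic, where wraparound is well defined. A sketch (the is_white helper is mine, added just to isolate the threshold test):

```c
#include <stdint.h>

/* Unsigned-suffixed scale constants: r, g, and b now convert to unsigned
   in the multiplications, and any overflow wraps modulo 2^32 instead of
   being undefined behavior the compiler may assume away. */
#define RSCALE 5014709u
#define GSCALE 9848225u
#define BSCALE 1912602u

/* Hypothetical helper: the luminance threshold test from the question,
   now performed entirely in unsigned arithmetic. */
static int is_white(uint8_t r, uint8_t g, uint8_t b)
{
    uint32_t lum = RSCALE * r + GSCALE * g + BSCALE * b;
    return lum > 0x7FFFFFFFu;
}
```

With this change the compiler can no longer assume the comparison is always false, so the branch survives optimization and the threshold behaves as intended.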