This is part of a pixel blending operation to enhance precision.
typedef unsigned uint;
uint h(uint x) {
x &= 0xff;
return x << 8 | x;
}
uint g(uint x, uint y) {
return h(x) * h(y) >> 24;
}
I looked at the compiler output and found a very interesting line.
g:
movzx edi, dil
movzx esi, sil
imul esi, edi
imul eax, esi, 66049 # <---
shr eax, 24
ret
This can be decompiled as:
uint g_(uint x, uint y) {
return (x & 0xff) * (y & 0xff) * 66049 >> 24;
}
I couldn't understand how multiplying by 66049
could produce the desired result. What is the math behind it?
CodePudding user response:
66049 is 0x10201 in hexadecimal. That's the big clue, since (x << 8 | x) = (x * 0x101) (assuming 0 ≤ x ≤ 0xFF), so (x << 8 | x) * (y << 8 | y) is the same as x * y * 0x101 * 0x101. The 0x101 * 0x101 can be reduced to 0x10201 - there's your magic constant.

So why is x * 0x101 the same as (x << 8 | x)? That should be easy to see with a decimal example. If you multiply any number 0-99 by 101, it duplicates the digits - for example, 32 * 101 = 3232. This is because multiplying by 100 effectively just adds 0s at the end (i.e. 32 * 100 = 3200) and the extra 1 just adds back the original value (32 * 101 = 32 * 100 + 32 * 1 = 3200 + 32 = 3232). The same principle applies in hexadecimal (or in binary).