Both GCC and Clang support an implementation-defined function called __builtin_parity
that helps determine the parity of a number.
According to the GCC documentation:
Built-in Function: int __builtin_parity (unsigned int x)
Returns the parity of x, i.e. the number of 1-bits in x modulo 2.
This means it returns 0 if the number of 1-bits is even, and 1 if it is odd.
Clang behaves the same way, as I verified on Compiler Explorer.
However, the actual parity flag is set when the number of set bits is even.
Why is it so?
CodePudding user response:
They're just different arbitrary choices.
First note that "the actual parity flag" is a hardware feature only provided on some architectures; of architectures currently in mainstream use, I think x86 is the only one with such a flag. So the very existence, let alone the exact semantics, of such a flag, are not in any way a universal standard.
I think GCC's choice is more logical: 0 and 1 should correspond to even and odd respectively, because 0 is an even number and 1 is odd. I don't know why x86 and its predecessors chose to do the opposite. You would probably have to travel back in time and ask the designers.
Anyway, the actual value of the 8086 parity flag is not very important; programmers would normally test it using the JPE and JPO assembler mnemonics, which let you specify "jump if parity even" or "jump if parity odd" without having to remember which one corresponded to a 0 or 1 bit in the flag. The value would only become relevant if you wanted to inspect the bit in the FLAGS register directly via PUSHF or LAHF, which would be useful only in very obscure circumstances.
I looked at the history a little. The Intel 8086 copied its flags from the 8080, which behaves the same way. Its predecessor, the 8008, also had a parity "flip-flop", which it seems was set on even parity, but it's a little unclear because you could only jump conditionally on the state of the flip-flop, not actually read it. The 8008 is said to have been derived from the Datapoint 2200, which actually documents its parity flip-flop the opposite way: set for odd, reset for even. But the 80xx semantics could have been some internal implementation detail without any deep significance, like the parity circuitry just happened to produce the result that way, and they didn't bother to add another NOT gate to invert it. Any further investigation is probably more on topic for Retrocomputing.SE.
The x86 parity flag is only marginally useful for GCC's __builtin_parity() anyway, because it only reflects one byte. It can be used for a larger value by XORing its bytes together, and GCC/Clang will do this if there's no other option. They handle the reversed sense of the flag by using setnp instead of setp at the end (a human programmer would just have written setpo and not had to think about the set/clear value of the flag).
However, nearly all x86 CPUs from the last 10 years support the popcnt instruction, and GCC/Clang will use it instead if it's available (and then just extract the low bit).