Is there a way to set every nth bit in an integer without using a for loop?
For example, if n = 3, then the result should be ...100100100100. This is easy enough with a for loop, but I am curious if this can be done without one.
--
For my particular application, I need to do this with a custom 256-bit integer type that has all the bit operations a built-in integer has. I'm currently using lazily initialized tables (built with for loops), and that is good enough for what I'm doing. This was mostly an exercise in bit-twiddling for me; I couldn't figure out how to do it in a few steps/instructions, and couldn't easily find anything about it online.
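For reference, the loop the question alludes to might look like this. Since no `uint256_t` implementation is given, this sketch uses `uint64_t` as a scaled-down stand-in for the custom 256-bit type (the function name is hypothetical):

```c
#include <stdint.h>

/* Set every nth bit (positions 0, n, 2n, ...) with a plain loop.
   Shown at 64 bits as a stand-in for the custom 256-bit type. */
uint64_t every_nth_bit_loop(unsigned n)
{
    uint64_t t = 0;
    for (unsigned i = 0; i < 64; i += n)
        t |= (uint64_t)1 << i;
    return t;
}
```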
CodePudding user response:
… I need to do this with a custom 256-bit integer type.
Set r to 256 % n.

Set d to ((uint256_t) 1 << n) - 1. The binary representation of d is then a string of n 1 bits.

Set t to UINT256_MAX << r >> r. This removes the top r bits from UINT256_MAX, which is of course 2^256 − 1. This leaves t as a string of 256−r 1 bits, and 256−r is some multiple of n, say k*n.

Set t to t/d. A string of k*n 1 bits divided by a string of n 1 bits produces a quotient that is 000…0001 repeated k times, where each 000…0001 is n−1 0 bits followed by one 1 bit.

Now t is the desired bit pattern except that the highest desired bit may be missing when r is not zero. To add this bit, if needed, OR t with t << n.

Now t is the desired value.
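The steps above can be sketched in C. Again, lacking the custom type, this uses `uint64_t` at width 64 as a stand-in (function name hypothetical; assumes 1 <= n < 64 so the shifts are defined):

```c
#include <stdint.h>

/* Division trick at 64 bits (stand-in for uint256_t). Assumes 1 <= n < 64. */
uint64_t every_nth_bit_div(unsigned n)
{
    unsigned r = 64 % n;                 /* leftover bits above the last full group */
    uint64_t d = ((uint64_t)1 << n) - 1; /* a string of n 1 bits */
    uint64_t t = UINT64_MAX << r >> r;   /* a string of 64 - r 1 bits, i.e. k*n 1 bits */
    t /= d;                              /* 000...0001 repeated k times */
    t |= t << n;                         /* restore the top bit when r != 0; harmless when r == 0 */
    return t;
}
```

For n = 3 this gives the pattern ...1001001001 across all 64 bits; the final OR is what sets bit 63, which the division alone misses because 64 % 3 != 0.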
Alternatively:
Set t to 1.

OR t with t << n.

OR t with t << 2*n.

OR t with t << 4*n.

OR t with t << 8*n.

OR t with t << 16*n.

OR t with t << 32*n.

OR t with t << 64*n.

OR t with t << 128*n.
Those shifts must be defined (having them shift by zero, or produce zero, would suffice) or suppressed when the shift amount equals or exceeds the integer width, 256 bits.
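At 64 bits, where a shift count of 64 or more is undefined behavior in standard C, the oversized shifts can be suppressed with guards, mirroring the note above (stand-in sketch, hypothetical name; assumes 1 <= n < 64):

```c
#include <stdint.h>

/* Shift-and-OR doubling at 64 bits (stand-in for the 256-bit version).
   Each step doubles the number of set bits; guards skip any shift whose
   amount would reach the 64-bit width, since that is undefined in C. */
uint64_t every_nth_bit_dbl(unsigned n)
{
    uint64_t t = 1;
    if (n      < 64) t |= t << n;
    if (2  * n < 64) t |= t << (2  * n);
    if (4  * n < 64) t |= t << (4  * n);
    if (8  * n < 64) t |= t << (8  * n);
    if (16 * n < 64) t |= t << (16 * n);
    if (32 * n < 64) t |= t << (32 * n);
    return t;
}
```

The 256-bit version would continue with the 64*n and 128*n steps; a custom type whose shift operator is defined for any count could drop the guards entirely.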