I want to write a program that needs 40-bit integers. The machine where I'm writing it has 64-bit integers, but I'd like to check inside the program whether 64-bit integers are available.
How could I do that efficiently?
On a 64-bit machine, this seems to work:
~0 >= 2**63
Is that safe (read: "portable") on different architectures?
Thinking about the problem, I also wondered whether the Perl compiler or interpreter could make these results questionable in a future version of Perl:
DB<2> sub bittest { use integer; return ((1 << $_[0]) >> $_[0]) != 0; }
DB<3> x bittest 31
0 1
DB<4> x bittest 63
0 1
DB<5> x bittest 64
0 ''
CodePudding user response:
~0 is 18446744073709551615 on 64-bit systems, so
~0 == 18446744073709551615
~0 == 0xFFFFFFFFFFFFFFFF # (16 F's)
are efficient tests to see if you are on a 64-bit system.
I've only ever used Perl on 32-bit and 64-bit systems, but in case there are ever 39- or 41-bit systems, to use 40-bit integers you just need ~0 to be at least 2**40 - 1, or:
~0 >= 0xFFFFFFFFFF # (10 F's)
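For example, a minimal run-time guard along these lines might look like this (just a sketch; the 2**40 - 1 bound and the messages are illustrative):

use strict;
use warnings;

# ~0 is the largest native unsigned integer (UV), so it must reach 2**40 - 1
# for 40-bit values to fit in a native integer.
if ( ~0 >= 2**40 - 1 ) {
    print "native integers hold at least 40 bits\n";
}
else {
    die "need 40-bit integers, but ~0 is only ", ~0, "\n";
}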
CodePudding user response:
If you need integers in the computer sense (IV/UV), you need them to be at least 40 bits in size.
~0 >= 2**40-1
or
use Config qw( %Config );
$Config{uvsize} >= 8
uvsize refers to the size in bytes of such integers.
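A start-up guard based on uvsize might look like this (a sketch; the error message is just illustrative):

use strict;
use warnings;
use Config qw( %Config );

# Refuse to run on builds whose native integers (IV/UV) are narrower than 8 bytes.
die "This program needs 64-bit native integers (uvsize is $Config{uvsize})\n"
    unless $Config{uvsize} >= 8;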
You need this if you use any of the following:
- the numbers as operands to bitwise operators
- the numbers as operands to ../...
- pack 'Q'/unpack 'Q'
- hex literals larger than 0xFFFF_FFFF
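For instance, pack 'Q' from the list above is only available on builds with 64-bit integer support, so (as a sketch) it can be guarded the same way:

use strict;
use warnings;
use Config qw( %Config );

if ( $Config{uvsize} >= 8 ) {
    # 'Q' packs an unsigned 64-bit value; it only exists on 64-bit builds.
    my $bytes = pack 'Q', 2**40;
    printf "2**40 packed into %d bytes\n", length $bytes;
}
else {
    warn "pack 'Q' is unavailable without 64-bit integer support\n";
}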
If you need integers in the mathematical sense, floats (NV) with at least 40 bits of precision would also do.
use Config qw( %Config );
$Config{uvsize} >= 8 || eval($Config{nv_overflows_integers_at}) >= 2**40
Every integer up to and including eval($Config{nv_overflows_integers_at}) can be represented exactly as an NV.
Note that every build of Perl should support at least 53 bits of precision.
For example, eval($Config{nv_overflows_integers_at}) evaluates to 9007199254740992 on my builds, which corresponds to 53 bits as per log(9007199254740992)/log(2).
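Putting both checks together, a program that only needs the values in the mathematical sense could guard itself like this (a sketch based on the expressions above; the error message is illustrative):

use strict;
use warnings;
use Config qw( %Config );

# Accept either native integers of at least 8 bytes, or NVs whose exact
# integer range extends at least to 2**40.
my $ok = $Config{uvsize} >= 8
      || eval( $Config{nv_overflows_integers_at} ) >= 2**40;

die "This build cannot represent 40-bit integers exactly\n" unless $ok;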