I have to save the number of non-zero entries in a matrix whose dimensions could each be as large as a uint64_t, so the count could require up to a 128-bit (unsigned) value. I'm not sure which data type would be right for this variable in C, as it would need 128 bits (unsigned). I would use __int128 as the data type, but my problem is that when I test the maximum supported integer type on my system with
#include <stdio.h>
#include <stdint.h>
int main(void) {
    printf("maxUInt: %zu\n", sizeof(uintmax_t));
    printf("maxInt: %zu\n", sizeof(intmax_t));
}
It gives the following result:
maxUInt: 8
maxInt: 8
meaning that 8 bytes is the maximum for number representation. This troubles me, as the result may be 128 bits == 16 bytes wide. Will __int128 still work in my case?
CodePudding user response:
We're talking about the size of an array, so uintmax_t and intmax_t are irrelevant. malloc() accepts a size_t. The following therefore computes the limit of how much you can request:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("2^( %zu * %d ) bytes\n", sizeof(size_t), CHAR_BIT);
    return 0;
}
For me, that's 2^( 8 * 8 ) octets.
18,446,744,073,709,551,616
But I'm on an x86-64 machine. Those don't support nearly that much memory: the instruction set only supports 2^48 octets of address space.
281,474,976,710,656 (1/65,536 of what 64 bits can support)
But no x86-64 machine supports that much. Current hardware only supports 2^40 octets of memory.
1,099,511,627,776 (1/16,777,216 of what 64 bits can support)
So unless you have some very special hardware, 64 bits is more than enough to store the size of any array your machine can handle.
Still, let's answer your question about support for __int128 and unsigned __int128. These two types, if supported, are an extension to the standard. And they are apparently not candidates for intmax_t and uintmax_t, at least on my compiler, so checking the size of intmax_t and uintmax_t is not useful for detecting their support.
If you want to check whether you have support for __int128 or unsigned __int128, simply try to use them.
__int128 i = 0;
unsigned __int128 u = 0;
If both uintmax_t and unsigned __int128 are too small, you can still use extended-precision math, such as by using two 64-bit integers in the manner shown in Maxim Egorushkin's answer.
CodePudding user response:
One portable option is to construct a counter out of multiple smaller units:
#include <stddef.h>
#include <stdint.h>

typedef struct BigCounterC {
    uint64_t count_[2]; /* count_[0] is the high unit, count_[1] the low unit. */
} BigCounterC;

void BigCounterC_increment(BigCounterC* counter) {
    // Increment from the lowest unit up; carry into the next-higher unit
    // whenever an increment wraps an unsigned unit around to 0.
    for (size_t n = sizeof counter->count_ / sizeof *counter->count_;
         n-- && !++counter->count_[n];);
}

int main(void) {
    BigCounterC c2 = {0}; // Zero-initialize.
    BigCounterC_increment(&c2);
    return 0;
}
C++ version:
#include <cstddef>
#include <cstdint>
#include <type_traits>

template<class Unit, std::size_t N>
struct BigCounter {
    static_assert(std::is_unsigned_v<Unit>); // Unsigned overflow is well defined.
    Unit count_[N] = {}; // Zero-initialize.
    BigCounter& operator++() noexcept {
        // Increment from the lowest unit up; carry into the next-higher unit
        // whenever an increment wraps an unsigned unit around to 0.
        for(auto n = N; n-- && !++count_[n];);
        return *this;
    }
};

int main() {
    BigCounter<std::uint64_t, 2> c;
    ++c;
}