On Linux, if vm.overcommit_memory=1, you can allocate huge memory blocks, and as long as you touch only a small part of them, the untouched part does not add to memory usage.
Let's suppose the following code:
const size_t size = 128;
void *p = malloc(size);
process(p, size); // use up to size bytes
vs
const size_t HUGE_SIZE = 1ull * 1024ull * 1024ull * 1024ull; // 1 GB
const size_t size = 128;
void *p = malloc(HUGE_SIZE);
process(p, size); // use up to size bytes
Memory usage in both cases will be "similar" (OK, maybe 4 KB in the second case vs 128 bytes in the first).
- does the second approach really take only 4 KB?
- is the second approach slower?
- what if I have several thousand blocks of 1 GB?
- what if I often allocate/deallocate these several thousand blocks?
- any more disadvantages I cannot see?
- I read that macOS supports the same; is there any difference there?
CodePudding user response:
does the second approach really take only 4 KB?
In both cases the process consumes as much physical memory as process() actually touches, rounded up to page granularity. The difference is in how much of the process address space is allocated.
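To make that visible, here is a minimal sketch (Linux-specific, since it reads /proc/self/status; print_mem_usage is a helper written for this demo): it reserves 1 GB, touches only 128 bytes, and prints VmSize (address space) versus VmRSS (physical memory) at each step.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the VmSize / VmRSS lines from /proc/self/status. */
static void print_mem_usage(const char *label)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    printf("--- %s ---\n", label);
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    const size_t HUGE_SIZE = 1ull * 1024ull * 1024ull * 1024ull; /* 1 GB */

    print_mem_usage("before malloc");

    char *p = malloc(HUGE_SIZE);
    if (!p)
        return 1;
    print_mem_usage("after malloc, untouched");  /* VmSize jumps by ~1 GB, VmRSS barely moves */

    memset(p, 0xAB, 128);                        /* touching 128 bytes faults in one 4 KB page */
    print_mem_usage("after touching 128 bytes"); /* VmRSS grows by roughly one page */

    free(p);
    return 0;
}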
is the second approach slower?
It may be slower, because the allocator has to search for a large enough free region in the process address space.
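A rough micro-benchmark sketch (results are machine-dependent and the iteration counts are arbitrary choices for illustration): it times malloc/free pairs for 128 bytes, which are served from the heap free lists, against 1 GB, which goes through mmap/munmap on every iteration.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Average nanoseconds per malloc/free pair of the given size. */
static double bench_malloc(size_t size, int iters)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        void *p = malloc(size);
        if (!p)
            abort();
        free(p);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / iters;
}

int main(void)
{
    printf("128 B: %.0f ns per malloc/free\n", bench_malloc(128, 100000));
    printf("1 GB : %.0f ns per malloc/free\n",
           bench_malloc(1ull * 1024ull * 1024ull * 1024ull, 1000));
    return 0;
}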
what if I have several thousand blocks of 1 GB?
On a 32-bit system the process address space is limited to 2^32 bytes (in practice less), so the allocations will fail. On a 64-bit system you'll have several terabytes of process address space allocated; that still fits in the roughly 128 TB of user address space on typical x86-64 Linux.
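A sketch that probes the limit directly (the MAX_BLOCKS cap is an arbitrary safety limit so the demo terminates): it keeps reserving 1 GB blocks until malloc fails. On a 32-bit system it stops after two or three blocks; on 64-bit with overcommit it reaches the cap without consuming physical memory.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t ONE_GB = 1ull * 1024ull * 1024ull * 1024ull;
    enum { MAX_BLOCKS = 4096 };      /* arbitrary cap for the demo */
    static void *blocks[MAX_BLOCKS];
    int n = 0;

    /* With overcommit, failure here means the address space ran out,
       not physical RAM. */
    while (n < MAX_BLOCKS && (blocks[n] = malloc(ONE_GB)) != NULL)
        n++;

    printf("reserved %d blocks of 1 GB (%d GB of address space)\n", n, n);

    for (int i = 0; i < n; i++)
        free(blocks[i]);
    return 0;
}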
what if I often allocate/deallocate these several thousand blocks?
You'll put pressure on the glibc allocator: blocks far above the mmap threshold are served directly by mmap, so every allocate/free pair costs a pair of mmap/munmap system calls.
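A sketch of the churn pattern and one obvious mitigation (churn and reuse are hypothetical helpers written for this demo; the memset of a small prefix stands in for real work). Under glibc, M_MMAP_THRESHOLD (see mallopt(3)) tops out far below 1 GB, so the churny version maps and unmaps on every iteration, which strace makes visible; the reusing version pays for the mapping once, and the untouched tail costs no physical memory.

#include <stdlib.h>
#include <string.h>

#define ONE_GB (1ull * 1024ull * 1024ull * 1024ull)

/* Churny pattern: under glibc each iteration maps and unmaps 1 GB. */
void churn(int iters)
{
    for (int i = 0; i < iters; i++) {
        char *p = malloc(ONE_GB);
        if (!p)
            return;
        memset(p, 0, 128); /* stand-in for real work on a small prefix */
        free(p);           /* mapping and page tables torn down again */
    }
}

/* Cheaper pattern: reserve once, reuse. Untouched pages cost no
   physical memory, so holding the block between uses is nearly free. */
void reuse(int iters)
{
    char *p = malloc(ONE_GB);
    if (!p)
        return;
    for (int i = 0; i < iters; i++)
        memset(p, 0, 128); /* same stand-in work, no mmap churn */
    free(p);
}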