I am writing code on WSL1 and I am scratching my head over a very confusing problem.
First, I allocate ~10 GB of RAM using malloc:
int** megaarray;
megaarray = (int**)malloc( x*sizeof(int*) * y * count );
for(int i=0; i<x * count; i++){ // x = 3200, count = 1000, both integers
    megaarray[i] = (int*)malloc( y*sizeof(int) ); // y = 4200, integer
}
Allocation goes well and I use that memory; then, at the end of some computation, I try to deallocate:
for(int i=0; i<x * count; i++){ // x = 3200, count = 1000, both integers
    free(megaarray[i]);
}
free(megaarray);
I kept getting a crash the second time the function using the above code runs, so I put a sleep between each allocation/deallocation step, only to find that the deallocation is simply not happening! What's going on?
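For reference, this is roughly how I am watching the memory while the sleeps run, by reading /proc/self/statm (this helper is illustrative, not my exact code):

#include <stdio.h>
#include <unistd.h>

/* Print this process's resident set size, read from /proc/self/statm
   (field 2 of statm is the resident page count). */
static void print_rss(const char *label)
{
    long pages = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f) return;
    if (fscanf(f, "%*ld %ld", &pages) != 1) pages = 0;
    fclose(f);
    printf("%s: RSS = %ld MB\n", label, pages * sysconf(_SC_PAGESIZE) / (1024 * 1024));
}

int main(void)
{
    print_rss("demo");  /* call this before and after each alloc/free phase */
    return 0;
}

Calling a helper like this before and after the free loop shows whether the resident size actually drops.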
CodePudding user response:
Such weird behaviour: on WSL1, when count is 1000, freeing doesn't quite work and the code crashes.
On a full Linux x64 machine, freeing appears to work (judging by sampling memory usage), but the code crashes right away.
On WSL1, when count is brought down to 715, the code works perfectly fine; on full Linux, the code runs perfectly, but only if count is brought down to 600.
I am guessing this has to do with buffering and the maximum RAM allowed per application.
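To check whether such a per-process cap is actually in play, the limits can also be read programmatically with getrlimit(2); a quick sketch:

#include <stdio.h>
#include <sys/resource.h>

/* Print a resource limit, or "unlimited" if none is set. */
static void show(const char *name, int resource)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) != 0) return;
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%s: unlimited\n", name);
    else
        printf("%s: %llu bytes\n", name, (unsigned long long)rl.rlim_cur);
}

int main(void)
{
    show("RLIMIT_AS   (address space)", RLIMIT_AS);
    show("RLIMIT_DATA (data segment) ", RLIMIT_DATA);
    return 0;
}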
CodePudding user response:
Hmm... it does not seem to be an OS-imposed limit:
> ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256523
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 256523
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
what the heck is going on?!
CodePudding user response:
Okay, I found the issue, and it is the dumbest mistake on my side. I passed the wrong size to malloc:
megaarray=(int**)malloc( x*sizeof(int*) * y * count );
is actually 4 bytes times the memory I need. I would have ended up using 4 * 1000 * 3200 * 4200 / 1000000000 ≈ 54 GB!
Reducing count to 600 gives ~31 GB, and my systems have 32 GB; reducing count to 715 gives ~35 GB, which should not have worked...
It should actually be:
megaarray=(int**)malloc( x * y * count );
and the code works on all of the platforms.
By getting rid of the extra sizeof(int*) factor, I use ~13 GB even with count at 1000.
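For the record, here is the size arithmetic as a standalone check. Note that on a 64-bit build sizeof(int*) is 8, not 4, so the original expression is even bigger than my estimate; the "textbook" line is the conventional size for a table of x*count pointers, which differs from both expressions above:

#include <stdio.h>

int main(void)
{
    size_t x = 3200, y = 4200, count = 1000;

    size_t original = x * sizeof(int*) * y * count; /* the buggy expression */
    size_t fixed    = x * y * count;                /* the expression I switched to */
    size_t textbook = x * count * sizeof(int*);     /* x*count pointers, nothing more */

    printf("original expression: %7.1f GB\n", original / 1e9); /* ~107.5 GB on x64 */
    printf("fixed expression   : %7.1f GB\n", fixed / 1e9);    /* ~13.4 GB */
    printf("textbook table size: %7.3f GB\n", textbook / 1e9); /* ~0.026 GB */
    return 0;
}

The pointer table itself only needs ~26 MB; the x * y * count expression works in practice simply because it is far larger than that.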
Now, the questions left standing:
- How did this work at least half of the way on WSL?!
- Why is WSL1 not "freeing" RAM back to Windows when the memory is freed, and why does it eat up more RAM instead of reusing the already-allocated sections when allocating more? (My best guess is sketched below.)
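I do not have a verified answer for the WSL1 side, but two stock Linux behaviours plausibly cover both questions: Linux overcommits by default, so a malloc far beyond physical RAM can still succeed and pages are only committed when first touched (which would explain how the oversized request got part of the way), and glibc's free() often keeps pages mapped inside the process for reuse instead of returning them to the kernel, so from the outside it looks like nothing was freed. The glibc extension malloc_trim() forces that hand-back; a sketch (whether WSL1 then returns the RAM to Windows is a separate question I cannot answer):

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    enum { ROWS = 100000, COLS = 4200 };  /* ~1.7 GB of row data, glibc heap */

    int **a = malloc(ROWS * sizeof *a);
    if (!a) return 1;
    for (int i = 0; i < ROWS; i++) {
        a[i] = malloc(COLS * sizeof **a);
        if (!a[i]) return 1;
        memset(a[i], 1, COLS * sizeof **a);  /* touch the pages so they are resident */
    }

    for (int i = 0; i < ROWS; i++)
        free(a[i]);
    free(a);

    /* Without this, glibc may keep the freed arena mapped for reuse;
       malloc_trim(0) returns 1 if memory was actually released to the OS. */
    printf("malloc_trim released memory: %d\n", malloc_trim(0));
    return 0;
}

On glibc, very large single allocations are served by mmap and returned to the kernel immediately on free(); it is the heap grown for many small allocations, like the rows here, that tends to linger, which is what malloc_trim targets.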