If I need a program that reads/writes more than 1 TB of data randomly, the simplest approach is to put all the data in memory. On a PC with 2 GB of memory we can still get by, at the cost of a lot of I/O. Different computers have different amounts of memory, so how can one program allocate a suitable amount on machines ranging from 2 GB to 2 TB?
I thought of reading /proc/meminfo and allocating MemFree, but I suspect there is a better way.
note:
- Linux preferred, but other OSes are welcome
- avoid being OOM-killed as far as possible (without root)
- as little disk I/O as possible
- uses multiprocessing
- a C or C++ answer is fine
CodePudding user response:
You can use the GNU extension get_avphys_pages()
from glibc:
The get_avphys_pages function returns the number of available pages of physical memory the system has. To get the amount of memory this number has to be multiplied by the page size.
Sample code:
#include <unistd.h>
#include <sys/sysinfo.h>
#include <stdio.h>

int main(void) {
    long int pagesize = getpagesize();
    long int avail_pages = get_avphys_pages();
    long int avail_bytes = avail_pages * pagesize;
    printf("Page size:%ld Pages:%ld Bytes:%ld\n",
           pagesize, avail_pages, avail_bytes);
    return 0;
}
Result Godbolt
Program returned: 0
Page size:4096 Pages:39321 Bytes:161058816
This is the amount of PHYSICAL memory available in your box, so:
- The truly usable memory can be higher, since the process can page in/out.
- It is also a maximum, since other processes are using memory too.
So treat the result as an estimated upper bound on the available RAM.
If you plan to allocate large chunks of memory, use mmap() directly, as malloc() is too high-level for this usage.