Linux, process ends before i can read its peak memory usage


I have to make a tool like those "online judges": compile and execute C/C++ code, return some results, and also determine whether the program exceeds a time limit or a memory limit.

I found several Linux commands that take a process's PID, and I was able to get the PID after starting the executable, but the process would finish before those commands could read its memory usage, so they would throw an error.

I also found Valgrind, but it takes too long to finish.

Is there a way to start an executable built from a given C/C++ file and keep it from exiting until I tell it to, so that I have time to read its memory usage?

A bash/C/C++ solution, thanks.

CodePudding user response:

If the only concern is that a memory limit is not exceeded, then instead of reading the peak memory usage it is simpler to run the program inside a properly constrained cgroup.

Both options below use the same underlying mechanism (cgroups).

Let's say your program is ./a.out.

option 1 (simpler): Run a container with memory limit

This option requires Docker to be installed.

Run into a temporary container (e.g. based on busybox):

docker run --rm -it -v "$PWD":/work -w /work --memory=64m busybox ./a.out

option 2: Create a custom cgroup and run process into it

This option requires the cgroup-tools package to be installed (that is its name on Debian-based systems; it may be named differently in other distributions).

Create a cgroup (as superuser) for a normal user. For example, create a cgroup test1 for the memory resource controller. The group is administered by user user1, so user1 can both set limits (-a) and run tasks (-t) inside the cgroup:

sudo cgcreate -a user1:user1 -t user1:user1 -g memory:test1

As the normal user user1, set a limit through the /sys/fs/cgroup hierarchy (the path below is for cgroup v1; on cgroup-v2 systems the equivalent file is the cgroup's memory.max):

echo 64m > /sys/fs/cgroup/memory/test1/memory.limit_in_bytes

Run a program inside a cgroup using cgexec:

cgexec -g memory:test1 ./a.out
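Putting the steps above together, a judge-side script might look like this sketch (it assumes cgroup v1 paths, that the cgroup test1 from above already exists, and that an over-limit process is OOM-killed, which shows up as termination by SIGKILL, i.e. exit status 137):

```shell
#!/bin/sh
# One-time setup (as root), from the steps above:
#   sudo cgcreate -a user1:user1 -t user1:user1 -g memory:test1

# Set a 64 MiB limit (cgroup v1 path).
echo 64m > /sys/fs/cgroup/memory/test1/memory.limit_in_bytes

# Run the submission inside the cgroup. If it exceeds the limit,
# the kernel OOM-kills it and the shell reports status 128+9=137.
cgexec -g memory:test1 ./a.out
status=$?
if [ "$status" -eq 137 ]; then
    echo "memory limit exceeded"
fi
```

The same exit-status check works for the Docker variant in option 1, since a container killed for exceeding --memory also exits with 137.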

CodePudding user response:

Have the C/C++ code send itself the STOP signal with a kill() call right before it exits. This will freeze the process. You can then do whatever you need to in your script and then send the process the CONT signal to un-freeze it.

CodePudding user response:

Use the /usr/bin/time command with a custom output format, for example:

/usr/bin/time -f "***** Maximum RSS = %M kB *****" COMMAND ARGS...

The last line of output will be your custom string containing the maximum resident set size. Note, however, that RSS is not the same as total memory use; see the Wikipedia article on resident set size for an explanation.
