I have a list of PIDs of processes running on different GPUs. I want to get the used GPU memory of each process based on its PID. nvidia-smi
yields the information I want; however, I don't know how to grep it, as the output is sophisticated. I have already looked for how to do it, but I have not found any straightforward answers.
CodePudding user response:
While the default output of nvidia-smi
is "sophisticated", or rather formatted for humans instead of scripts, the command provides plenty of options for scripted use. The ones most fitting for your use case seem to be --query-compute-apps=pid,used_memory
, which selects exactly the information you need, and --format=csv,noheader,nounits
, which produces minimal, machine-readable output.
So the resulting command is
nvidia-smi --query-compute-apps=pid,used_memory --format=csv,noheader,nounits
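If you want to narrow that down to a single PID from your list, you can filter the CSV output, for example with awk. This is just a minimal sketch; 1234 stands in for one of your PIDs and is not part of nvidia-smi itself:
nvidia-smi --query-compute-apps=pid,used_memory --format=csv,noheader,nounits | awk -F', ' -v pid=1234 '$1 == pid {print $2}'
This prints the used memory for that process as a bare number (in MiB, since nounits drops the unit suffix).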
I recommend taking a look at man nvidia-smi
for further information and options.
CodePudding user response:
nvidia-smi --query-compute-apps=pid,used_memory,gpu_bus_id --format=csv
The gpu_bus_id field will help you if you have multiple GPUs.
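If you then need to relate that bus ID to the GPU index shown elsewhere in nvidia-smi output, the device-level query exposes matching fields; a sketch, assuming you just want the lookup table printed:
nvidia-smi --query-gpu=index,pci.bus_id --format=csv,noheader
You can match the pci.bus_id column here against the gpu_bus_id reported for each process.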