How to define multiple gres resources in SLURM using the same GPU device?

Time:12-06

I'm running machine learning (ML) jobs that make use of very little GPU memory. Thus, I could run multiple ML jobs on a single GPU.

To achieve that, I would like to add multiple lines to the gres.conf file that specify the same device. However, the Slurm daemon doesn't accept this; the service fails with:

fatal: Gres GPU plugin failed to load configuration

Is there any option I'm missing to make this work?

Or maybe a different way to achieve that with SLURM?

It is somewhat similar to this question, but that one seems specific to CUDA code compiled in a particular way, which appears much narrower than my general case (or at least as far as I understand): How to run multiple jobs on a GPU grid with CUDA using SLURM
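For reference, the kind of configuration being attempted would look something like the sketch below (the device path and GPU type are placeholders, not from the question); it is this duplication of the same File= entry that slurmd rejects:

```
# gres.conf — attempted configuration (rejected by the Slurm daemon):
# two gres lines pointing at the same physical device
Name=gpu Type=v100 File=/dev/nvidia0
Name=gpu Type=v100 File=/dev/nvidia0
```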

CodePudding user response:

I don't think you can oversubscribe GPUs, so I see two options:

  1. You can configure the CUDA Multi-Process Service or
  2. pack multiple calculations into a single job that has one GPU and run them in parallel.
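Option 2 can be sketched as a batch script that requests a single GPU and launches several independent calculations in the background; the script names and resource numbers below are placeholders, not a definitive setup:

```bash
#!/bin/bash
#SBATCH --job-name=gpu-pack
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=4

# All background processes run inside the same allocation
# and therefore share the single allocated GPU.
python train_model_a.py &
python train_model_b.py &
python train_model_c.py &

# Wait for every background process before the allocation ends,
# otherwise Slurm kills them when the script exits.
wait
```

The `wait` at the end matters: without it, the job script exits immediately and Slurm tears down the allocation, killing the backgrounded processes.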

CodePudding user response:

Besides NVIDIA MPS, mentioned by @Marcus Boden, which is relevant for V100-type cards, there is also Multi-Instance GPU (MIG), which is relevant for A100-type cards.
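Note that Slurm also ships a dedicated gres/mps plugin for sharing a GPU through MPS, which may be closer to what the question asks for than duplicating gpu lines. A hedged sketch (the device path and share count are assumptions for illustration):

```
# gres.conf — expose one GPU as 100 MPS shares (sketch)
Name=gpu File=/dev/nvidia0
Name=mps Count=100 File=/dev/nvidia0
```

With `GresTypes=gpu,mps` configured in slurm.conf, jobs could then request a fraction of the card, e.g. `--gres=mps:25` for roughly a quarter of it, letting several such jobs land on the same GPU.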
