How to limit CPU numbers in Docker Client API?


I have a script that uses the Docker Python library (Docker Client API). I would like to limit each Docker container to use only 10 CPUs (out of 30 CPUs on the instance), but I couldn't find a way to achieve that.

I know Docker has a --cpus flag, but the Python library only documents cpu_shares (int): CPU shares (relative weight). Does anyone have experience setting a limit on CPU usage through this API?

import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')
# Memory can be capped like this, but how do I cap the CPUs?
container = client.containers.run(my_docker_image, mem_limit='30g')

Edits:

I tried nano_cpus as suggested here, e.g. client.containers.run(my_docker_image, nano_cpus=10000000000) to set 10 CPUs. When I inspect the container, it does show "NanoCpus": 10000000000. However, if I run R inside the container and call parallel::detectCores(), it still reports 30, which confuses me. I have also added the R tag.
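For reference, this is roughly what I ran (a minimal sketch; my_docker_image stands in for my actual image, and detach=True is only there so I can inspect the container afterwards):

import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')

# 1 CPU = 1e9 nano CPUs, so 10 CPUs = 10000000000
container = client.containers.run(my_docker_image, nano_cpus=10000000000, detach=True)

container.reload()
print(container.attrs['HostConfig']['NanoCpus'])  # shows 10000000000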

Thank you!

CodePudding user response:

Everything I have seen about this is done at the operating-system level. Is there a reason you can't just run these commands on the system directly? Just asking for clarification here.

If you can use the command line, then you can use this command:

sudo docker run -it --cpus=1.0 alpine:latest /bin/sh
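If the script has to stay in the Python client, here is a rough equivalent of --cpus=1.0 (a sketch only; --cpus maps onto the cpu_period/cpu_quota cgroup settings, and alpine:latest is just the same example image):

import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')

# --cpus=1.0 is roughly cpu_quota/cpu_period = 100000/100000 microseconds
container = client.containers.run('alpine:latest', command='/bin/sh',
                                  tty=True, detach=True,
                                  cpu_period=100000, cpu_quota=100000)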

CodePudding user response:

Setting nano_cpus works, and you can use parallelly::availableCores() in R to detect the CPUs allotted by cgroups; parallel::detectCores() reports the host's logical cores and ignores the cgroup limit.
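As a sketch of how that looks end to end (assuming a hypothetical image my-r-image that has R and the parallelly package installed):

import docker

client = docker.DockerClient(base_url='unix://var/run/docker.sock')

# Run both checks inside a container capped at 10 CPUs
output = client.containers.run(
    'my-r-image',
    command=['Rscript', '-e',
             'cat(parallel::detectCores(), parallelly::availableCores())'],
    nano_cpus=10000000000,
    remove=True,
)
# detectCores() reports the host's 30 cores, availableCores() the cgroup limit of 10
print(output.decode())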
