How to limit containers to run on different nodes using docker stack deploy


I have three nodes in a Docker swarm (all nodes are managers), and I want to run a ZooKeeper cluster across these three nodes.

My docker-compose file:

version: '3.8'
services:
  zookeeper1:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-1"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-1:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888,zookeeper-2:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper2:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-2"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-2:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=2
      - ZOO_SERVERS=zookeeper-1:2888:3888,0.0.0.0:2888:3888,zookeeper-3:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test
  zookeeper3:
    image: 'bitnami/zookeeper:latest'
    hostname: "zookeeper-3"
    ports:
      - '2181'
      - '2888'
      - '3888'
    volumes:
      - "zookeeper-3:/opt/bitnami/zookeeper/conf"
    environment:
      - ZOO_SERVER_ID=3
      - ZOO_SERVERS=zookeeper-1:2888:3888,zookeeper-2:2888:3888,0.0.0.0:2888:3888
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - network_test

# top-level objects referenced by the services above
networks:
  network_test:
    driver: overlay

volumes:
  zookeeper-1:
  zookeeper-2:
  zookeeper-3:

I deploy with docker stack deploy (command below). My expectation is that each ZooKeeper instance will run on a different node, but sometimes one node starts two zookeeper containers.
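For reference, the deploy command has this shape (the stack name zk is only an example):

docker stack deploy -c docker-compose.yml zk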

Can docker stack deploy do this?

Thanks.

CodePudding user response:

To start a service on each available node in your Docker Swarm cluster, you need to run it in global mode.
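For illustration, a minimal global-mode sketch (service and image names borrowed from the question):

services:
  zookeeper:
    image: bitnami/zookeeper:latest
    deploy:
      # global mode: the scheduler runs exactly one task on every available node
      mode: global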

But in your case, because each ZooKeeper needs its own volume, you can instead use placement constraints to control the nodes a service can be assigned to. Add the following section to each ZooKeeper service, which will pin each instance to a different node:

services:
  ...
  zookeeper1:
    ...
    deploy:
      placement:
        constraints:
          - node.hostname==node1
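Spelled out for all three services, assuming your node hostnames are node1, node2 and node3 (check yours with docker node ls):

services:
  zookeeper1:
    # ... image, ports, volumes, environment as before
    deploy:
      placement:
        constraints:
          - node.hostname==node1
  zookeeper2:
    # ...
    deploy:
      placement:
        constraints:
          - node.hostname==node2
  zookeeper3:
    # ...
    deploy:
      placement:
        constraints:
          - node.hostname==node3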

CodePudding user response:

If you roll your zookeepers into a single service, then you can use max_replicas_per_node.

Like this:

version: "3.9"

volumes:
  zookeeper:
    name: '{{index .Service.Labels "com.docker.stack.namespace"}}_zookeeper-{{.Task.Slot}}'

services:
  zookeeper:
    image: zookeeper:latest
    hostname: zoo{{.Task.Slot}}
    volumes:
      - zookeeper:/conf
    environment:
      ZOO_MY_ID: '{{.Task.Slot}}'
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ALLOW_ANONYMOUS_LOGIN: 'yes'
    ports:
    - 2181:2181
    deploy:
      replicas: 3
      placement:
        max_replicas_per_node: 1
        constraints:
        - node.role==manager  # the question's three nodes are all managers; node.role==worker would match nothing there
#        - node.labels.zookeeper==true
  • Here I use Docker's service templates to assign each replica a sequential hostname, leveraging the slot id that swarm assigns to each task.
  • I also template the volume name to ensure that each ZooKeeper accesses its own data, so that "zoo2" will never try to access the data written by a "zoo1" or "zoo3" instance. This allows the volume to potentially be mapped to a network share.
  • Finally, replicas and max_replicas_per_node ensure that 3 zoo tasks are started and don't share nodes. Although, given that the volumes don't conflict, it's not really a big deal. (If you prefer the commented-out label constraint, see the node-labelling sketch after this list.)
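To use the commented-out label constraint instead, label the nodes first (node names here are assumptions; list yours with docker node ls):

docker node update --label-add zookeeper=true node1
docker node update --label-add zookeeper=true node2
docker node update --label-add zookeeper=true node3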

The problem with this approach is that Docker swarm will, over time, move the zookeeper task replicas onto different swarm nodes. Local volumes are local to each node by default, so while "stack_zookeeper-1" will be created on each node where "zoo1" is scheduled, it will contain different data on each node unless you use a network mount to share your swarm volumes.
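For illustration only, a minimal sketch of an NFS-backed named volume using the local driver (the server address 10.0.0.10 and the export path are placeholder assumptions, and each replica would still need its own export or subdirectory, which this sketch does not address):

volumes:
  zookeeper:
    driver: local
    driver_opts:
      # mount an NFS export as the backing store for this named volume
      type: nfs
      o: 'addr=10.0.0.10,rw'
      device: ':/exports/zookeeper'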

It really looks like ZooKeeper doesn't use a persistent volume for anything here; the mount holds only configuration files anyway, so this is unnecessary for this particular case.
