TCP server in a Docker Swarm deployment and Docker Swarm load balancing


I am trying to understand how Docker Swarm does its load balancing and how that affects the design of a TCP socket server (since the server has to accept the client connection to get the socket object it uses to return the result of the service). To experiment with this, I created the following echo server:

# server.py
import pickle
import socket
import threading
from time import sleep

import numpy as np


class Server:
    def __init__(self, port=5050, header_size=64, encode_format='utf-8'):
        host = "localhost"
        self.addr = (host, port)
        self.header_size = header_size
        self.encode_format = encode_format
        self.disconnect_message = 'DISCONNECT'

        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind(self.addr)
        self.id = np.random.randint(1024)

    def do_stuff(self, conn, addr):
        msg_length = conn.recv(self.header_size).decode(self.encode_format)
        msg_length = int(msg_length)

        # THIS PART WAS REMOVED TO KEEP THINGS SIMPLE
        # message = conn.recv(msg_length) #.decode(self.encode_format)
        # while len(message) < msg_length:
        #     packet = conn.recv(msg_length-len(message)) #.decode(self.encode_format)
        #     message += packet

        print(f"[MSG] {msg_length} from {addr}")
        self.send(conn, f"{msg_length} response from {self.id}")
        return True

    def send(self, conn, msg):
        message = pickle.dumps(msg)
        msg_length = len(message)
        send_length = str(msg_length).encode(self.encode_format)
        send_length += b' ' * (self.header_size - len(send_length))  # pad the header up to header_size bytes

        # print("Sending response length")
        conn.send(send_length)
        # print("Sending response data")
        # conn.send(message)

    def handle_client(self, conn, addr):
        print("-------------------------------------")
        print(f"[CLIENT] new client {addr} connected.")
        connected = True
        while connected:
            connected = self.do_stuff(conn, addr)
        print(f"[CLIENT] {addr} has disconnected")
        conn.close()
        print(f"[Active CONNECTIONS] {threading.active_count() -2}")

    def start(self):
        self.server.listen()
        print(f"[LISTENING] Server is listening on {self.addr[0]} port {self.addr[1]}")
        print(f"[LISTENING] Server is listening on {self.addr[0]} port {self.addr[1]}")
        try:
            while True:
                print("Listening . . .")
                conn, addr = self.server.accept()
                thread = threading.Thread(target=self.handle_client, args=(conn, addr))
                thread.start()
                print(f"[Active CONNECTIONS] {threading.active_count() -1}")
                sleep(2)
        except KeyboardInterrupt:
            print("Interrupt signal received from Keyboard")
        self.server.close()
        print(f"[STOP] self.server {self.addr[0]} has stopped listening on port {self.addr[1]}")

if __name__ == "__main__":
    server = Server()
    server.start()

I was able to run the server within a docker container using the following Dockerfile:

FROM python:3.6.9
COPY App /App
WORKDIR /App
RUN pip3 install -U pip
RUN pip3 install numpy
EXPOSE 5050
ENTRYPOINT [ "python3" ]
CMD [ "server.py" ]

docker run command:

sudo docker run -it --rm -p 5050:5050 --name test IMAGE_NAME

Client

# Client
import pickle
import socket

PORT = 5050
HOST = "SERVER_IP"  # or the IP of the server if it is running on a different machine
ADDR = (HOST, PORT)
HEADER = 64
FORMAT = 'utf-8'
DISCONNECT_MESSAGE = 'DISCONNECT'

def send(sock, msg):
    message = pickle.dumps(msg)
    msg_length = len(message)
    send_length = str(msg_length).encode(FORMAT)
    send_length += b' ' * (HEADER - len(send_length))  # pad the fixed-size header

    sock.send(send_length)

def receive(sock):
    # mirrors the simplified server above, which only sends back the fixed-size header
    return sock.recv(HEADER).decode(FORMAT).strip()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(ADDR)
for i in range(200):
    msg = f"Message number {i}"
    send(sock, msg)
    response = receive(sock)
    print(response)
sock.close()

However, when it is deployed as a Docker Swarm service, the client hangs on sock.connect() (I have 2 worker nodes and 1 manager node in the swarm).

service create command:

sudo docker service create --name echo_server -p 5050:5050 --replicas 2 PRIVATE_REGISTRY_IP:PORT/IMAGE_NAME:TAG
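
Before changing anything, a quick probe with a connection timeout can show whether the published port is reachable at all, instead of letting connect() hang. This is only a diagnostic sketch; NODE_IP is a placeholder for the address of any node in the swarm:

# probe.py -- diagnostic sketch: is the published service port answering?
import socket

NODE_IP = "NODE_IP"   # placeholder: IP of any swarm node (manager or worker)
PORT = 5050

try:
    # connect with a timeout instead of hanging indefinitely
    with socket.create_connection((NODE_IP, PORT), timeout=5) as sock:
        print("connected:", sock.getpeername())
except socket.timeout:
    print("timed out: the port is published but nothing answered")
except ConnectionRefusedError:
    print("refused: nothing is listening on that port")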

All the swarm server examples I found use the nginx image, which I was able to deploy on a worker node and then access through the IP of the manager node (which is what I am trying to achieve for my TCP socket server).

  1. What am I missing? What am I doing wrong?
  2. Is there something different in the design of the TCP socket server that I should do?
  3. How is the client connecting to the server with the implementation of the swarm load balancer?
  4. Is it always being sent to the same server that it was first connected to?
  5. Is every request sent to a different server?
  6. Do all the servers have one client, namely the load balancer (in which case there is no need to open a thread for every client), with the load balancer holding the information about the clients?

Solution

I was able to deploy the server in a stack

stack.yml

version: "3"
services:
  echo_server:
    image: REGISTRY_IP/IMAGE_NAME
    ports:
      - "5050:5050"
    deploy:
      replicas: 3

and then run

sudo docker stack deploy -c /path/to/stack.yml STACK_NAME

and then the client was able to connect to the service through the host IP of one of the worker/manager machines.
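
On the client side this just means pointing HOST at a node address, since the port is published on every swarm node (see the answer below); a minimal sketch with NODE_IP as a placeholder:

HOST = "NODE_IP"  # placeholder: IP of any manager or worker node in the swarm
ADDR = (HOST, PORT)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(ADDR)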

CodePudding user response:

Docker swarm has, for this purpose, two load balancers:

The ingress load balancer is used when you publish a port from a service. That port is then published on every swarm node, so any node can be used to connect to the service, and Docker will round-robin new connections across the available (healthy) replicas.
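
Because the balancing happens per TCP connection, everything sent over one established connection keeps going to the same replica, while each new connection may land on a different one (which answers questions 4 and 5 above). A rough way to observe this with the echo server from the question is sketched below; it assumes the commented-out body receive/send are restored (and the header padded rather than overwritten) so that the reply actually carries the replica's self.id, and NODE_IP is a placeholder:

# roundrobin_check.py -- open several *separate* connections; each new TCP
# connection is balanced independently, while an already established
# connection keeps talking to the same replica.
import pickle
import socket

NODE_IP = "NODE_IP"   # placeholder: IP of any swarm node
PORT = 5050
HEADER = 64
FORMAT = 'utf-8'

def send_msg(sock, msg):
    body = pickle.dumps(msg)
    header = str(len(body)).encode(FORMAT)
    header += b' ' * (HEADER - len(header))   # pad, don't overwrite, the fixed-size header
    sock.send(header)
    sock.send(body)

def recv_msg(sock):
    length = int(sock.recv(HEADER).decode(FORMAT))
    return pickle.loads(sock.recv(length))

for i in range(5):
    with socket.create_connection((NODE_IP, PORT)) as conn:
        send_msg(conn, f"hello {i}")
        # with several replicas, different self.id values should show up here
        print(recv_msg(conn))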

Because this traffic arrives over a bridge interface inside the container, the server cannot listen on "localhost": binding to 127.0.0.1 restricts it to the loopback interface, so the bridged connections are never accepted. It has to bind to 0.0.0.0 (all interfaces) instead.
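
For the echo server in the question, that means changing the bind address in Server.__init__; a minimal sketch of the change (not the poster's exact code):

        host = "0.0.0.0"    # all interfaces, instead of "localhost" (loopback only)
        self.addr = (host, port)
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind(self.addr)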

Note that while each swarm node accepts connections to localhost:port (that traffic enters the ingress network on the node), a container cannot use "localhost" to reach it, because inside a container "localhost" refers to the container's own loopback interface, not the host's. The way to reach the host from inside a container is the special name host.docker.internal plus the published port (this has to be enabled for each container/service that needs it).
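
For example, a process inside a container that needs a port published on its own host could connect like this (a sketch; on Linux the name usually has to be added explicitly, e.g. with --add-host=host.docker.internal:host-gateway on recent Docker versions):

import socket

# "localhost" here would be the container itself; the special name below
# resolves to the machine the container runs on, where the port is published.
with socket.create_connection(("host.docker.internal", 5050)) as sock:
    print("connected to", sock.getpeername())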

The other load balancer is the mesh networking load balancer. When you deploy a service in a stack, Docker modifies resolv.conf in each container to point at a Docker-internal resolver that resolves the service names on every network the container is attached to. Docker Swarm also creates a virtual IP (VIP) for the service, which acts as a layer-4 load balancer, and associates it with the DNS name of the service on each network.

So, if you deployed a service called "echotest" as part of a stack called "test", and two replicas were created, then Docker would assign the following:

A VIP with the IP 10.0.1.5, with the names "echotest" and "test_echotest".

Two tasks, with the IPs 10.0.1.6 and 10.0.1.7 respectively.

Any other service attached to the "test_default" network (10.0.1.0/24) would be able to use the names "echotest" and "test_echotest" to connect to the VIP, which in turn connects to one of the containers.

A service that did its own load balancing could use the special names "tasks.echotest" and "tasks.test_echotest" to get a DNS round-robin (RR) response: just the task IPs with their order randomised, e.g. [10.0.1.6, 10.0.1.7].
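
From inside another container attached to the same overlay network, those names can also be resolved directly from Python; a sketch using the example names above (it only resolves from within that network):

import socket

# the service name resolves to the single VIP
print("vip:", socket.gethostbyname("test_echotest"))

# tasks.<service> resolves to every task/replica IP (DNS round robin)
infos = socket.getaddrinfo("tasks.test_echotest", 5050, proto=socket.IPPROTO_TCP)
print("tasks:", sorted({info[4][0] for info in infos}))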

This set of commands shows the results on my swarm.

$ docker network create --attachable --driver overlay test
$ docker service create --network test --replicas 2 --name test-nginx nginx:latest
$ docker run --rm -it --network test nicolaka/netshoot
$ dig test-nginx +short
10.0.38.2
$ dig tasks.test-nginx +short
10.0.38.3
10.0.38.6
$ exit
$ docker service rm test-nginx
$ docker network rm test