I have a simple web server that exposes the name of the pod it is running on via the OUT environment variable.
The Deployment and Service look like this:
apiVersion: v1
kind: Service
metadata:
  name: simpleweb-service
spec:
  selector:
    app: simpleweb
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpleweb-deployment
  labels:
    app: simpleweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simpleweb
  template:
    metadata:
      labels:
        app: simpleweb
    spec:
      containers:
        - name: simpleweb
          env:
            - name: OUT
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          imagePullPolicy: Never
          image: simpleweb
          ports:
            - containerPort: 8080
I deploy this on my local kind cluster:
default simpleweb-deployment-5465f84584-m59n5 1/1 Running 0 12m
default simpleweb-deployment-5465f84584-mw8vj 1/1 Running 0 9m36s
default simpleweb-deployment-5465f84584-x6n74 1/1 Running 0 12m
and access it via
kubectl port-forward service/simpleweb-service 8080:8080
When I hit localhost:8080, I always reach the same pod.
Questions:
- Is my Service not doing round robin?
- Is there some caching that I am not aware of?
- Do I have to expose my Service differently? Is this a kind issue?
CodePudding user response:
kubectl port-forward to a Service does not load-balance: it picks a single pod matching the Service's selector and forwards all traffic to that pod for the lifetime of the session. If you want round robin, you need to reach the Service through the cluster network instead, e.g. via an ingress controller such as Traefik or NGINX, or a LoadBalancer Service.
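One way to see that the Service itself does spread traffic is to curl it from inside the cluster rather than through port-forward. A sketch, assuming the simpleweb-service from the question is running in the default namespace (curlimages/curl is just an example client image):

```shell
# Start a throwaway pod inside the cluster to act as a client
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- sh

# From inside that pod, hit the Service repeatedly; kube-proxy
# distributes these requests across the three backend pods,
# so different pod names should appear in the responses
for i in 1 2 3 4 5; do curl -s http://simpleweb-service:8080; echo; done
```

Note that kube-proxy's default iptables mode picks a random backend per connection rather than strict round robin, but over several requests you should see all three pods respond.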
CodePudding user response:
To do round robin and route to different pods, I have to use a LoadBalancer
Service. MetalLB implements a load balancer for kind. Unfortunately, it currently does not support Apple M1 machines.
I assume that a MetalLB LoadBalancer Service would work on a different machine.
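With MetalLB installed and an address pool configured, the Service from the question could be exposed as a LoadBalancer by setting its type. A sketch (the external IP that MetalLB assigns depends on your configured address pool):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simpleweb-service
spec:
  type: LoadBalancer   # MetalLB assigns an external IP from its pool
  selector:
    app: simpleweb
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```

Requests to the assigned external IP then go through kube-proxy, which spreads them across the matching pods instead of pinning to one.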