When I try to run kubectl apply -f frontend.yaml, I get the following responses from kubectl get pods and kubectl describe pods.
# frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: malvacom-frontend
  labels:
    app: malvacom-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: malvacom-frontend
  template:
    metadata:
      labels:
        app: malvacom-frontend
    spec:
      containers:
        - name: malvacom-frontend
          image: docker.io/forsrobin/malvacom_frontend
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "128Mi"
              cpu: "200m"
          livenessProbe:
            httpGet:
              path: /index.html
              port: 80
            initialDelaySeconds: 15
            timeoutSeconds: 2
            periodSeconds: 5
            failureThreshold: 1
          readinessProbe:
            httpGet:
              path: /index.html
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 5
            failureThreshold: 1
          command: [ "sleep" ]
          args: [ "infinity" ]
and then the responses are
kubectl get pods
malvacom-frontend-8575c8548b-n959r 0/1 CrashLoopBackOff 5 (95s ago) 4m38s
kubectl describe pods
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s default-scheduler Successfully assigned default/malvacom-frontend-8575c8548b-n959r to shoot--p1622--malvacom-web-xdmoi2-z1-54776-bpjpw
Normal Pulled 15s (x2 over 16s) kubelet Container image "docker.io/forsrobin/malvacom_frontend" already present on machine
Normal Created 15s (x2 over 16s) kubelet Created container malvacom-frontend
Normal Started 15s (x2 over 16s) kubelet Started container malvacom-frontend
Warning BackOff 11s (x4 over 14s) kubelet Back-off restarting failed container
As I understand it, the pod starts, but because it has no continuous task to do, Kubernetes removes/stops the pod. I can run the image locally without any problem, and if I use another image, for example thenetworkchuck/nccoffee:pourover, it works without any problems. This is my Dockerfile:
FROM node:alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY ./package.json /app/
RUN yarn --silent
COPY . /app
RUN yarn build
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
CodePudding user response:
You're explicitly telling Kubernetes not to run its normal server:

  command: [ "sleep" ]
  args: [ "infinity" ]

but then saying it should pass an HTTP health check:

  livenessProbe:
    httpGet:
      path: /index.html
      port: 80
Since sleep infinity doesn't run an HTTP server, this probe will never pass, which causes your container to get killed and restarted.
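One way to confirm that nothing is answering on port 80 inside the pod is to exec into it (using the pod name from your kubectl get pods output) and request the page; with the sleep override in place the request is refused, and once nginx is actually running it returns index.html:

  # pod name taken from the kubectl get pods output above
  kubectl exec malvacom-frontend-8575c8548b-n959r -- wget -qO- http://localhost/index.html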
You shouldn't need to do artificial things to "keep the container alive"; just delete the command: and args: override. (The Dockerfile CMD is correct, but you get an identical CMD from the base nginx image, so you don't need to repeat it.)
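With the override removed, the containers block of the Deployment would look like this (probes and everything else unchanged):

  containers:
    - name: malvacom-frontend
      image: docker.io/forsrobin/malvacom_frontend
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      resources:
        limits:
          memory: "128Mi"
          cpu: "200m"
      # livenessProbe and readinessProbe stay exactly as they are
      # no command: / args: override, so the image's CMD ("nginx -g daemon off;")
      # starts the server and the httpGet probes on port 80 can succeed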