Angular frontend crashes when deployed to Kubernetes


I am trying to deploy an Angular frontend app on Kubernetes, but I always get this error:

NAME                              READY   STATUS             RESTARTS   AGE
common-frontend-f74c899cc-p6tdn   0/1     CrashLoopBackOff   7          15m

When I try to view the pod's logs, it prints just an empty line. How can I find out where the problem could be?

This is the Dockerfile; the build pipeline using it has always passed:

### STAGE 1: Build ###

# We label our stage as 'builder'
FROM node:10.11 as builder

COPY package.json ./
COPY package-lock.json ./

RUN npm set progress=false && npm config set depth 0 && npm cache clean --force
ARG NODE_OPTIONS="--max_old_space_size=4096"
## Storing node modules on a separate layer will prevent unnecessary npm installs at each build
RUN npm i && mkdir /ng-app && cp -R ./node_modules ./ng-app

WORKDIR /ng-app

COPY . .

## Build the angular app in production mode and store the artifacts in dist folder
RUN $(npm bin)/ng build --prod --output-hashing=all

### STAGE 2: Setup ###

FROM nginx:1.13.3-alpine

## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/conf.d/

## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*

## From 'builder' stage copy the artifacts in dist folder to default nginx public folder
COPY --from=builder /ng-app/dist /usr/share/nginx/html

CMD ["nginx", "-g", "daemon off;"]

And this is deployment.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: common-frontend
  labels:
    app: common-frontend
spec:
  type: ClusterIP
  selector:
    app: common-frontend
  ports:
  - port: 80
    targetPort: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: common-frontend
  labels:
    app: common-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: common-frontend
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 33%
  template:
    metadata:
      labels:
        app: common-frontend
    spec:
      containers:
      - name: common-frontend
        image: skunkstechnologies/common-frontend:<VERSION>
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1

I really don't know what the problem could be. Can anyone help? Thanks!

CodePudding user response:

It looks like the liveness probe is failing, so Kubernetes keeps restarting the pod. Try commenting out the livenessProbe section and deploying again. If that helps, correct the probe parameters: timeout, delay, etc.
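For example, in deployment.yaml (the values below are illustrative, not required):

        # 1. Comment the probe out and redeploy; if the pod then stays
        #    Running, the probe itself is the problem.
        # 2. Then re-enable it with more forgiving values, e.g.:
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60   # more startup time before the first check
          timeoutSeconds: 5         # the original 1s is very tight
          periodSeconds: 10
          failureThreshold: 3       # restart only after 3 consecutive failures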

CodePudding user response:

Hmm. Your container dies and is restarted. First of all, look at its logs and status:

kubectl logs <pod_name>
kubectl describe pod <pod_name>
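
If the pod keeps restarting, the current container's logs may be empty; the logs of the previous (crashed) instance are often more telling:

kubectl logs <pod_name> --previous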

CodePudding user response:

Like I said, the logs command just prints an empty line, but here is the describe output:

Name:         common-frontend-f74c899cc-p6tdn
Namespace:    default
Priority:     0
Node:         pool-o9sbqlapb-ubj9o/10.135.170.45
Start Time:   Fri, 17 Dec 2021 13:26:35 +0100
Labels:       app=common-frontend
              pod-template-hash=f74c899cc
Annotations:  <none>
Status:       Running
IP:           10.244.0.248
IPs:
  IP:           10.244.0.248
Controlled By:  ReplicaSet/common-frontend-f74c899cc
Containers:
  common-frontend:
    Container ID:   containerd://87622688065a1afea051303c30ec38e13d523de15d0bbd5d7d22ddddfcabb797
    Image:          skunkstechnologies/common-frontend:96d415fc
    Image ID:       docker.io/skunkstechnologies/common-frontend@sha256:3f38eef817cf12836c7eee9e3aabc30ac1fff142b6ab1dd86e1cab5fa22c51cb
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 17 Dec 2021 14:34:46 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 17 Dec 2021 14:33:47 +0100
      Finished:     Fri, 17 Dec 2021 14:34:45 +0100
    Ready:          True
    Restart Count:  23
    Liveness:       http-get http://:8080/health delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jgl5b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-jgl5b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Warning  Unhealthy  13m (x60 over 68m)     kubelet  Liveness probe failed: Get "http://10.244.0.248:8080/health": dial tcp 10.244.0.248:8080: connect: connection refused
  Normal   Pulled     8m49s (x21 over 68m)   kubelet  Container image "skunkstechnologies/common-frontend:96d415fc" already present on machine
  Warning  BackOff    3m54s (x212 over 63m)  kubelet  Back-off restarting failed container
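
The Unhealthy event above is the actual diagnosis: the liveness probe gets "connection refused" on port 8080, fails three times in a row, and the kubelet then stops the container, which is consistent with the clean exit (Reason: Completed, Exit Code: 0) and the empty logs. The stock nginx image listens on port 80, so unless nginx/default.conf overrides that with listen 8080, nothing is listening on 8080 at all. A sketch of the aligned manifests, assuming nginx is left on its default port 80 (probing / instead of /health, since a plain static site serves no /health):

# Service
  ports:
  - port: 80
    targetPort: 80          # was 8080

# Deployment container
        ports:
        - containerPort: 80 # was 8080
        livenessProbe:
          httpGet:
            path: /         # /health does not exist on a plain static site
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5

Alternatively, keep the manifests on 8080 and make nginx listen on 8080 in nginx/default.conf, as sketched after the Dockerfile above.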