Gitlab CI/CD pipeline passed, but no changes were applied to the server

Time: 01-13

I am testing automation by applying GitLab CI/CD to a GKE cluster. The app deploys successfully, but source code changes are not applied (e.g., renaming the HTML title).

I have confirmed that the change is committed to the master branch of the GitLab repository; no other branch is involved.

CI/CD simply goes through the process below.

  1. Push code to the master branch
  2. Build the Next.js app
  3. Build the Docker image and push it to GCR
  4. Pull the Docker image and deploy it to the GKE cluster

The contents of the manifest files are as follows.

.gitlab-ci.yml

stages:
  - build-push
  - deploy

image: docker:19.03.12
variables:
  GCP_PROJECT_ID: PROJECT_ID..
  GKE_CLUSTER_NAME: cicd-micro-cluster
  GKE_CLUSTER_ZONE: asia-northeast1-b
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_TLS_CERTDIR: ""
  REGISTRY_HOSTNAME: gcr.io/${GCP_PROJECT_ID}
  DOCKER_IMAGE_NAME: ${CI_PROJECT_NAME}
  DOCKER_IMAGE_TAG: latest
services:
  - docker:19.03.12-dind

build-push:
  stage: build-push
  before_script:
    - docker info
    - echo "$GKE_ACCESS_KEY" > key.json
    - docker login -u _json_key --password-stdin https://gcr.io < key.json
  script:
    - docker build --tag $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG .
    - docker push $REGISTRY_HOSTNAME/$DOCKER_IMAGE_NAME:$DOCKER_IMAGE_TAG

deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - export USE_GKE_GCLOUD_AUTH_PLUGIN=True
    - echo "$GKE_ACCESS_KEY" > key.json
    - gcloud auth activate-service-account --key-file=key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set container/cluster $GKE_CLUSTER_NAME
    - gcloud config set compute/zone $GKE_CLUSTER_ZONE
    - gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_CLUSTER_ZONE --project $GCP_PROJECT_ID
    - kubectl apply -f deployment.yaml
    - gcloud container images list-tags gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME} --filter='-tags:*' --format="get(digest)" --limit=10 > tags && while read p; do gcloud container images delete "gcr.io/$GCP_PROJECT_ID/${CI_PROJECT_NAME}@$p" --quiet; done < tags

Dockerfile

# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi


# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
# ENV NEXT_TELEMETRY_DISABLED 1

RUN yarn build

# If using npm comment out above and use below instead
# RUN npm run build

# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app

ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
# ENV NEXT_TELEMETRY_DISABLED 1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT 3000

CMD ["node", "server.js"]

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontweb-lesson-prod
  labels:
    app: frontweb-lesson
spec:
  selector:
    matchLabels:
      app: frontweb-lesson
  template:
    metadata:
      labels:
        app: frontweb-lesson
    spec:
      containers:
      - name: frontweb-lesson-prod-app
        image: gcr.io/PROJECT_ID../REPOSITORY_NAME..:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: frontweb-lesson-prod-svc
  labels:
    app: frontweb-lesson
spec:
  selector:
    app: frontweb-lesson
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  type: LoadBalancer
  loadBalancerIP: "EXTERNAL_IP.."

Is there something I'm missing?

CodePudding user response:

By default, `imagePullPolicy` is `Always` for images tagged `:latest`. However, because you push the same `latest` tag on every run, the Deployment manifest never changes, so `kubectl apply` detects no difference and does not roll out new Pods; and without new Pods, the updated image is never pulled.

Note that `kubectl apply` behaves differently from `kubectl patch`: `apply` only acts when the manifest differs from the live object, so an unchanged manifest simply returns an `unchanged` response.

What you can do is change a minor label or annotation in the Deployment's Pod template on each run; then `kubectl apply` will see a difference and trigger a new rollout, which pulls the updated image.
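One concrete way to force a fresh rollout (a sketch against the deploy job above; `kubectl rollout restart` requires kubectl 1.15+, and the Deployment name is taken from your deployment.yaml):

```yaml
deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    # ... auth and get-credentials steps as in the original job ...
    - kubectl apply -f deployment.yaml
    # Restart the Deployment even when the manifest is unchanged;
    # the replacement Pods re-pull :latest (imagePullPolicy defaults to Always).
    - kubectl rollout restart deployment/frontweb-lesson-prod
    # Optionally block until the new Pods are ready, failing the job otherwise.
    - kubectl rollout status deployment/frontweb-lesson-prod --timeout=120s
```

Alternatively, on older kubectl versions, a `kubectl patch` that stamps a per-pipeline annotation (e.g. `${CI_PIPELINE_ID}`) into the Pod template achieves the same effect.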

Ref: imagePullPolicy

You should avoid using the :latest tag when deploying containers in production as it is harder to track which version of the image is running and more difficult to roll back properly.
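A minimal sketch of that approach, assuming GitLab's predefined `CI_COMMIT_SHORT_SHA` variable: tag each image with the commit SHA and substitute it into the manifest before applying, so the `image` field (and therefore the Deployment spec) changes on every pipeline:

```yaml
variables:
  DOCKER_IMAGE_TAG: ${CI_COMMIT_SHORT_SHA}  # unique, traceable tag per commit

deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    # ... auth and get-credentials steps as in the original job ...
    # Rewrite the :latest tag in deployment.yaml to the commit SHA,
    # then apply — the changed image field triggers a normal rollout.
    - sed -i "s|:latest|:${CI_COMMIT_SHORT_SHA}|" deployment.yaml
    - kubectl apply -f deployment.yaml
```

This also makes rollbacks straightforward: `kubectl rollout undo` (or re-applying an older SHA) returns you to a known image version.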
