We're using GitLab for CI/CD. I'll include the GitLab CI/CD script we're using:
services:
  - docker:19.03.11-dind

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage"|| ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?) /i))
      when: always
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage"|| ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?) /i))
      when: never

stages:
  - build
  - Publish
  - deploy

cache:
  paths:
    - .m2/repository
    - target

build_jar:
  image: maven:3.8.3-jdk-11
  stage: build
  script:
    - mvn clean install package -DskipTests=true
  artifacts:
    paths:
      - target/*.jar

docker_build_dev:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?) /i
    - developer

docker_build_stage:
  stage: Publish
  image: docker:19.03.11
  services:
    - docker:19.03.11-dind
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  only:
    - stage

deploy_dev:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: development
  before_script:
    - apt update
    - apt-get install gettext-base
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_DEV $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_DEV}
  only:
    - /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?) /i
    - developer

deploy_stage:
  stage: deploy
  image: stellacenter/aws-helm-kubectl
  variables:
    ENV_VAR_NAME: stage
  before_script:
    - apt update
    - apt-get install gettext-base
    - aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
    - aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
    - aws configure set region ${DEV_AWS_DEFAULT_REGION}
  script:
    - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
    - mkdir -p $HOME/.kube
    - cp $KUBE_CONFIG_STAGE $HOME/.kube/config
    - chown $(id -u):$(id -g) $HOME/.kube/config
    - export KUBECONFIG=$HOME/.kube/config
    - cat patient-service.yml | envsubst | kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_STAGE}
  only:
    - stage
We merged the script above so that the stage and development environments don't clash during deployment. Previously we had a separate Dockerfile for each environment (stage and developer). Now I also want to merge the Dockerfile and the Kubernetes YAML file. I merged them, but the Dockerfile is not being picked up correctly: after the pipeline succeeds, Kubernetes shows the warning "Back-off restarting failed container". I don't know how to clear this warning in Kubernetes. I'll enclose the merged Dockerfile and YAML file for your reference.
Kubernetes YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patient-app
  labels:
    app: patient-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: patient-app
  template:
    metadata:
      labels:
        app: patient-app
    spec:
      containers:
        - name: patient-app
          image: registry.gitlab.com/stella-center/backend-services/patient-service:<VERSION>
          imagePullPolicy: Always
          ports:
            - containerPort: 8094
          env:
            - name: ENV_VAR_NAME
              value: "${ENV_VAR_NAME}"
      imagePullSecrets:
        - name: gitlab-registry-token-auth
---
apiVersion: v1
kind: Service
metadata:
  name: patient-service
spec:
  type: NodePort
  selector:
    app: patient-app
  ports:
    - port: 8094
      targetPort: 8094
Dockerfile:
FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/patient-service-*.jar /app/patient-service.jar
ENV PORT 8094
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active=$ENV_VAR_NAME","-jar","/app/patient-service.jar"]
In the Dockerfile, instead of that last line, we previously used:
ENTRYPOINT ["java","-Dspring.profiles.active=development","-jar","/app/patient-service.jar"] - for the developer Dockerfile
ENTRYPOINT ["java","-Dspring.profiles.active=stage","-jar","/app/patient-service.jar"] - for the stage Dockerfile
At that time it worked fine and I didn't face any issue on Kubernetes. All I changed was adding an environment variable so the image picks up whether it should run as development or stage. I don't know why the warning is happening. Please help me sort this out. Thanks in advance.
CodePudding user response:
Your Dockerfile uses the exec form of ENTRYPOINT. This form doesn't expand environment variables; Spring is literally receiving the string $ENV_VAR_NAME as the profile name and failing on it.
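If you really did want the variable expanded when the container starts, you would have to run the command through a shell yourself. A minimal sketch of that shell-wrapper alternative (not what I'd recommend here):

ENTRYPOINT ["sh", "-c", "java -Dspring.profiles.active=$ENV_VAR_NAME -jar /app/patient-service.jar"]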
Spring knows how to set properties from environment variables, though. Rather than building that setting into the Dockerfile, you can use an environment variable to set the profile name at deploy time.
# Dockerfile: do not set `-Dspring.profiles.active`
ENTRYPOINT ["java", "-jar", "/app/patient-service.jar"]
# Deployment YAML: do set `$SPRING_PROFILES_ACTIVE`
env:
  - name: SPRING_PROFILES_ACTIVE
    value: "${ENV_VAR_NAME}" # Helm: {{ quote .Values.environment }}
However, with this approach, you still need to set deployment-specific settings in your src/main/resources/application-*.yml file, then rebuild the jar file, then rebuild the Docker image, then redeploy. This doesn't make sense for most settings, particularly since you can set them as environment variables. If one of these values needs to change, you can just change the Kubernetes configuration and redeploy, without recompiling anything.
# Deployment YAML: don't use Spring profiles; directly set variables instead
env:
  - name: SPRING_DATASOURCE_URL
    value: "jdbc:postgresql://postgres-dev/database"
CodePudding user response:
Run the following command to get output explaining why your pod crashes:
kubectl describe pod -n <your-namespace> <your-pod>
Additionally, the output of kubectl get pod -o yaml -n <your-namespace> <your-pod> has a status section that holds the reason for restarts. You might have to look up the exit code; e.g., 137 stands for OOM.
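The container's own output is usually the most direct clue for a back-off/restart loop; kubectl logs with the --previous flag shows the logs of the last crashed container (namespace and pod name are placeholders):

kubectl logs -n <your-namespace> <your-pod> --previous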