helm - run pods and dependencies with a predefined flow order


I am using Kubernetes with Helm.

I need to run pods and dependencies with a predefined flow order.

How can I create Helm dependencies that run a pod only once (e.g. to populate the database for the first time) and exit after the first success?

Also, I have several pods, and I want to run a pod only when certain conditions occur and only after another pod has been created.

I need to build two pods, as described below:

I have a database.

1st step is to create the database.

2nd step is to populate the db.

Once I populate the db, this job needs to finish.

3rd step is another pod (not the db pod) that uses the database and is always in listen mode (never stops).

Can I define the order in which the dependencies run (rather than always in parallel)?

I see that the helm create command generates templates for deployment.yaml and service.yaml; would a pod.yaml be a better choice?

What are the best chart types for this scenario?

Thanks.

CodePudding user response:

You can achieve this using Helm hooks and Kubernetes Jobs. Below is an example of this setup for a Rails application.

The first step is to define a Kubernetes Job that creates and populates the database:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "my-chart.name" . }}-db-prepare
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
      - name: {{ template "my-chart.name" . }}-db-prepare
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: ["/docker-entrypoint.sh"]
        args: ["rake", "db:extensions", "db:migrate", "db:seed"]
        envFrom:
        - configMapRef:
            name: {{ template "my-chart.name" . }}-configmap
        - secretRef:
            name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      initContainers:
      - name: init-wait-for-dependencies
        image: wshihadeh/wait_for:v1.2
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: ["/docker-entrypoint.sh"]
        args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
        envFrom:
        - configMapRef:
            name: {{ template "my-chart.name" . }}-configmap
        - secretRef:
            name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      imagePullSecrets:
      - name: {{ .Values.imagePullSecretName }}
      restartPolicy: Never

Note the following:

1- The Job definition has Helm hooks so that it runs on each deployment and is executed before everything else:

    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded

2- The container command takes care of preparing the database:

command: ["/docker-entrypoint.sh"]
args: ["rake", "db:extensions", "db:migrate", "db:seed"]

3- The Job will not start until the database connection is up (this is achieved via an initContainer):

args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]

The second step is to define the application Deployment object. This can be a regular Deployment (make sure you don't use Helm hooks on it). Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "my-chart.name" . }}-web
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum  }}
    checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum  }}
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.webReplicaCount }}
  selector:
    matchLabels:
      app: {{ template "my-chart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum  }}
        checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum  }}
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
        service: web
    spec:
      imagePullSecrets:
      - name: {{ .Values.imagePullSecretName }}
      containers:
        - name: {{ template "my-chart.name" . }}-web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["web"]
          envFrom:
          - configMapRef:
              name: {{ template "my-chart.name" . }}-configmap
          - secretRef:
              name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: {{ .Values.restartPolicy  }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
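
For reference, here is a minimal values.yaml sketch covering the keys these two templates reference. The values themselves are illustrative placeholders, not from the original answer:

# values.yaml (illustrative defaults for the keys used above)
image:
  repository: myorg/my-rails-app   # placeholder image
  tag: latest
  pullPolicy: IfNotPresent
imagePullSecretName: my-registry-secret
existingSecret: ""                 # set this to reuse an existing Secret instead of the chart's own
webReplicaCount: 2
restartPolicy: Always
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []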

CodePudding user response:

If I understand correctly, you want to build a dependency chain in your deployment strategy to ensure certain things are prepared before any of your applications start. In your case, you want a deployed and pre-populated database before your app starts.

I propose not building a dependency chain like this, because it complicates your deployment pipeline and prevents your deployment processes from scaling properly once you start deploying more than a couple of apps. In highly dynamic environments like Kubernetes, every deployment should be able to check the prerequisites it needs to start on its own, without depending on an order of deployments.

This can be achieved with a combination of initContainers and probes. Both can be specified per deployment to prevent it from failing if certain prerequisites are not met, and/or to fulfill certain prerequisites before a Service starts routing traffic to your deployment (in your case, the database).

In short:

  • To populate a database volume before the database starts, use an initContainer.
  • To let the database serve traffic only after its initialization and prepopulation, define probes that check for these conditions. Your database will only start to serve traffic after its livenessProbe and readinessProbe have succeeded. If it needs extra time, protect the pod from being terminated with a startupProbe.
  • To ensure the deployment of your app does not start and fail before the database is ready, use an initContainer that checks whether the database is ready to serve traffic before your app starts (see the sketch below).
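
For illustration, here is a minimal sketch of that approach, assuming a PostgreSQL database. The image names, the seed-file image, the mydb Service name and the port are placeholder assumptions, not something from the original question:

# Database Deployment: an initContainer stages seed SQL, probes gate traffic
# until the database is actually ready to accept connections.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydb
spec:
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      initContainers:
        # copies seed SQL into the directory the postgres image executes on first start
        - name: stage-seed-sql
          image: myorg/db-seeder          # hypothetical image containing the seed files
          command: ["sh", "-c", "cp /seed/*.sql /docker-entrypoint-initdb.d/"]
          volumeMounts:
            - name: initdb
              mountPath: /docker-entrypoint-initdb.d
      containers:
        - name: postgres
          image: postgres:15
          env:
            - name: POSTGRES_PASSWORD
              value: example              # use a Secret in a real deployment
          ports:
            - containerPort: 5432
          # gives the database time to initialize before liveness checks start
          startupProbe:
            exec:
              command: ["pg_isready", "-U", "postgres"]
            failureThreshold: 30
            periodSeconds: 10
          # traffic is only routed once the database accepts connections
          readinessProbe:
            exec:
              command: ["pg_isready", "-U", "postgres"]
            periodSeconds: 5
          livenessProbe:
            exec:
              command: ["pg_isready", "-U", "postgres"]
            periodSeconds: 10
          volumeMounts:
            - name: initdb
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: initdb
          emptyDir: {}
---
# Application Deployment: an initContainer blocks until the database answers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        # waits for the (assumed) "mydb" Service to answer on its port
        - name: wait-for-db
          image: busybox:1.36
          command: ["sh", "-c", "until nc -z mydb 5432; do echo waiting for db; sleep 2; done"]
      containers:
        - name: myapp
          image: myimage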

Check out the Kubernetes documentation on init containers and probes for more information.

CodePudding user response:

There is a fixed order in which Helm creates resources, which you cannot influence apart from using hooks.

Helm hooks can cause more problems than they solve, in my experience. This is because most often the hooks actually rely on resources which only become available after the hooks are done, for example ConfigMaps, Secrets and ServiceAccounts / RoleBindings. This leads you to move more and more things into the hook lifecycle, which isn't idiomatic IMO. It also leaves those resources dangling when uninstalling a release.

I tend to use Jobs and init containers that block until the Jobs are done:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: db-create
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
        - name: db-create
          image: myimage
          command: [db, create]
      restartPolicy: Never
---
apiVersion: batch/v1
kind: Job
metadata:
  name: db-populate
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      initContainers:
        - name: wait-for-db-create
          image: bitnami/kubectl
          args:
            - wait
            - job.batch/db-create
            - --for=condition=complete
            - --timeout=120s
      containers:
        - name: db-populate
          image: myimage
          command: [db, populate]
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: wait-for-db-populate
          image: bitnami/kubectl
          args:
            - wait
            - job.batch/db-populate
            - --for=condition=complete
            - --timeout=120s
      containers:
        - name: myapp
          image: myimage