K8S - build and run the database only once, used by the main chart with its handling in a sub-chart


I am using Kubernetes with Helm 3.3.4.

I have a MySQL 5.6 database that is declared in a Helm chart and deployed with helm install ... .

For deletion I use helm delete .... The problem is that the whole database is deleted with it (the database is handled by the sub-chart, which is deleted as well).

I have tried declaring the sub-chart as a dependency, but I don't really know how to do that in my own cluster on my own machine (before moving to an external registry) while keeping the data and not deleting it (maybe using a StatefulSet?).
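
What I tried was roughly this in the main chart's Chart.yaml (the sub-chart name and path here are placeholders for my actual sub-project; alternatively, the sub-chart can just be placed in the main chart's charts/ directory):

apiVersion: v2
name: my-main-chart
version: 0.1.0
dependencies:
  - name: mySubProject                    # placeholder name of my sub-chart
    version: 0.1.0
    repository: "file://../mySubProject"  # local path, no external registry needed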

I need to create the database once and populate it (the job that populates the database can be run several times, because it checks database integrity and only populates when needed). This should all be done with one helm install command.

The resource types I use for the database are a Pod and a Job (the Job populates the database), plus a Role, a RoleBinding, and a Service for the subdomain.

Populating data into the database is done by a job that first checks what has already been populated, so the population job can be run many times; the job itself is responsible for checking the database.

The database itself, however, only needs to be created once.

For the database creation and population I use a Helm sub-chart, which also contains a listener job.

The basic structure of the Helm YAML files looks like this:

In the main project, first I create the MySQL database:

apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Namespace }}-mysql
  namespace: {{ .Release.Namespace }}
  labels:
    name: {{ .Release.Namespace }}-mysql
    app: {{ .Release.Namespace }}-mysql
spec:
  hostname: {{ .Release.Name }}-mysql
  subdomain: {{ .Release.Name }}-subdomain # there must be a service
  containers:
    - name: {{ .Release.Name }}-mysql
      image: {{ .Values.global.registry }}/{{ .Values.global.mysql.image }}:{{ .Values.global.mysql.tag | default "latest" }}
      imagePullPolicy: IfNotPresent
      env:
      {{- include "k8s.db.env" . | nindent 8}}
      ports:
        - name: mysql
          protocol: TCP
          containerPort: {{ .Values.global.mysql.port }}
      resources: {{- toYaml .Values.global.mysql.resources | nindent 8 }}
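
The Service for the subdomain (referenced by the Pod's subdomain field above) looks roughly like this; it has to be a headless Service whose name matches the subdomain value so the Pod gets a DNS record:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-subdomain # must match the Pod's "subdomain" field
  namespace: {{ .Release.Namespace }}
spec:
  clusterIP: None # headless: gives the Pod a <hostname>.<subdomain>.<namespace>.svc DNS name
  selector:
    app: {{ .Release.Namespace }}-mysql
  ports:
    - name: mysql
      port: {{ .Values.global.mysql.port }}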

The main service listens for external REST requests and makes use of the sub-project.

The sub-project contains a migration job for creating and populating the database:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "mySubproject.fullname" . }}-migration-job
  namespace: {{ .Release.Namespace }}
  labels:
    name: {{ include "mySubproject.fullname" . }}-migration-job
    app: {{ include "mySubproject.fullname" . }}-migration-job
spec:
  template: #PodTemplateSpec (Core/V1)
    spec: #PodSpec (core/v1)
      initContainers:
        - name: wait-mysql-exist-pod
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          ...
          ... # wait for the mysql Pod to exist (first time)
        - name: wait-mysql-ready
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          ...
          ... # wait for mysql to be in a ready state
        - name: wait-mysql-has-db
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          ...
          ... # wait for mysql to have a database
      containers:
        - name: migrate-db
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: {{ .Values.global.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default "latest" }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ...
          ... # db migration ...
      restartPolicy: Never

The role to enable the migration job:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "mySubProject.fullname" . }}-mysql-role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list", "update"]
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods/exec"]
  verbs: ["create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"]
- apiGroups: ["", "app", "batch"] # "" indicates the core API group
  resources: ["jobs"]
  verbs: ["get", "watch", "list"]
---      
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "mySubProject.fullname" . }}-mysql-rolebinding
subjects:
- kind: ServiceAccount
  name: default
roleRef:
  kind: Role
  name: {{ include "mySubProject.fullname" . }}-mysql-role
  apiGroup: rbac.authorization.k8s.io  

The listener job:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "mySubProject.fullname" . }}-listen-job
  namespace: {{ .Release.Namespace }}
  labels:
    name: {{ include "mySubProject.fullname" . }}-listen-job
    app: {{ include "mySubProject.fullname" . }}-listen-job
  annotations:
    "prometheus.io/scrape": {{ .Values.prometheus.scrape | quote }}
    "prometheus.io/path": {{ .Values.prometheus.path }}
    "prometheus.io/port": {{ .Values.ports.api.container | quote }}
spec:
  template: #PodTemplateSpec (Core/V1)
    spec: #PodSpec (core/v1)
      initContainers:
        - name: wait-migration-job-exists
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          ... 
          ... # wait for the migration job to exist
        - name: wait-migration-job-complete
          image: {{ .Values.global.registry }}/{{ .Values.global.k8s.image }}:{{ .Values.global.k8s.tag | default "latest" }}
          imagePullPolicy: IfNotPresent
          ...
          ... # wait for job to complete
      containers:
        - name: listen-db
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: {{ .Values.global.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag | default "latest" }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          # listens for external requests and talks to the database
      restartPolicy: Never

What is the methodology for Helm to create some pods once and not re-create them from scratch? Is the helm install command OK for this?

CodePudding user response:

What is the methodology for helm to create some pods once

Simply don't create Pods directly; use a controller such as a Deployment, or in your case a StatefulSet would be a better fit.

CodePudding user response:

The most obvious problem with what you've shown here is that there is no persistence behind the MySQL database. The database itself is stored in the Pod's local storage. If the Pod is ever deleted for any reason (including things outside your control, like its Node failing), your database will be lost.

The easiest way to address this is not to install MySQL yourself; instead, declare a Helm dependency on a prebuilt MySQL chart. The Bitnami MySQL chart could be a good option here. Include this in your Chart.yaml file:

dependencies:
  - name: mysql
    version: ^8
    repository: https://charts.bitnami.com/bitnami

This will give you, among other things, a Service named {{ .Release.Name }}-mysql that you can use as the database host name.
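
After adding the dependency, run helm dependency update so the chart is downloaded into charts/. You can then configure it from your parent chart's values.yaml under the mysql key; a minimal sketch (the exact keys can vary between chart versions, so check the chart's documented values):

mysql:
  auth:
    rootPassword: change-me   # example only; use a Secret or --set in practice
    database: mydatabase
  primary:
    persistence:
      enabled: true
      size: 8Gi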

If you don't want to use this chart, you should convert your MySQL installation to a StatefulSet. A StatefulSet lets you declare a volumeClaimTemplates section, which creates a PersistentVolumeClaim to allocate persistent storage. The PVC will have a consistent name, and helm delete will not delete it. So if you do delete and reinstall your chart, it will create a new StatefulSet, which will create a new (primary) Pod, which will reattach to the existing PVC.
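
A minimal sketch of that conversion, reusing the names from your Pod above (env, security context, and resources are omitted, and the storage size is just an example):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Release.Name }}-mysql
  namespace: {{ .Release.Namespace }}
spec:
  serviceName: {{ .Release.Name }}-subdomain # the headless Service you already have
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Namespace }}-mysql
  template:
    metadata:
      labels:
        app: {{ .Release.Namespace }}-mysql
    spec:
      containers:
        - name: mysql
          image: {{ .Values.global.registry }}/{{ .Values.global.mysql.image }}:{{ .Values.global.mysql.tag | default "latest" }}
          ports:
            - name: mysql
              containerPort: {{ .Values.global.mysql.port }}
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql # MySQL's data directory
  volumeClaimTemplates: # creates a PVC that survives helm delete
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 8Gi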

(You should almost never create a bare Pod. Depending on what specifically you're trying to do, create a Deployment, Job, or StatefulSet instead, and that will automatically create [and recreate if required] the actual Pods.)
