I want to update the existing file settings.ini in the container at the time of Helm deployment, without harming the existing data in the container.
Here are my Helm files:
config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.metadata.name }}-config
data:
  settings.ini: |
    [server]
    hostname = "localhost"
    hot_deployment = false
    # offset = 10
    [user_store]
    type = "read_only_ldap"
The relevant part from Deployment.yaml:
containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
      - name: http
        containerPort: 80
        protocol: TCP
    volumeMounts:
      - name: config-volume
        mountPath: /home/bin/conf/
        subPath: settings.ini
volumes:
  - name: config-volume
    configMap:
      name: {{ .Values.metadata.name }}-config
I can see it creating the ConfigMap successfully, but when I check the logs in the cluster it complains about missing files.
How can I resolve this issue?
Also, is there a more efficient way to achieve this if I just want to update some of the configuration values in settings.ini, with different values in different environments?
CodePudding user response:
If you are on an older version of Kubernetes, changing the ConfigMap & Secret won't restart your container.
You can manually update the ConfigMap by applying the YAML (kubectl apply) or editing it in place (kubectl edit).
Once the ConfigMap is updated, you can simply run the helm command to roll out the update and create/update the new container. The old container won't be affected, and the change simply gets rolled out.
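For example, assuming a hypothetical release name my-release, chart directory ./my-chart, rendered ConfigMap my-app-config and Deployment my-app (substitute your own names), the manual flow could look like this:

kubectl edit configmap my-app-config        # edit the ConfigMap in place
helm upgrade my-release ./my-chart          # re-render and apply the chart
kubectl rollout restart deployment/my-app   # recreate the pods so they pick up the new settings.ini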
If you are looking for a way to automate it, you can use Reloader: https://github.com/stakater/Reloader
Reloader can watch for changes in ConfigMaps and Secrets and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, DaemonSets, StatefulSets and Rollouts.
Here is one nice example: https://github.com/stakater/Reloader/issues/46#issuecomment-457131306
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: "nginx-configmap"
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-config1
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
      volumes:
        - name: nginx-config1
          configMap:
            name: nginx-configmap
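Carried over to your chart, a minimal sketch would be to add the same annotation to your Deployment template's metadata (the Deployment name below is assumed; the ConfigMap name matches the one your config.yaml renders):

metadata:
  name: {{ .Values.metadata.name }}          # assumed Deployment name in your chart
  annotations:
    configmap.reloader.stakater.com/reload: "{{ .Values.metadata.name }}-config"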
CodePudding user response:
In recent versions of Kubernetes, mounted ConfigMaps are updated automatically:
When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync.
A container using a subPath volume mount may not receive the updates.
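For reference, that caveat applies to the kind of mount you are aiming for. A minimal sketch of a per-file overlay (leaving everything else in /home/bin/conf/ intact) would mount the key at its full file path; note that this subPath mount will not see later ConfigMap updates until the pod is restarted:

volumeMounts:
  - name: config-volume
    mountPath: /home/bin/conf/settings.ini   # overlay only this one file
    subPath: settings.ini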
Possible solution:
Use two containers:
- Main container: runs your application and listens on a specific port/endpoint path for HTTP requests. On receiving such a request, your application should reload its configuration. Since the subPath method is avoided, you can mount the ConfigMap into some other directory and move/copy it to the desired directory while reloading the configuration.
- Sidecar: runs a file-change watcher such as configmap-reload; when the file changes, it sends an HTTP request to the main container.
Sample:
containers:
  - name: main-container
    image: application:v1.0.0
    ports:
      - name: http
        containerPort: 80
        protocol: TCP
    volumeMounts:
      - name: config-volume
        mountPath: /some/path
  # configmap-reload watches the mounted dir and calls the webhook on change
  - name: side-car
    image: jimmidyson/configmap-reload:latest
    args:
      - "-volume-dir=/some/path"
      - "-webhook-url=http://localhost:80/reload-config"
    volumeMounts:
      - name: config-volume
        mountPath: /some/path
volumes:
  - name: config-volume
    configMap:
      name: {{ .Values.metadata.name }}-config