Let's say you've got a program that doesn't reload when its config changes. You introduce it to Kubernetes via Helm. You use a ConfigMap. All is good. Later you do a helm upgrade and… nothing happens. You are sad. You roll up your sleeves, write some code using inotify, and the program restarts as soon as the config changes. You are happy. Until the day you make a small typo in the config, run helm upgrade, and watch the continuous suicide of your Pods. Now you are sad again. If only there were a better way.
I present to you the better way. And it's simple. It solves both problems at once.
Conceptually it's pretty simple. You make the name of the ConfigMap include a hash of its contents. Now, when the contents change, the name changes, so the Deployment spec is different, and Kubernetes starts a rolling replacement of the Pods. The change ripples through: the new Pods must come online (pass their readiness checks) before the old ones are killed, so a broken config never takes down the running set. Boom! A sketch of the rolling-update settings this relies on follows.
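To be precise, the "new Pods before old Pods die" guarantee comes from the Deployment's rolling-update strategy plus a readiness probe. Here is a minimal sketch of the relevant knobs; the field names are standard Kubernetes, but the image and the /healthz endpoint are assumptions for illustration:

```yaml
# Sketch: rolling-update settings that make a bad config harmless.
# maxUnavailable: 0 means no old Pod is killed until a new one is Ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: x
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all old Pods until replacements are Ready
      maxSurge: 1         # bring up one new Pod at a time
  selector:
    matchLabels:
      app: x
  template:
    metadata:
      labels:
        app: x
    spec:
      containers:
        - name: x
          image: example/x:1.0     # hypothetical image
          readinessProbe:          # a Pod with a broken config never goes Ready,
            httpGet:               # so the rollout stalls instead of cascading
              path: /healthz       # assumed health endpoint
              port: 8080
```

With these settings, a typo'd config produces one stuck, not-Ready Pod and a stalled rollout you can roll back, while the old Pods keep serving.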
So here's a subset of an example. You're welcome.
```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "X.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
  labels:
    app.kubernetes.io/name: {{ include "X.name" . }}
    helm.sh/chart: {{ include "X.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
  server.json: {{ tpl .Values.config . | quote }}
  application.ini: {{ tpl .Values.application . | quote }}
  logback.xml: {{ tpl .Values.logging . | quote }}
---
apiVersion: apps/v1
kind: Deployment
...
          volumeMounts:
            - mountPath: /X/conf
              name: {{ include "X.fullname" . }}-config
...
      volumes:
        - name: {{ include "X.fullname" . }}-config
          configMap:
            name: {{ include "X.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
```
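For completeness: the template above assumes the three config files live in values.yaml as plain strings, so that tpl can render them and sha256sum can hash the result. A minimal sketch, where the keys match the template but the file contents are entirely invented:

```yaml
# values.yaml (sketch; keys match the template above, contents are made up)
config: |
  {
    "port": 8080
  }
application: |
  [server]
  threads = 4
logging: |
  <configuration>
    <root level="INFO"/>
  </configuration>
```

Note that tpl renders each value through the template engine first, so the values themselves may contain {{ }} references; the hash is taken over the rendered result, meaning a change to any referenced value also renames the ConfigMap and triggers a rollout.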