Support auto-injection of Kubernetes secrets into Kubernetes executor pods
## Description
Our organization uses custom tooling to create Kubernetes secrets from external secrets-management systems like Vault. We also use the GitLab Runner Helm chart to execute GitLab pipelines in our Kubernetes clusters. Accessing secrets from systems like Vault in GitLab pipelines, however, is quite difficult: the job script has to either authenticate to Vault directly and retrieve the desired secret, or read the corresponding Kubernetes secret from the API server. In either case, the Docker image used to execute the job has to include binaries like `curl`, `vault`, and `kubectl`. This often requires building custom images and adds significant overhead.
## Proposal
Update the GitLab Runner Helm chart API with fields that allow consumers to define Kubernetes secrets they would like to make available to Kubernetes executor pods via either environment variables or files.

While I'm not familiar with the implementation of the Kubernetes executor, I think this could be achieved by exposing additional arguments to the script that actually spawns Kubernetes executor pods, since the script already supports arguments for things like adding labels to executor pods.
Alternatively, the chart could support optionally creating a PodPreset manifest, since PodPresets are a native Kubernetes means to achieve the desired behavior. To use PodPresets, however, the Kubernetes executor pods would likely need a set of pre-configured labels, since PodPresets select pods via label selectors. Imagine if the Helm chart API were updated like so:
```yaml
runners:
  ## The spec of a Kubernetes PodPreset without the selector
  ## https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
  ##
  podPreset: {}
    # env: []
    # envFrom: []
    # volumeMounts: []
    # volumes: []
```
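A consumer could then fill in that stanza to inject secrets our tooling has already created. As a sketch (the secret names are hypothetical):

```yaml
runners:
  podPreset:
    # Expose every key of an existing Kubernetes secret as environment variables
    envFrom:
      - secretRef:
          name: vault-synced-secret  # hypothetical secret created by our tooling
    # Mount another secret's keys as files under /etc/secrets
    volumeMounts:
      - name: app-secrets
        mountPath: /etc/secrets
        readOnly: true
    volumes:
      - name: app-secrets
        secret:
          secretName: app-secrets  # hypothetical
```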
If Kubernetes executor pods are pre-configured with a `component` label with a value of `job`, a new template similar to the following could be added to achieve the desired behavior:
```yaml
{{- if .Values.runners.podPreset }}
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: {{ include "gitlab-runner.fullname" . }}
  labels:
    app: {{ template "gitlab-runner.name" . }}
    chart: {{ template "gitlab-runner.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  selector:
    matchLabels:
      component: job
{{ toYaml .Values.runners.podPreset | indent 2 }}
{{- end }}
```
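Note that PodPreset is an alpha feature, so the cluster would need the `settings.k8s.io/v1alpha1` API group and the PodPreset admission plugin enabled. For illustration, rendering the template above with a hypothetical `envFrom` entry under `runners.podPreset` would produce a manifest roughly like (the release name is made up):

```yaml
# Hypothetical rendered output; the admission controller would merge
# spec.envFrom into every pod matching the component: job selector.
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: my-release-gitlab-runner
  labels:
    app: gitlab-runner
    release: my-release
spec:
  selector:
    matchLabels:
      component: job
  envFrom:
    - secretRef:
        name: vault-synced-secret  # hypothetical
```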
I'd be more than happy to open a merge request if the latter implementation seems viable, but I figured I'd create a feature request first to get feedback. Thanks!