Unify docker registry authentication in docker and kubernetes executors
What does this MR do?
Extracts docker registry authentication logic to a single package and reuses it in the docker and kubernetes executors.
This only moves the code out of the docker executor and uses it in the kubernetes one. The tests had to be rewritten as well, since they were tightly coupled to the logic living in the executor.
The next step would be to refactor it a bit (follow-up issue: #25658):
- wrap the logic in a struct
- inject `HomeDir` into it as a dependency, so that tests don't have to set package variables
- inject the struct as a dependency into both executors using it
Why was this MR needed?
Context is in the issue. Users want to be able to authenticate to docker registries with the kubernetes
executor in the same way as with the docker one, e.g. via `DOCKER_AUTH_CONFIG`.
Are there points in the code the reviewer needs to double check?
Does this MR meet the acceptance criteria?
- Documentation created/updated
- Added tests for this feature/bug
- In case of conflicts with master: branch was rebased
Manual testing
Example testing scenario:
- Create a local kubernetes cluster using `kind`
- Push a private image to Docker Hub. In my case just cloning golang: `FROM golang:latest`
- Register a runner with the following config (edit the job token and docker auth; for docker the auth is `echo -n "my_username:my_password" | base64 | pbcopy`):
`kubernetes-config.toml`:

```toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "Kubernetes-kind"
  url = "https://gitlab.com"
  token = "<redacted>"
  executor = "kubernetes"
  environment = [
    "DOCKER_AUTH_CONFIG={\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"<redacted>\"}}}"
  ]
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
  [runners.kubernetes]
    host = "https://127.0.0.1:32768"
    ca_file = "/etc/ssl/kubernetes/ca.crt"
    cert_file = "/etc/ssl/kubernetes/api.crt"
    key_file = "/etc/ssl/kubernetes/kube-kind.key"
    bearer_token_overwrite_allowed = false
    image = ""
    namespace = ""
    namespace_overwrite_allowed = ""
    privileged = false
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    pull_policy = "always"
    [runners.kubernetes.pod_security_context]
    [runners.kubernetes.volumes]
```
- Make sure you've logged out of docker: `docker logout`. Otherwise it might authenticate using `~/.docker/config.json`
- Run a job which uses the image. In my case: https://gitlab.com/lraykov/cicd-experimentation/-/jobs/531833008
- If you remove the `environment` setting, the job should fail, e.g. https://gitlab.com/lraykov/cicd-experimentation/-/jobs/531830288
Ran the same tests with the docker executor:
- failed job - https://gitlab.com/lraykov/cicd-experimentation/-/jobs/531931652
- successful job - https://gitlab.com/lraykov/cicd-experimentation/-/jobs/531980830
What are the relevant issue numbers?
Closes #2673