Ability to turn on/off running the umask 0000 command for the Kubernetes executor
What does this MR do?
The helper image executes umask 0000, which results in 777 permissions for directories and 666 for files. This makes every directory and file in the build container writable by anyone, which is not best practice.
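For illustration only (not part of this MR), the effect of umask 0000 can be reproduced in any POSIX shell:

# Illustration: with umask 0000, new directories are created with 777 and new files with 666.
umask 0000
mkdir example-dir
touch example-file
ls -ld example-dir example-file
# drwxrwxrwx ... example-dir
# -rw-rw-rw- ... example-file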
When FF_DISABLE_UMASK_FOR_KUBERNETES_EXECUTOR is enabled, the kubernetes executor adds a new init container which creates an empty file in the build_dir directory with the build image's uid:gid.
The ownership of this temporary file is then used to update the ownership of the following directories (see the sketch after this list):
- Project directory
- Temporary Project directory
- Cache directory
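A rough sketch of the mechanism in shell (the marker file name and the directory variables below are illustrative, not the exact names used by the runner):

# Init container, running with the build container's security context
# (and therefore the build image's uid:gid): create an empty marker file
# inside the builds directory (/my_custom_dir in the test config below).
touch /my_custom_dir/.build-uid-gid-marker

# Later, the marker's owner is read back and reused to chown the
# project, temporary project and cache directories (variables are placeholders).
uid_gid="$(stat -c '%u:%g' /my_custom_dir/.build-uid-gid-marker)"
chown -R "$uid_gid" "$PROJECT_DIR" "$PROJECT_TMP_DIR" "$CACHE_DIR"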
I initially thought about setting this ownership directly from the new init container, as suggested in this comment: #28867 (comment 1899638464). However, the needed directories were not created yet at that point, and permission issues occurred when the chown command was run from the init container.
The new init container is created with the resource requests/limits and security context set for the build container.
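To verify that the init container really inherits the build container's requests/limits and security context, the generated build pod can be inspected (the pod name below is just an example):

# Inspect the init containers of a running build pod (pod name is an example).
kubectl get pod runner-xxxx-project-xxxx-concurrent-0-xxxx -o jsonpath='{.spec.initContainers[*].name}'
kubectl get pod runner-xxxx-project-xxxx-concurrent-0-xxxx -o jsonpath='{.spec.initContainers[0].securityContext}'
kubectl get pod runner-xxxx-project-xxxx-concurrent-0-xxxx -o jsonpath='{.spec.initContainers[0].resources}'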
Why was this MR needed?
This MR is needed to prevent the directories listed above from being writable by anyone.
What's the best way to test this MR?
config.toml
concurrent = 1
check_interval = 1
log_level = "debug"
shutdown_timeout = 0
listen_address = ':9252'

[session_server]
  session_timeout = 1800

[[runners]]
  name = "investigation"
  url = "https://gitlab.com/"
  id = 0
  token = "glrt-REDACTED"
  token_obtained_at = "0001-01-01T00:00:00Z"
  token_expires_at = "0001-01-01T00:00:00Z"
  executor = "kubernetes"
  shell = "bash"
  limit = 1
  builds_dir = "/my_custom_dir"
  [runners.kubernetes]
    host = ""
    bearer_token_overwrite_allowed = false
    image = "alpine"
    pod_termination_grace_period_seconds = 0
    namespace = ""
    namespace_overwrite_allowed = ""
    pod_labels_overwrite_allowed = ""
    service_account_overwrite_allowed = ""
    pod_annotations_overwrite_allowed = ""
    node_selector_overwrite_allowed = ".*"
    allow_privilege_escalation = false
    [[runners.kubernetes.volumes.empty_dir]]
      name = "repo"
      mount_path = "/my_custom_dir"
    [runners.kubernetes.build_container_security_context]
      run_as_user = 1000
      run_as_group = 65533
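Note that run_as_user = 1000 and run_as_group = 65533 in the build container security context are what produce the 1000:nogroup ownership visible in the job log below (gid 65533 resolves to nogroup in the alpine image).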
gitlab-ci.yaml
variables:
  FF_DISABLE_UMASK_FOR_KUBERNETES_EXECUTOR: "true"
  FF_USE_POWERSHELL_PATH_RESOLVER: "true"
  FF_RETRIEVE_POD_WARNING_EVENTS: "true"
  FF_PRINT_POD_EVENTS: "true"
  FF_SCRIPT_SECTIONS: "true"
  CI_DEBUG_SERVICES: "true"
  GIT_DEPTH: 5
  MY_TEST_VARIABLE_1: gitlab-ci
  MY_TEST_VARIABLE_2: gitlab-ci
  SAST_GOSEC_LEVEL: 2

simple-job:
  script:
    - ls -la /my_custom_dir
    - ls -l /my_custom_dir/ra-group2
Extract of job log
Executing "step_script" stage of the job script
00:00
$ ls -la /my_custom_dir
total 12
drwxrwxrwx 3 root root 4096 Jun 28 12:26 .
drwxr-xr-x 1 root root 4096 Jun 28 12:25 ..
-rw-r--r-- 1 1000 nogroup 0 Jun 28 12:25 .giltab-build-uid-gid
drwxrwxrwx 4 root root 4096 Jun 28 12:26 ra-group2
$ ls -l /my_custom_dir/ra-group2
total 8
drwxrwxrwx 4 1000 nogroup 4096 Jun 28 12:26 playground-bis
drwxrwxrwx 3 1000 nogroup 4096 Jun 28 12:26 playground-bis.tmp
Cleaning up project directory and file based variables
00:01
Job succeeded
What are the relevant issue numbers?
Closes #28867