helm upgrade ignores serviceAccount, stalls on OpenShift
Summary
Running `helm upgrade -f values.yml <releaseName> gitlab/gitlab` creates the upgrade-check job. That job does not use the serviceAccount specified in values.yml, so on OpenShift it never creates any pods and the upgrade stalls. Other jobs in the chart similarly use their own serviceAccounts, but those at least have workarounds for this issue. The upgrade-check job does not appear to use any serviceAccount at all, so the release cannot be upgraded.
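The missing field can be confirmed by rendering the release locally and inspecting the generated Job. A minimal check along these lines (the grep pattern is just the job name taken from the pod prefix in the error under "Relevant logs"; `rendered.yaml` is a scratch file):

```shell
# Render the same release locally and dump the manifests.
helm template <releaseName> gitlab/gitlab -f values.yml > rendered.yaml

# Inspect the upgrade-check Job: its pod spec contains no serviceAccountName.
grep -n -A 30 'gitlab-upgrade-check' rendered.yaml | grep -i 'serviceAccount'
```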
Steps to reproduce
```shell
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm install -f values.yml <releaseName> gitlab/gitlab
helm upgrade -f values.yml <releaseName> gitlab/gitlab
```
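On OpenShift the stall shows up as an upgrade-check Job that never gets a pod. A rough way to observe it (namespace is a placeholder; the job name is taken from the pod prefix in the error under "Relevant logs"):

```shell
# The hook Job exists but never creates a pod, so helm upgrade hangs on it.
oc get jobs -n <namespace>
oc describe job <releaseName>-gitlab-upgrade-check -n <namespace>

# The FailedCreate event carries the SCC validation error quoted under "Relevant logs".
oc get events -n <namespace> --field-selector reason=FailedCreate
```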
Configuration used
```yaml
# Default values for gitlab/gitlab chart
## NOTICE
# Due to the scope and complexity of this chart, all possible values are
# not documented in this file. Extensive documentation for these values
# and more can be found at https://gitlab.com/gitlab-org/charts/gitlab/

## The global properties are used to configure multiple charts at once.
## Extended documentation at doc/charts/globals.md
global:
  ## doc/installation/deployment.md#deploy-the-community-edition
  edition: ce

  ## doc/charts/globals.md#configure-host-settings
  hosts:
    domain: domain.com
    https: false
    ssh: ~
    gitlab:
      name: git.domain.com
      https: false
    registry:
      name: git-registry.domain.com
      https: false
    minio:
      name: git-minio.domain.com
      https: false
    smartcard:
      name: git-smartcard.domain.com
      https: false
    kas:
      name: git-kas.domain.com
      https: false

  ## doc/charts/globals.md#configure-ingress-settings
  ingress:
    configureCertmanager: false
    enabled: false

  ## doc/charts/globals.md#configure-gitaly-settings
  gitaly:
    enabled: true
  praefect:
    enabled: true

  ## doc/charts/globals.md#configure-appconfig-settings
  ## Rails based portions of this chart share many settings
  appConfig:
    ## doc/charts/globals.md#lfs-artifacts-uploads-packages-external-mr-diffs
    object_store:
      enabled: true
  ## End of global.appConfig

  ## doc/charts/globals.md#configure-registry-settings
  registry:
    bucket: registry

  pages:
    enabled: true
    accessControl: true
    path: /
    host: pages.domain.com
    port: 80
    https: false
    externalHttp:
      - domain.com
      - gitlab.domain.com
    externalHttps:
      - domain.com
      - gitlab.domain.com
    artifactsServer: true
    objectStore:
      enabled: true
      bucket: "pages"

  ## Timezone for containers.
  time_zone: America/New_York

  ## docs/charts/globals.md#service-accounts
  serviceAccount:
    enabled: true
    create: false
    annotations: {}
    ## Name to be used for serviceAccount, otherwise defaults to chart fullname
    name: <releaseName>
## End of global

## << This has no serviceAccount property.
upgradeCheck:
  enabled: true
  image: {}
  # repository:
  # tag:
  securityContext:
    # in alpine/debian/busybox based images, this is `nobody:nogroup`
    runAsUser: 65534
    fsGroup: 65534
  tolerations: []
  resources:
    requests:
      cpu: 50m

## Installation & configuration of jetstack/cert-manager
## See requirements.yaml for current version
certmanager:
  install: false

## doc/charts/nginx/index.md
## doc/architecture/decisions.md#nginx-ingress
## Installation & configuration of charts/nginx
nginx-ingress:
  enabled: false

## Configuration of Redis
## doc/architecture/decisions.md#redis
## doc/charts/redis
redis:
  install: true
  #existingSecret: gitlab-redis-secret
  #existingSecretKey: redis-password
  #usePasswordFile: true
  cluster:
    enabled: true
```
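The only workarounds I can see without a chart change are sketched below, not verified as written: either push the upgrade-check UID/GID into the namespace's allowed range (1000670000 is simply the low end of the range reported in the error under "Relevant logs"), or disable the check entirely. Neither makes the job use the configured serviceAccount.

```shell
# Workaround A (sketch): run the upgrade-check pod as a UID/GID inside the
# range this OpenShift namespace allows, so the restricted SCC accepts it.
helm upgrade -f values.yml <releaseName> gitlab/gitlab \
  --set upgradeCheck.securityContext.runAsUser=1000670000 \
  --set upgradeCheck.securityContext.fsGroup=1000670000

# Workaround B (sketch): skip the upgrade check altogether.
helm upgrade -f values.yml <releaseName> gitlab/gitlab \
  --set upgradeCheck.enabled=false
```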
Current behavior
The upgrade stalls: the upgrade-check job never creates a pod.
Expected behavior
The upgrade-check job runs under the configured serviceAccount and the upgrade proceeds.
Versions
- Platform:
  - Self-hosted: OpenShift
- OpenShift version (`kubectl version`):
  - Client: 4.6.6
  - Server: 4.7.2
  - Kubernetes version: v1.20.0+5fbfd19
- Helm (`helm version`):
  - Client: v3.4.1
Relevant logs
I have specified a serviceAccount that has both the `privileged` and `anyuid` SCCs applied, in order to avoid the following error:
```
Error creating: pods "<releaseName>-gitlab-upgrade-check-" is forbidden: unable to validate against any security context constraint: [provider restricted: .spec.securityContext.fsGroup: Invalid value: []int64{65534}: 65534 is not an allowed group spec.containers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000670000, 1000679999]]
```
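For reference, the SCCs were granted to that serviceAccount roughly like this (namespace is a placeholder):

```shell
# Grant the SCCs to the serviceAccount named in values.yml so the other
# chart jobs pass SCC validation; the upgrade-check Job never uses it.
oc adm policy add-scc-to-user anyuid -z <releaseName> -n <namespace>
oc adm policy add-scc-to-user privileged -z <releaseName> -n <namespace>
```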