Fargate driver unable to SSH into alpine container
## Investigation and Development Tasks
**Status update 2022-01-10:** Work on the investigation and development task list has not started, as this issue needs to be assigned as part of our Runner group iteration planning process.
- [ ] Set up environment to test and reproduce the bug
- [ ] Testing and bug analysis
- [ ] Write up solution proposal
- [ ] Create MR for solution
## Bug Summary
When using an Alpine-based container, the Fargate runner/driver appears unable to authenticate to the task container over SSH when the task is running Alpine Linux. The same configuration works fine with `debian:buster`, but that results in a larger-than-desired image.
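One commonly reported cause of key-only SSH authentication failing on Alpine images is that the `root` account ships locked in `/etc/shadow` (password field `!`), and `sshd` refuses logins for locked accounts; the entrypoint script below already works around this by assigning `root` a random password. A minimal sketch of that check (the `is_locked` helper and the sample shadow entries are hypothetical, for illustration only):

```shell
#!/bin/sh
# Inspect the password field of a shadow(5)-style entry.
# A field of "!", "!!", "*", or empty means the account cannot
# authenticate with a password, and sshd may reject it outright.
is_locked() {
  hash=${1#*:}       # strip the user name
  hash=${hash%%:*}   # keep only the password field
  case $hash in
    '!' | '!!' | '*' | '') echo locked ;;
    *) echo ok ;;
  esac
}

# Sample entries (hypothetical hashes):
is_locked 'root:!:19000:0:::::'                  # prints "locked"
is_locked 'root:$6$saltsalt$hash:19000:0:::::'   # prints "ok"
```

If the root-password workaround turns out not to be the culprit, comparing `sshd -ddd` output between the Alpine and `debian:buster` tasks should narrow down where authentication diverges.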
### `Dockerfile`

```dockerfile
FROM --platform=linux/x86_64 alpine

# ---------------------------------------------------------------------
# Install https://github.com/krallin/tini - a very small 'init' process
# that helps process signals sent to the container properly.
# ---------------------------------------------------------------------
RUN apk update \
    && apk add --no-cache tini

# --------------------------------------------------------------------------
# Install and configure sshd.
# https://docs.docker.com/engine/examples/running_ssh_service for reference.
# --------------------------------------------------------------------------
RUN apk add --no-cache openssh \
    && sed -i 's/#PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config \
    && ssh-keygen -A
EXPOSE 22

# ----------------------------------------
# Install GitLab CI required dependencies.
# ----------------------------------------
ARG GITLAB_RUNNER_VERSION=latest
RUN apk add --no-cache curl \
    && curl -Lo /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/${GITLAB_RUNNER_VERSION}/binaries/gitlab-runner-linux-amd64 \
    && chmod +x /usr/local/bin/gitlab-runner \
    # Test that the downloaded file was indeed a binary and not, for example,
    # an HTML page representing S3's internal server error message or something
    # like that.
    && gitlab-runner --version

RUN apk add --no-cache git git-lfs \
    && git lfs install --skip-repo

# -------------------------------------------------------------------------------------
# Execute a startup script.
# https://success.docker.com/article/use-a-script-to-initialize-stateful-container-data
# for reference.
# -------------------------------------------------------------------------------------
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh

ENTRYPOINT ["tini", "--", "/usr/local/bin/docker-entrypoint.sh"]
```
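Because `GITLAB_RUNNER_VERSION` defaults to `latest`, the image is not reproducible between builds; pinning a release tag makes the downloaded binary deterministic, which also helps when bisecting a bug like this one. A quick sketch of how the build arg maps onto the S3 download URL used above (the version value is just an example):

```shell
#!/bin/sh
# Reconstruct the download URL the Dockerfile's curl step would use
# for a pinned runner release instead of "latest".
GITLAB_RUNNER_VERSION="v14.6.0"   # example tag; pin to whatever you need
URL="https://gitlab-runner-downloads.s3.amazonaws.com/${GITLAB_RUNNER_VERSION}/binaries/gitlab-runner-linux-amd64"
echo "$URL"
```

Passing `--build-arg GITLAB_RUNNER_VERSION=v14.6.0` to `docker build` then fetches that exact binary, and the `gitlab-runner --version` check in the Dockerfile confirms the download is a real executable rather than an S3 error page.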
### `docker-entrypoint.sh`

```sh
#!/bin/sh

storeAWSTemporarySecurityCredentials() {
  # This function is required to allow the Fargate task to use the instance
  # role credentials; useful when you want your Fargate task to be able to
  # deploy resources in other accounts.

  # Skip AWS credentials processing if their relative URI is not present.
  [ -z "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" ] && return

  # Create a folder to store AWS settings if it does not exist.
  USER_AWS_SETTINGS_FOLDER=~/.aws
  [ ! -d "$USER_AWS_SETTINGS_FOLDER" ] && mkdir -p "$USER_AWS_SETTINGS_FOLDER"

  # Query the unique security credentials generated for the task.
  # https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
  AWS_CREDENTIALS=$(curl "169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}")

  # Read the `AccessKeyId`, `SecretAccessKey`, and `Token` values.
  AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | jq '.AccessKeyId' --raw-output)
  AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | jq '.SecretAccessKey' --raw-output)
  AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | jq '.Token' --raw-output)

  # Create a file to store the temporary credentials on behalf of the user.
  USER_AWS_CREDENTIALS_FILE=${USER_AWS_SETTINGS_FOLDER}/credentials
  touch "$USER_AWS_CREDENTIALS_FILE"

  # Set the temporary credentials to the default AWS profile.
  #
  # S3 note: if you want to sign your requests using temporary security
  # credentials, the corresponding security token must be included.
  # https://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#UsingTemporarySecurityCredentials
  echo '[default]' > "$USER_AWS_CREDENTIALS_FILE"
  echo "aws_access_key_id=${AWS_ACCESS_KEY_ID}" >> "$USER_AWS_CREDENTIALS_FILE"
  echo "aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}" >> "$USER_AWS_CREDENTIALS_FILE"
  echo "aws_session_token=${AWS_SESSION_TOKEN}" >> "$USER_AWS_CREDENTIALS_FILE"
}

setUpSSH() {
  if [ -z "$SSH_PUBLIC_KEY" ]; then
    echo "Need your SSH public key as the SSH_PUBLIC_KEY env variable."
    exit 1
  fi

  # ALPINE ONLY!
  # Assign the root user a random, strong password.
  # This seems to be an Alpine requirement to allow root access even when
  # using only public key authentication.
  ROOT_PASSWORD=$(tr -dc 'A-Za-z0-9[].' < /dev/urandom | head -c64; echo)
  echo "root:${ROOT_PASSWORD}" | chpasswd

  touch test.txt

  # Create a folder to store the user's SSH keys if it does not exist.
  USER_SSH_KEYS_FOLDER=~/.ssh
  [ ! -d "$USER_SSH_KEYS_FOLDER" ] && mkdir -p "$USER_SSH_KEYS_FOLDER"

  # Copy contents from the `SSH_PUBLIC_KEY` environment variable
  # to the `$USER_SSH_KEYS_FOLDER/authorized_keys` file.
  # The environment variable must be set when the container starts.
  echo "$SSH_PUBLIC_KEY" > "${USER_SSH_KEYS_FOLDER}/authorized_keys"

  # Clear the `SSH_PUBLIC_KEY` environment variable.
  unset SSH_PUBLIC_KEY

  # Start the SSH daemon.
  /usr/sbin/sshd -D -ddd
}

storeAWSTemporarySecurityCredentials
setUpSSH
```
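The password-generation line in `setUpSSH` is worth a closer look: `tr -dc` deletes every byte read from `/dev/urandom` that falls outside the allowed character set, and `head -c64` keeps the first 64 survivors. A standalone sketch of that pipeline (the character class is quoted here so the shell cannot glob-expand `[].` against files in the current directory):

```shell
#!/bin/sh
# Generate a 64-character password from the set [A-Za-z0-9[].],
# as the entrypoint script does for the Alpine root account.
ROOT_PASSWORD=$(tr -dc 'A-Za-z0-9[].' < /dev/urandom | head -c64; echo)
echo "${#ROOT_PASSWORD}"    # prints 64
```

The trailing `echo` inside the command substitution only adds a newline (which the substitution then strips), so the stored value is exactly 64 characters long regardless of how much `tr` had to discard.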
Edited by Darren Eastman