
Autoscaling of Gitlab Runner with spot instances on EKS

Our client required an in-house GitLab Runner instead of the shared runners. A self-managed runner offers more flexibility and better security for running heavy or multiple parallel jobs efficiently. Consider, for example, a job that talks to SonarQube for code quality analysis, where SonarQube is hosted in your internal environment: if the runner is also in-house, you don't need to expose SonarQube publicly.

Prerequisites for gitlab-runner setup

  • EKS Cluster
  • kubectl
  • Helm and openssl

Installation of gitlab-runner

Edit values.yaml for required configuration

You can use a configuration template to configure the runner and any of the fields it supports:

runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        image = "ubuntu:16.04"
        namespace = "{{.Release.Namespace}}"

All configuration options supported by the Kubernetes executor, and the fields they accept, are listed in the Kubernetes executor docs.

We can also set memory and CPU limits for the helper container that runs alongside our job:

config: |
  [[runners]]
    [runners.kubernetes]
      namespace = "{{.Release.Namespace}}"
      image = "ubuntu:16.04"
      privileged = true
      helper_cpu_limit = "2"
      helper_memory_limit = "2Gi"

You can also enable the privileged option in the configuration template, which lets you run Docker-in-Docker jobs. For this, specify the executor under the [[runners]] section:

config: |
  [[runners]]
    executor = "kubernetes"
    environment = ["DOCKER_HOST=tcp://localhost:2376"]
    [runners.kubernetes]
      privileged = true
      [[runners.kubernetes.volumes.empty_dir]]
        name = "docker-certs"
        mount_path = "/certs/client"
        medium = "Memory"
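As a rough illustration of how such a runner would be used, here is a minimal .gitlab-ci.yml job that builds an image with Docker-in-Docker. The job name, Docker image tags, and the my-app image name are placeholders; the TLS variables match the /certs/client volume mounted above.

# Minimal sketch of a Docker-in-Docker job for this runner setup
build-image:
  image: docker:24.0
  services:
    - docker:24.0-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "/certs/client"
  script:
    # "my-app" is a placeholder image name for illustration only
    - docker build -t my-app:latest .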

Install the runner

helm repo add gitlab https://charts.gitlab.io

helm install --namespace gitlab-runner gitlab-runner gitlab/gitlab-runner --create-namespace -f values.yaml

Register the runner

helm upgrade gitlab-runner -f values.yaml --set gitlabUrl=https://gitlab.com,runnerRegistrationToken=<xxxxxxxx> gitlab/gitlab-runner --namespace=gitlab-runner

To get the registration token, go to Settings -> CI/CD -> Runners in GitLab.
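To confirm the runner registered and its pod is healthy, you can check the gitlab-runner namespace (the deployment name below assumes it follows the Helm release name used above):

kubectl get pods -n gitlab-runner
kubectl logs deployment/gitlab-runner -n gitlab-runner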

Create a Role and role binding for the jobs to run on K8s

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner
  namespace: gitlab-runner
rules:
  - apiGroups: [""]
    resources: ["pods", "secrets", "configmaps", "pods/exec", "pods/attach"]
    verbs: ["get", "list", "watch", "create", "patch", "delete", "update"]
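Save the manifest to a file (the name gitlab-runner-role.yaml below is just an example) and apply it:

kubectl apply -f gitlab-runner-role.yaml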

Create Role binding

kubectl create rolebinding --namespace=gitlab-runner gitlab-runner-binding --role=gitlab-runner --serviceaccount=gitlab-runner:default

Your GitLab Runner is now installed on your cluster.

For further configuration, you can keep adding values under the runners.config section.

Let us see how we can do caching in the runner with S3

To use the cache with your configuration template, set the following variables in values.yaml.

runners:
  config: |
    [[runners]]
      executor = "kubernetes"
      environment = ["DOCKER_HOST=tcp://localhost:2376"]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "ubuntu:16.04"
        privileged = true
        helper_cpu_limit = "2"
        helper_memory_limit = "2Gi"
        [[runners.kubernetes.volumes.empty_dir]]
          name = "docker-certs"
          mount_path = "/certs/client"
          medium = "Memory"
      [runners.cache]
        Type = "s3"
        Path = "cache"
        Shared = true
        [runners.cache.s3]
          ServerAddress = "s3.amazonaws.com"
          AccessKey = " "
          SecretKey = " "
          BucketName = "abcd"
          BucketLocation = "REGION"

Instead of hard-coding them, you can also pass the access key and secret key through a Kubernetes secret:

kubectl create secret generic s3runner-access -n gitlab-runner \
  --from-literal=accesskey="YourAccessKey" \
  --from-literal=secretkey="YourSecretKey"

Under the runners.cache section in values.yaml, add the secret name:

cache:
  secretName: s3runner-access
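With the cache bucket configured, jobs opt in through the cache keyword in .gitlab-ci.yml. A minimal sketch follows; the cache key and the node_modules/ path are illustrative only:

# Example job using the S3-backed runner cache
build:
  stage: build
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - node_modules/   # placeholder cached path
  script:
    - npm ci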

Now let us see autoscaling in EKS with spot instances for runners.

Solution approach

We need two node groups:

  • One on-demand node group with a desired capacity of one, where the runner and the Cluster Autoscaler run continuously, and
  • One spot node group with a desired capacity of zero and a maximum size set as per our requirement. We tag this node group so that the Cluster Autoscaler scales the spot instances up and down according to the jobs (a node-selector sketch for steering job pods onto this group follows below).
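To make the CI job pods land on the spot node group, one option is a node_selector in the runner's Kubernetes section. This is a minimal sketch; the label spot = "true" is an assumed label you would attach to the spot node group yourself:

config: |
  [[runners]]
    [runners.kubernetes]
      # Schedule CI job pods onto nodes carrying the (assumed) label spot=true
      [runners.kubernetes.node_selector]
        spot = "true"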

After the EKS cluster is created, create the node groups and tag the spot node group with the keys and values below so that the Cluster Autoscaler can discover it:

k8s.io/cluster-autoscaler/<cluster-name>: owned

k8s.io/cluster-autoscaler/enabled: true
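As a sketch, assuming the cluster is managed with eksctl, the two node groups could look like the fragment below. The node group names, instance types, sizes, and the spot label are illustrative and should be adjusted to your environment:

# Illustrative eksctl ClusterConfig fragment (names, types, and sizes are examples)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <cluster-name>
  region: REGION
nodeGroups:
  - name: on-demand-runner          # runner and Cluster Autoscaler run here
    instanceType: m5.large
    desiredCapacity: 1
    minSize: 1
    maxSize: 1
  - name: spot-jobs                 # scaled up/down by the Cluster Autoscaler
    instancesDistribution:
      instanceTypes: ["m5.large", "m5a.large"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
    desiredCapacity: 0
    minSize: 0
    maxSize: 5
    labels:
      spot: "true"                  # matches the node_selector sketch above
    tags:
      k8s.io/cluster-autoscaler/<cluster-name>: owned
      k8s.io/cluster-autoscaler/enabled: "true"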

Create an IAM policy for the Cluster Autoscaler's service account so that the CA pod can interact with the Auto Scaling groups.

cat > k8s-asg-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
EOF

aws iam create-policy --policy-name k8s-asg-policy --policy-document file://k8s-asg-policy.json
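One way to attach this policy to the Cluster Autoscaler's service account is via eksctl, assuming an IAM OIDC provider is enabled for the cluster; the cluster name and account ID below are placeholders:

# Assumes eksctl and an IAM OIDC provider; replace <cluster-name> and <account-id>
eksctl create iamserviceaccount \
  --cluster <cluster-name> \
  --namespace kube-system \
  --name cluster-autoscaler \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/k8s-asg-policy \
  --override-existing-serviceaccounts \
  --approve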

Download the Cluster Autoscaler deployment manifest:

curl -o cluster-autoscaler-autodiscover.yaml https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

In the downloaded YAML, edit the Deployment (kind: Deployment) under its command section: set your cluster name in the node-group auto-discovery tag and check that it matches the tag given while creating the node group.

You can also pass other flags as required. By default, the Cluster Autoscaler waits 10 minutes between scale-down operations. You can adjust this using the --scale-down-delay-after-add, --scale-down-delay-after-delete, and --scale-down-delay-after-failure flags, e.g. --scale-down-delay-after-add=5m to decrease the scale-down delay to 5 minutes after a node has been added.
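For reference, the relevant part of the cluster-autoscaler container command might look like this after editing; the cluster name and delay value are examples:

# Excerpt of the cluster-autoscaler container command (illustrative values)
command:
  - ./cluster-autoscaler
  - --v=4
  - --stderrthreshold=info
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  - --expander=least-waste
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<cluster-name>
  - --scale-down-delay-after-add=5m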

kubectl apply -f ./cluster-autoscaler-autodiscover.yaml

Now, whenever the runner picks up a job, the Cluster Autoscaler scales the spot node group up as required, and scales it back down to zero nodes when there are no jobs.
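To watch the scaling decisions as jobs come and go, you can tail the Cluster Autoscaler logs (the deployment name below is the one used in the example manifest) and watch the nodes:

kubectl -n kube-system logs -f deployment/cluster-autoscaler
kubectl get nodes -w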

