This post covers deploying a GitLab Runner inside a Kubernetes cluster using the Kubernetes executor. Each CI job runs in its own pod, which is created on demand and deleted when the job finishes. Docker builds use Kaniko (rootless, no privileged containers), and job artifacts and dependencies are cached in MinIO.

Architecture

┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  GitLab CI Job  │────▶│  Runner Manager  │────▶│  Job Pod        │
│  (push to repo) │     │  (polycephala)   │     │  (auto-created) │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                                          │
                        ┌──────────────────┐              │
                        │  MinIO Cache     │◀─────────────┘
                        │  (shared deps)   │
                        └──────────────────┘

The runner manager pod runs continuously and polls GitLab for jobs. When a job is picked up, it creates a new pod in the gitlab-runner namespace, executes the job, and deletes the pod when complete.

Why In-Cluster?

Aspect         GitLab Shared Runners     In-Cluster Runner
Speed          Variable, shared queue    Dedicated, no queue
Network        External to cluster       Direct cluster access
Cost           Free-tier limits          Uses existing resources
Privacy        Code sent externally      Stays in cluster
Customization  Limited                   Full control

Prerequisites

  • Kubernetes cluster with Helm
  • MinIO for distributed cache (optional but recommended)
  • GitLab project with CI/CD enabled

Directory Structure

infrastructure/gitlab-runner/
├── .env.example              # Template for secrets
├── create-secret.sh          # Creates K8s secrets
├── gitlab-runner-values.yaml # Helm values
├── setup-gitlab-runner.sh    # Main setup script
└── README.md

Getting a Runner Token

  1. Go to GitLab project → Settings → CI/CD → Runners
  2. Click “New project runner”
  3. Select Linux, add tags (kubernetes, homelab)
  4. Click “Create runner”
  5. Copy the token (starts with glrt-)

For a group runner (covers all projects in a group), go to Group → Settings → CI/CD → Runners instead.
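
Alternatively, runners can be created through the runner creation API (GitLab 15.10 and later). A sketch using glab, assuming a personal access token with the create_runner scope; the project ID is a placeholder:

# Create a project runner via the API; the response includes the glrt- token
glab api -X POST "user/runners" \
    -f runner_type=project_type \
    -f project_id=<project-id> \
    -f tag_list=kubernetes,homelab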

Configuration Files

.env.example

# GitLab Runner Configuration
RUNNER_TOKEN=glrt-xxxxxxxxxxxxxxxxxxxx

# MinIO credentials for distributed cache
CACHE_S3_ACCESS_KEY=your-minio-access-key
CACHE_S3_SECRET_KEY=your-minio-secret-key

create-secret.sh

#!/bin/bash
set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"

if [ ! -f ".env" ]; then
    echo "Error: .env file not found"
    exit 1
fi

source .env

echo "Creating gitlab-runner namespace..."
kubectl create namespace gitlab-runner --dry-run=client -o yaml | kubectl apply -f -

echo "Creating runner token secret..."
kubectl create secret generic gitlab-runner-token \
    --namespace gitlab-runner \
    --from-literal=runner-registration-token="" \
    --from-literal=runner-token="$RUNNER_TOKEN" \
    --dry-run=client -o yaml | kubectl apply -f -

echo "Creating S3 cache credentials secret..."
kubectl create secret generic gitlab-runner-cache-credentials \
    --namespace gitlab-runner \
    --from-literal=accesskey="$CACHE_S3_ACCESS_KEY" \
    --from-literal=secretkey="$CACHE_S3_SECRET_KEY" \
    --dry-run=client -o yaml | kubectl apply -f -
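
After running the script, a quick check that both secrets exist where the chart expects them:

kubectl get secrets -n gitlab-runner gitlab-runner-token gitlab-runner-cache-credentials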

gitlab-runner-values.yaml

# GitLab Runner Helm Values
gitlabUrl: https://gitlab.com

concurrent: 4
checkInterval: 30

rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods", "pods/exec", "pods/attach", "secrets", "configmaps"]
      verbs: ["get", "list", "watch", "create", "delete", "update"]
    - apiGroups: [""]
      resources: ["pods/log"]
      verbs: ["get", "list"]
    - apiGroups: [""]
      resources: ["services"]
      verbs: ["get", "list", "watch"]

serviceAccount:
  create: true
  name: gitlab-runner

runners:
  secret: gitlab-runner-token

  config: |
    [[runners]]
      name = "k8s-homelab-runner"
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "gitlab-runner"
        image = "alpine:latest"
        privileged = false

        cpu_limit = "2"
        cpu_request = "500m"
        memory_limit = "2Gi"
        memory_request = "512Mi"

        helper_cpu_limit = "500m"
        helper_cpu_request = "100m"
        helper_memory_limit = "256Mi"
        helper_memory_request = "64Mi"

        pull_policy = ["if-not-present"]
        service_account = "gitlab-runner"

        [runners.kubernetes.pod_annotations]
          "linkerd.io/inject" = "enabled"

        [runners.kubernetes.node_selector]
          "kubernetes.io/hostname" = "polycephala"

      [runners.cache]
        Type = "s3"
        Shared = true
        [runners.cache.s3]
          ServerAddress = "minio.minio.svc.cluster.local:9000"
          BucketName = "gitlab-runner-cache"
          Insecure = true
          AccessKey = "$CACHE_S3_ACCESS_KEY"
          SecretKey = "$CACHE_S3_SECRET_KEY"

envVars:
  - name: CACHE_S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: gitlab-runner-cache-credentials
        key: accesskey
  - name: CACHE_S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: gitlab-runner-cache-credentials
        key: secretkey

resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi

nodeSelector:
  kubernetes.io/hostname: polycephala

podAnnotations:
  linkerd.io/inject: enabled

Key settings:

Setting                            Purpose
concurrent: 4                      Maximum parallel jobs
privileged: false                  No Docker-in-Docker (use Kaniko instead)
runners.kubernetes.node_selector   Pin jobs to a specific node
runners.cache.Type: s3             Use MinIO for the shared cache
linkerd.io/inject: enabled         Service mesh integration
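
To inspect the config the chart actually rendered (with the cache credentials substituted), read config.toml out of the manager pod. The path below is the chart's usual default, so treat it as an assumption for your chart version:

# Path assumed from the chart defaults; adjust if your chart version differs
kubectl exec -n gitlab-runner deploy/gitlab-runner -- \
    cat /home/gitlab-runner/.gitlab-runner/config.toml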

setup-gitlab-runner.sh

#!/bin/bash
set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"

echo "=== GitLab Runner Setup ==="

./create-secret.sh

echo "Enabling Linkerd injection on namespace..."
kubectl annotate namespace gitlab-runner linkerd.io/inject=enabled --overwrite

echo "Creating MinIO cache bucket..."
mc mb minio/gitlab-runner-cache --ignore-existing

echo "Adding GitLab Helm repository..."
helm repo add gitlab https://charts.gitlab.io
helm repo update

echo "Installing GitLab Runner..."
helm upgrade --install gitlab-runner gitlab/gitlab-runner \
    -n gitlab-runner \
    -f gitlab-runner-values.yaml \
    --wait

echo "Verifying deployment..."
kubectl wait --for=condition=ready pod -l "app=gitlab-runner" \
    -n gitlab-runner --timeout=120s

kubectl get pods -n gitlab-runner
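
The mc mb step assumes an mc alias named minio already points at the in-cluster MinIO. If it doesn't, a sketch of creating one; the endpoint is an assumption, so match it to your MinIO service:

# Endpoint and credentials assumed; adjust to your MinIO deployment
mc alias set minio http://minio.minio.svc.cluster.local:9000 \
    "$CACHE_S3_ACCESS_KEY" "$CACHE_S3_SECRET_KEY"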

Deployment

cd infrastructure/gitlab-runner

# Create .env from template
cp .env.example .env
# Edit .env with runner token and MinIO credentials

# Deploy
./setup-gitlab-runner.sh

Kaniko for Docker Builds

The Kubernetes executor doesn't support Docker-in-Docker without privileged mode. Kaniko builds container images without needing a Docker daemon, so build jobs can run in unprivileged pods.

.gitlab-ci.yml Example

image: python:3.13

stages:
  - test
  - build

pytest:
  stage: test
  tags:
    - kubernetes
  script:
    - pip install ".[test]"
    - pytest

docker-build:
  stage: build
  tags:
    - kubernetes
  image:
    name: gcr.io/kaniko-project/executor:v1.23.0-debug
    entrypoint: [""]
  before_script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
  script:
    - |
      DESTINATIONS="--destination ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
      DESTINATIONS="${DESTINATIONS} --destination ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"
      if [ "$CI_COMMIT_REF_NAME" = "main" ]; then
        DESTINATIONS="${DESTINATIONS} --destination ${CI_REGISTRY_IMAGE}:latest"
      fi

      /kaniko/executor \
        --context "${CI_PROJECT_DIR}" \
        --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
        ${DESTINATIONS} \
        --build-arg "BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" \
        --build-arg "GIT_COMMIT=${CI_COMMIT_SHA}" \
        --cache=true \
        --cache-repo="${CI_REGISTRY_IMAGE}/cache"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_COMMIT_TAG
    - if: $CI_COMMIT_BRANCH

Key points:

  • tags: [kubernetes] routes jobs to the in-cluster runner
  • Kaniko --cache=true stores layers in the registry for faster rebuilds
  • Multiple --destination flags push multiple tags in one build
  • The before_script creates Docker config for registry authentication
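
The auth value is just user:password base64-encoded. If registry login misbehaves, the string the before_script writes can be reproduced locally with dummy credentials:

# Dummy credentials for illustration; CI uses CI_REGISTRY_USER/CI_REGISTRY_PASSWORD
AUTH=$(printf "%s:%s" "gitlab-ci-token" "s3cret" | base64 | tr -d '\n')
echo "$AUTH" | base64 -d    # prints gitlab-ci-token:s3cret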

Enabling Runner for Multiple Projects

The runner can be shared across projects using the GitLab API:

# Get runner ID from the project where it was created
RUNNER_ID=$(glab api "projects/<project-id>/runners?type=project_type" | jq -r '.[0].id')

# Enable for another project
glab api -X POST "projects/<other-project-id>/runners" -f runner_id=$RUNNER_ID
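
To confirm the association, list the runners the second project can now use:

# The new runner should appear in this list
glab api "projects/<other-project-id>/runners"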

Verification

Check Runner Status

kubectl get pods -n gitlab-runner
NAME                             READY   STATUS    RESTARTS   AGE
gitlab-runner-68cddc9b68-84r5l   2/2     Running   0          20m

Watch Job Pods

During a CI run:

kubectl get pods -n gitlab-runner -w
NAME                                                      READY   STATUS    AGE
gitlab-runner-68cddc9b68-84r5l                            2/2     Running   20m
runner-8cvaznryh-project-75488734-concurrent-0-xkxlldd4   2/2     Running   5s

Check Runner Logs

kubectl logs -n gitlab-runner -l app=gitlab-runner -c gitlab-runner --tail=20

Look for:

Job succeeded    duration_s=202.96  project=75488734  job-status=success

Gotchas

1. RBAC for pods/attach

The initial RBAC rules didn’t include pods/attach, causing job failures:

cannot create resource "pods/attach" in API group ""

The fix was adding pods/attach to the RBAC rules.
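
The service account's permissions can be checked directly; the account name comes from the values file above:

kubectl auth can-i create pods --subresource=attach \
    --as=system:serviceaccount:gitlab-runner:gitlab-runner \
    -n gitlab-runner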

2. Single Kaniko Run

Running Kaniko twice in the same job (once for the SHA tag, once for latest) fails because the first run mutates the container filesystem, including the build context. Push all tags with multiple --destination flags in a single run instead.

3. Runner Tags

Jobs must have tags: [kubernetes] to be picked up by this runner. Without tags, jobs go to GitLab’s shared runners.

4. MinIO Cache Bucket

The cache bucket must exist before the runner can use it. The setup script creates it with mc mb.
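
A quick check that the bucket exists; cache archives appear here after the first job that uses it:

mc ls minio/gitlab-runner-cache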

Performance

Stage                   Duration
pytest (Python tests)   ~35 seconds
docker-build (Kaniko)   ~3.5 minutes
Total pipeline          ~4 minutes

Subsequent builds are faster due to Kaniko layer caching.
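
If crane (from go-containerregistry) is available, the cached layer tags can be listed; the repository path mirrors the --cache-repo flag from the CI job, with placeholder group/project:

# Placeholders: substitute your registry path (mirrors --cache-repo)
crane ls registry.gitlab.com/<group>/<project>/cache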

Conclusion

The in-cluster GitLab Runner provides:

  • Dedicated CI resources without shared runner queues
  • Direct cluster network access for integration tests
  • Rootless Docker builds via Kaniko
  • Shared cache across jobs via MinIO
  • Full control over resource limits and node placement

The runner can be enabled for additional projects, making it a single deployment that serves every project in the GitLab group.