Building a Kubernetes Development Cloud with Raspberry Pi 4, Synology NAS and OpenWRT – Part 7 – Installing Gitlab and the Gitlab Runner

This is the seventh article in a series detailing the building of a Raspberry Pi Kubernetes development cluster. In the first six parts we covered getting the cluster ready to run applications. In this article we will deploy Gitlab and the Gitlab Runner. We will also install PostgreSQL separately, to reduce the footprint of the Gitlab pod and make it easier to run on the cluster.

PostgreSQL

Because Gitlab will use Postgres to store its data we first need to install Postgres. Let's start by creating a namespace. Create a gitlab-namespace.yml file in a new directory with the following content

apiVersion: v1
kind: Namespace
metadata:
  name: gitlab-managed-apps
  labels:
    name: gitlab-managed-apps

Apply the file to the cluster

$ kubectl apply -f gitlab-namespace.yml

Next let's set up a persistent volume for Postgres, using the Synology CSI driver to automatically provision a LUN on the NAS. Create postgres-pvc.yml in a subdirectory named "postgres"; the rest of the Postgres manifests that follow will go in the same directory

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  namespace: gitlab-managed-apps
  labels:
    app: postgres
spec:
  storageClassName: synology-iscsi-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Gi

We can take regular snapshots of this volume too using the automated volume snapshotter. Add the file postgres-snapshot.yml

apiVersion: k8s.ryanorth.io/v1beta1
kind: ScheduledVolumeSnapshot
metadata:
  name: postgres-scheduled-snapshot
  namespace: gitlab-managed-apps
spec:
  snapshotClassName: synology-snapshotclass
  persistentVolumeClaimName: postgres-pv-claim
  snapshotFrequency: 1d
  snapshotRetention: 3d
  snapshotLabels:
    firstLabel: postgresSnapshot

Next we will add a config map to store the database connection details. postgres-configmap.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: gitlab-managed-apps
  labels:
    app: postgres
data:
  POSTGRES_DB: gitlabhq_production
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123

Create a service to expose the postgres deployment. postgres-service.yml

apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: gitlab-managed-apps
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
   - port: 5432
  selector:
   app: postgres

And finally create the deployment. postgres-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: gitlab-managed-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
        - name: postgres-config
          configMap:
            name: postgres-config

With all the manifests in the directory postgres, you can apply them all at once like this

$ kubectl apply -f postgres/
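
It is worth checking that the claim was bound (meaning the CSI driver provisioned a LUN on the NAS), that the pod came up, and that you can connect with the credentials from the ConfigMap. A quick sanity check (psql connects over the local socket inside the container):

$ kubectl get pvc -n gitlab-managed-apps
$ kubectl get pods -n gitlab-managed-apps -l app=postgres
$ kubectl exec -it deploy/postgres -n gitlab-managed-apps -- psql -U postgresadmin -d gitlabhq_production -c '\l'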

Gitlab

Now that we have a Postgres deployment we can deploy Gitlab to the cluster. Be warned, it takes quite a while for Gitlab to get up and running. Be patient.

Before we start, we are going to need a certificate that is generated by a trusted root certificate authority. If you have followed the guides so far your DNS resolution, network ingress and the cert manager deployment are all in place for this to work.

This certificate will need to be valid for two domains: first the domain on which Gitlab itself will respond, and second a domain name for the container registry service.
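
Before creating it, it is worth a quick check that the cert-manager ClusterIssuer from the earlier article is still in place; assuming it is named letsencrypt-prod, as referenced below:

$ kubectl get clusterissuer letsencrypt-prod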

Create a gitlab-certificate.yml file in a new gitlab directory

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: gitlab-radicalgeek
  namespace: gitlab-managed-apps
spec:
  secretName: gitlab-radicalgeek-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  renewBefore: 360h
  privateKey:
    rotationPolicy: Always
  commonName: gitlab.radicalgeek.co.uk
  dnsNames:
    - gitlab.radicalgeek.co.uk
    - reg.gitlab.radicalgeek.co.uk

Apply this file to the cluster, and wait for the certificate to generate

$ kubectl apply -f gitlab/gitlab-certificate.yml
$ kubectl get certificates -n gitlab-managed-apps
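
Rather than polling, you can also wait on the certificate's Ready condition, which cert-manager sets once the TLS secret has been issued:

$ kubectl wait --for=condition=Ready certificate/gitlab-radicalgeek -n gitlab-managed-apps --timeout=10m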

Once the certificate is ready we can create the rest of the manifests in the gitlab directory

We will start with the persistent volume claims again. Gitlab needs two, one for its configuration and one for its data. Create gitlab-storage.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-config
  namespace: gitlab-managed-apps
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: synology-iscsi-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-data
  namespace: gitlab-managed-apps
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: synology-iscsi-storage

gitlab-snapshot.yml

apiVersion: k8s.ryanorth.io/v1beta1
kind: ScheduledVolumeSnapshot
metadata:
  name: gitlab-scheduled-snapshot
  namespace: gitlab-managed-apps
spec:
  snapshotClassName: synology-snapshotclass
  persistentVolumeClaimName: gitlab-data
  snapshotFrequency: 1d
  snapshotRetention: 3d
  snapshotLabels:
    firstLabel: gitlabSnapshot

Next create the gitlab-deployment.yml file (note the use of the arm64 image)

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: gitlab-managed-apps
  name: gitlab
  labels:
    app: gitlab
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      containers:
        - name: gitlab
          image: yrzr/gitlab-ce-arm64v8
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: config-volume
              mountPath: /etc/gitlab
            - name: gitlab-configmap-volume
              mountPath: /etc/gitlab/gitlab.rb
              subPath: gitlab.rb
            - name: data-volume
              mountPath: /var/opt/gitlab
          ports:
            - name: http-web
              containerPort: 80
            - name: tcp-ssh
              containerPort: 22
            - name: http-reg
              containerPort: 5050
      volumes:
        - name: gitlab-configmap-volume
          configMap:
            name: gitlab-config
        - name: config-volume
          persistentVolumeClaim:
            claimName: gitlab-config
        - name: data-volume
          persistentVolumeClaim:
            claimName: gitlab-data

gitlab-service.yml

apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: gitlab-managed-apps
  labels:
    app: gitlab
spec:
  selector:
    app: gitlab
  ports:
    - name: http-web
      protocol: "TCP"
      port: 80
      targetPort: 80
    - name: http-reg
      protocol: "TCP"
      port: 5050
      targetPort: 5050
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab-ssh
  namespace: gitlab-managed-apps
  labels:
    app: gitlab-ssh
spec:
  selector:
    app: gitlab
  ports:
    - name: tcp-git
      protocol: "TCP"
      targetPort: 22
      port: 32222
      nodePort: 32222
  type: NodePort

Next we need to create the Gitlab configuration file as a ConfigMap (for example gitlab-configmap.yml). In this file be sure to use the username and password you set on your Postgres deployment, and ensure you are using the same domain names you registered in your certificate.

apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-config
  namespace: gitlab-managed-apps
data:
  gitlab.rb: |-
    gitlab_rails['gitlab_shell_ssh_port'] = 32222
    puma['worker_processes'] = 2
    postgresql['enable'] = false
    gitlab_rails['db_adapter'] = 'postgresql'
    gitlab_rails['db_encoding'] = 'utf8'
    gitlab_rails['db_host'] = 'postgres'
    gitlab_rails['db_port'] = 5432
    gitlab_rails['db_username'] = 'postgresadmin'
    gitlab_rails['db_password'] = 'postgresppassword'
    prometheus_monitoring['enable'] = false
    sidekiq['concurrency'] = 9
    gitlab_kas['enable'] = true
    prometheus['monitor_kubernetes'] = false
    gitlab_rails['initial_root_password'] = "PASSWORD"
    external_url 'https://gitlab.radicalgeek.co.uk'
    nginx['listen_port'] = 80
    nginx['listen_https'] = false
    nginx['proxy_set_headers'] = {
      'X-Forwarded-Proto' => 'https',
      'X-Forwarded-Ssl' => 'on'
    }
    registry_external_url 'https://reg.gitlab.radicalgeek.co.uk'
    gitlab_rails['registry_enabled'] = true
    registry_nginx['listen_port'] = 5050
    registry_nginx['listen_https'] = false
    registry_nginx['proxy_set_headers'] = {
      'X-Forwarded-Proto' => 'https',
      'X-Forwarded-Ssl' => 'on'
    }
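
The gitlab_shell_ssh_port setting here matches the NodePort we exposed in gitlab-service.yml, so once Gitlab is up repositories can be cloned over SSH on that port. For example (the group/project path is just a placeholder, and port 32222 must be reachable on a cluster node from wherever you clone):

$ git clone ssh://git@gitlab.radicalgeek.co.uk:32222/mygroup/myproject.git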

Finally, create the ingress to expose Gitlab and the container registry. gitlab-ingress.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab
  namespace: gitlab-managed-apps
  labels:
    app: gitlab
  annotations:
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
    - host: gitlab.radicalgeek.co.uk
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitlab
                port:
                  number: 80
    - host: reg.gitlab.radicalgeek.co.uk
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitlab
                port:
                  number: 5050
  tls:
    - hosts:
        - reg.gitlab.radicalgeek.co.uk
        - gitlab.radicalgeek.co.uk
      secretName: gitlab-radicalgeek-tls

You can now deploy Gitlab to your cluster

$ kubectl apply -f gitlab/

Once the deployment reports ready you will still have to wait some time for Gitlab itself to become available. When it does, it will be publicly accessible over HTTPS, so you should navigate to it right away, change the admin password and begin setting it up to your taste.
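
You can keep an eye on progress while Gitlab boots by watching the pods and tailing the logs:

$ kubectl get pods -n gitlab-managed-apps -w
$ kubectl logs -f deployment/gitlab -n gitlab-managed-apps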

Gitlab Runner

Now that we have Gitlab installed on the cluster, we can start to store the manifests we have created so far in source control repositories. This gives us somewhere safe to keep the code that deploys the cluster. But we can also take it one step further and use Gitlab's built-in CI/CD tooling to deploy the manifests, so that any time we commit a change to them, Gitlab (running on the cluster) will automatically update the cluster.
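
As a sketch of what that looks like, a repository holding these manifests might carry a .gitlab-ci.yml along the lines of the one below. The image, job name and branch name are illustrative, and the job assumes the runner's job pods are allowed to apply manifests against the cluster and that the chosen image provides kubectl for arm64.

# .gitlab-ci.yml (illustrative)
stages:
  - deploy

deploy-manifests:
  stage: deploy
  image:
    name: bitnami/kubectl:latest   # any arm64 image that ships kubectl will do
    entrypoint: [""]               # override the image's kubectl entrypoint so the job shell can run
  tags:
    - radicalgeek-dev-cluster-runner
  script:
    # kubectl uses the job pod's in-cluster service account automatically
    - kubectl apply -f postgres/
    - kubectl apply -f gitlab/
  only:
    - main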

To achieve this we are going to deploy the Gitlab Runner using Helm, but first we need to get a few things ready. Let's start with the Helm values file. Create a file called gitlab-runner-chart-value.yml, and be sure to swap the runnerRegistrationToken for the registration token from your own Gitlab instance (you will find it in the runner settings of the admin area).

imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.radicalgeek.co.uk
runnerRegistrationToken: "KWWDX9ekSVzLftE6imM1"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
  create: true
  clusterWideAccess: true
  serviceAccountName: gitlab-admin
  podSecurityPolicy:
    enabled: false
    resourceNames:
    - gitlab-runner
metrics:
  enabled: true
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        host = ""
        bearer_token_overwrite_allowed = false
        image = "ubuntu:16.04"
        namespace = "gitlab-managed-apps"
        namespace_overwrite_allowed = ""
        privileged = true
        service_account_overwrite_allowed = ""
        pod_annotations_overwrite_allowed = ""
        [runners.kubernetes.pod_security_context]
        [runners.kubernetes.volumes]
  image: ubuntu:18.04
  tags: "radicalgeek-dev-cluster-runner,arm64"
  name: "radicalgeek-dev-cluster-runner"
  runUntagged: false
  privileged: true
  pollTimeout: 180
  helpers: 
    cpuLimit: 200m
    memoryLimit: 256Mi
    cpuRequests: 100m
    memoryRequests: 128Mi
    image: "gitlab/gitlab-runner-helper:arm64-${CI_RUNNER_REVISION}"
  podLabels:
    app: runner
securityContext:
  runAsUser: 100
  fsGroup: 6553
affinity: 
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values:
              - runner # must match the podLabels app value above
        topologyKey: "kubernetes.io/hostname"
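
With the values file saved, the runner can be installed from the official Gitlab Helm chart. A sketch of the commands, assuming Helm 3 and a release named gitlab-runner:

$ helm repo add gitlab https://charts.gitlab.io
$ helm repo update
$ helm install gitlab-runner gitlab/gitlab-runner --namespace gitlab-managed-apps -f gitlab-runner-chart-value.yml

Once the runner pod is up it should register itself against the Gitlab instance and appear in the admin area's runner list, ready to pick up jobs tagged radicalgeek-dev-cluster-runner.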
