Building a Kubernetes Development Cloud with Raspberry Pi 4, Synology NAS and OpenWRT – Part 1 – Introduction

In this series of articles I am going to detail how I built a private development cloud with the following key characteristics.

  • 10-node 64-bit Raspberry Pi 4 8GB Kubernetes cluster with PoE HATs and USB 3 gigabit LAN – 40 CPUs and 80GB of RAM
  • Synology DS920+ with 4x 4TB 7,200rpm disks and 8GB RAM running in RAID 10 – 7.8TB of usable space
  • Cluster nodes all PoE powered and PXE booted with iSCSI remote root file systems
  • Synology Kubernetes CSI driver for dynamic Kubernetes persistent volumes backed by iSCSI LUNs
  • Linksys WRT1900ACS router running OpenWRT 19.07 with an Nginx load-balancing reverse proxy
  • Netgear PoE managed gigabit switch with separate VLAN for cluster backbone

There are a lot of other guides out there on how to build a Raspberry Pi Kubernetes cluster, and I owe much to them. Here are just a few of the resources I have used over the last year in order to get this far.

I feel that these guides only get you so far, so in this series I want to take things a little further, specifically around storage and networking.

In terms of storage I have opted to use an iSCSI root file system for each Raspberry Pi. In this guide we will cover configuring DHCP on the router to PXE boot each Pi from a TFTP directory on the NAS. This downloads an initial RAM file system and kernel to the Pi, which allow it to boot and connect to its iSCSI target on the NAS. The NAS also hosts an NFS share for mounting the boot directory after the system has booted.
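As a rough sketch of how the DHCP side of this hangs together, OpenWRT's dnsmasq can advertise the TFTP server in its DHCP offers. Everything below is illustrative: the NAS address, hostnames and iSCSI IQNs are placeholders, and the exact boot options the Pi 4 bootloader wants are covered later in the series.

```
# /etc/config/dhcp on the OpenWRT router – a sketch; the Pi 4's EEPROM
# bootloader fetches its firmware files over TFTP itself once it knows
# where the server is
config boot 'rpi4'
        option serveraddress '192.168.50.10'   # NAS hosting the TFTP directory
        option servername 'nas'

# Example kernel command line (cmdline.txt in the TFTP directory) handing
# the Ubuntu initramfs an iSCSI target to use as root – IQNs are placeholders:
#   ip=dhcp iscsi_initiator=iqn.2004-10.com.ubuntu:pi01 \
#     iscsi_target_name=iqn.2000-01.com.synology:nas.pi01 \
#     iscsi_target_ip=192.168.50.10 root=/dev/sda2 rootfstype=ext4 rw
```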

Once we have Kubernetes deployed I will also cover using the official Synology iSCSI Kubernetes Container Storage Interface (CSI) driver to dynamically provision and snapshot LUNs on the Synology NAS and map them to Kubernetes persistent volumes.
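To give a flavour of what that looks like in practice, here is a sketch of a StorageClass for the Synology CSI driver. The provisioner name follows the driver's published default, but the NAS address and volume location are example values – check them against the release you deploy.

```yaml
# Example StorageClass for dynamically provisioned iSCSI volumes on the NAS
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi
provisioner: csi.san.synology.com
parameters:
  fsType: ext4
  dsm: '192.168.50.10'      # example NAS address
  location: '/volume1'      # DSM volume to carve LUNs out of
reclaimPolicy: Delete
allowVolumeExpansion: true
```

With this in place, any PersistentVolumeClaim that names `synology-iscsi` as its storage class gets a freshly provisioned LUN on the NAS.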

As for networking, I have handled most of it on my OpenWRT router. The cluster is separated from the rest of my network in its own VLAN. Each host has a static DHCP reservation, and the router is responsible for DNS for my *.local domain. Kubernetes manager nodes are load balanced using DNS round robin, and everything else is load balanced by Nginx running on the router as a reverse proxy.
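The Nginx side of that can be sketched with the stream module, passing TCP straight through to the nodes so TLS terminates in the cluster. The node addresses below are examples, and the full configuration comes later in the series.

```nginx
# Sketch of the reverse proxy on the router: spread inbound HTTPS across
# example worker nodes, where Traefik handles ingress
stream {
    upstream traefik_https {
        server 192.168.50.21:443;
        server 192.168.50.22:443;
        server 192.168.50.23:443;
    }
    server {
        listen 443;
        proxy_pass traefik_https;
    }
}
```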

The Software Estate

The purpose of this cluster is to create a useful development environment with everything needed to deploy and monitor the applications I am working on. To do this, the following software will be deployed to the cluster:

  • k3s – There are other Kubernetes distributions available, such as minikube, but I have had the most success with k3s
  • Synology Container Storage Interface driver – Used for creating persistent Kubernetes volumes
  • Scheduled Snapshotter – Used to take regular snapshots of persistent volumes for recovery or spinning up new instances
  • Traefik – Used for cluster ingress and comes pre-installed with k3s. We will expose the dashboard
  • Certificate Manager – Used for managing Let's Encrypt certificates for our pods
  • Metricbeat – Collects container/pod metrics and ships them to Elasticsearch
  • Filebeat – Collects container/pod logs and ships them to Elasticsearch
  • Elasticsearch – Stores logs and metrics
  • Kibana – Used for visualising log and metric data stored in Elasticsearch
  • Node Exporter – Used for collecting host metrics which are scraped by Prometheus
  • Arm Exporter – Used for collecting ARM-specific host data which is scraped by Prometheus
  • Elastic Exporter – Used for collecting Elasticsearch metrics which are scraped by Prometheus
  • Prometheus – Used for storing metrics
  • Grafana – Used for visualising metric data stored in Elasticsearch & Prometheus
  • Alertmanager – Used for sending alerts based on Prometheus metrics
  • GitLab – Used for code & package storage and CI/CD deployment to the Kubernetes cluster
  • GitLab Runner – Used for CI/CD
  • Postgres – Used as the database for GitLab and Redmine
  • Redmine – Used for ticketing and project management
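Since k3s is the foundation everything else sits on, here is a sketch of how a first manager node might be configured. All values are placeholders; my actual bootstrap steps are covered later in the series.

```yaml
# /etc/rancher/k3s/config.yaml on the first manager node (sketch)
cluster-init: true               # start an embedded etcd cluster
token: "example-shared-secret"   # placeholder; shared with the other nodes
tls-san:
  - "k8s.local"                  # the round-robin DNS name for the managers
write-kubeconfig-mode: "0644"
```

The other two managers then join with `server: https://k8s.local:6443` and the same token, giving the three-manager topology described above.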

Working together, this software stack gives you everything you need to begin developing on and deploying to a Raspberry Pi Kubernetes cluster: all code is stored in GitLab running on Kubernetes, and GitLab itself is used to deploy everything to the cluster.

Before we get into the nitty-gritty of it though, let's take a quick look at my hardware choices.

Hardware

I have gone through a number of iterations of hardware to get a configuration I am happy with both in terms of capacity and performance. This list represents my current configuration, saving you the pain and expense of all the hardware mistakes I have made getting here.

  • Linksys WRT1900ACS router flashed with OpenWRT 19.07 and using an external 2TB USB 3 SSD as an overlay file system
  • Synology DS920+ with 4x Seagate IronWolf Pro 4TB 7,200rpm SATA disks and an additional 4GB RAM upgrade (8GB total)
  • 10x Raspberry Pi 4 8GB running Ubuntu 20.04 64-bit with PoE HAT and an additional Amazon Basics USB 3 gigabit Ethernet adaptor for the cluster backbone
  • Netgear ProSafe 24-port PoE gigabit managed switch
  • Amazon Basics USB UPS (providing power to the whole cluster, i.e. the router, the NAS and the switch, which in turn powers the Pis via PoE)

In short, the finished cluster has 40 cores, 80GB of RAM and 4TB of storage available for dynamically provisioning persistent volumes.

First of all, I wanted a versatile router. I chose the Linksys WRT1900ACS for its OpenWRT support, as I knew I was going to want to do a bit more with it than just NAT. I wanted a router that ran Linux well and gave me lots of options for VLANs, VPNs and other features later. The only real downside was the limited internal storage, which restricts how much software you can install; this is easily resolved by using an external drive as an overlay root file system. On the WAN side it is connected to a Virgin Media Hub running in modem-only mode, with 500Mbps download and 35Mbps upload. The router is also configured for Dynamic DNS and runs Nginx as a load-balancing reverse proxy to the cluster nodes.

Having upgraded the cluster a number of times, I have gone through several network switches, both with and without PoE ports. I am now using a 24-port gigabit Netgear switch which can provide power through all 24 ports. It is also a managed switch, giving me more options when it comes to VLANs, and it allows me to properly power cycle the Raspberry Pis. Regardless of your choice of networking equipment, I recommend ensuring gigabit transfer speeds and separating cluster traffic onto its own VLAN.

The storage is another area I have spent a lot of time on, trying various solutions. After moving to PXE booting the cluster nodes, I initially tried to re-use anything I could as a storage server, with disappointing results. Then I decided to build my own NAS with another Raspberry Pi, and at the time it worked well, but when I needed to upgrade the disks to support more cluster nodes I began to hit issues. I have now moved to a Synology NAS, and it was a very wise move: it has solved all of my storage issues and allowed me to deal with Kubernetes storage in the way I always wanted to.

The rest of the system is made up of Raspberry Pis. Having initially started with 4 nodes, I am now running 10 (3 as managers). In addition, I had an old Raspberry Pi kicking around that I have connected to a wall-mounted TV in my office for displaying dashboards.

In closing

The remainder of this series of articles will guide you through how I have put all this together, so that you too can build yourself a Raspberry Pi Kubernetes development cluster.
