Building a Kubernetes Development Cloud with Raspberry Pi 4, Synology NAS and OpenWRT – Part 3 – Installing the Cluster

This is the third post in the series detailing how to set up a Raspberry Pi Kubernetes development cluster. In the last post we did some preparation work on the router and the NAS. This post covers the installation process for the Raspberry Pis. By the end of it we will have a PXE-booting, 64-bit Ubuntu 20.04 Kubernetes cluster with iSCSI root file systems, ready to start deploying workloads.

First of all, we are going to need two SD cards: one for Raspbian and one for Ubuntu. I also use Ubuntu Linux on my desktop machine, so all instructions will be given from Linux. If you are not running Linux on your machine, either adapt the instructions to suit your environment or move to Linux!

Preparation

Let's start by installing the Raspberry Pi imager.

$ sudo apt-get install rpi-imager

Once it is installed, launch it.

For the first SD card, click “Choose OS”, then select “Other general purpose OS”, then “Ubuntu”, and finally the 64-bit version of “Ubuntu Server 20.04 LTS (RPi 3/4/400)”. Make sure your SD card is plugged in, click “Choose storage”, select the SD card, and click “Write”.

Next, take your second SD card and click “Choose OS” again, then “Raspberry Pi OS (Other)” and “Raspberry Pi OS Lite (32-bit)”. Again, make sure your SD card is in and click “Write”.

Once that is done you are ready to boot your first Pi. I recommend having a keyboard and monitor plugged in. It is possible to set up the SD card with SSH enabled so you can just find the Pi on the network and connect, but setting up PXE booting is fairly involved, and if anything goes wrong you are going to need a monitor to see what is happening.

Plug the Ubuntu SD card into your first node and boot it up. The default user name is “ubuntu” and the default password is “ubuntu”. You will be prompted to change the password at first login. Once you are logged in we will do some initial configuration of the OS that will be required by all of the nodes.

First, on my desktop my user account is mark, so the first thing I am going to do is add a user with the same user name to make SSH and Ansible easier later.

$ sudo adduser mark

Give your account a password and leave the other options blank. Next, give the account passwordless sudo access.

$ sudo visudo

Go to the bottom of the file and add the following line, replacing “mark” with your own user name.

mark ALL=(ALL) NOPASSWD:ALL

Press Ctrl+O to save and Ctrl+X to exit.

Next we need to install some missing packages and update the system:

$ sudo apt-get install nfs-common
$ sudo apt-get update 
$ sudo apt-get upgrade

Check the IP address so we can SSH into this host later, and take note of it.

$ ip address

Finally, move back to your desktop machine to set up passwordless SSH access.

Check for existing keys first so you do not overwrite any you already have:

$ ls -al ~/.ssh/id_*.pub

Now create a new key

$ ssh-keygen -t rsa -b 4096 -C "your_email@domain.com"

When prompted for a passphrase, just hit Enter to leave it blank. Now copy the key to the remote host (using the IP you noted earlier):

$ ssh-copy-id 10.1.0.21

After this, shut the node down and swap in your Raspbian SD card. All steps from here on will need to be performed on each node. After booting each node from the Raspbian SD card, make sure you create a DHCP reservation for it.

Node installation

First we need to update the EEPROM firmware on the Pi to support PXE booting. Put in your Raspbian SD card and boot the system. Once logged in, you can update the firmware like so:

$ wget https://github.com/raspberrypi/rpi-eeprom/raw/master/firmware/beta/pieeprom-2019-10-16.bin
$ rpi-eeprom-config pieeprom-2019-10-16.bin > bootconf.txt
$ sed -i s/0x1/0x21/g bootconf.txt
$ rpi-eeprom-config --out pieeprom-2019-10-16-netboot.bin --config bootconf.txt pieeprom-2019-10-16.bin
$ sudo rpi-eeprom-update -d -f ./pieeprom-2019-10-16-netboot.bin
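The sed line above is what enables network boot: it rewrites the boot-order value in the extracted config from 0x1 (SD card only) to 0x21 (try SD first, then the network). A minimal illustration of the edit, run against an assumed sample line rather than a real bootconf.txt (the exact key name can vary between firmware releases):

```shell
# Illustrative only: show what the sed edit does to the boot config.
# The real bootconf.txt comes from rpi-eeprom-config; this sample line
# is an assumption for demonstration.
printf 'BOOT_ORDER=0x1\n' > /tmp/bootconf-sample.txt
sed -i s/0x1/0x21/g /tmp/bootconf-sample.txt
cat /tmp/bootconf-sample.txt   # BOOT_ORDER=0x21
```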

Having flashed the firmware, we need to get the serial number for this Raspberry Pi. Run the following command:

$ cat /proc/cpuinfo

Find the serial number in the output and make a note of the last 8 characters; that is the identifier for this Pi. We will need it to set up the TFTP directory.

In this example the identifier is 68fe97e5.
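If you would rather not eyeball the output, the identifier can be extracted in one line. A sketch, run against a sample cpuinfo file here so it is self-contained; on the Pi you would read /proc/cpuinfo directly:

```shell
# Extract the last 8 characters of the Serial line.
# The sample file stands in for /proc/cpuinfo; the serial value is assumed.
printf 'Serial\t\t: 1000000068fe97e5\n' > /tmp/cpuinfo-sample
IDENTIFIER=$(awk '/^Serial/ {print substr($NF, length($NF)-7)}' /tmp/cpuinfo-sample)
echo "$IDENTIFIER"   # 68fe97e5
```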

Shut down the Pi, put the Ubuntu SD card back in, and power it up. We are now ready to set up the operating system for this node.

First, change the hostname:

$ sudo nano /etc/hostname

rdg-clust-01

Set the initiator name to be the same as the hostname. Delete anything else in the initiatorname.iscsi file.

$ sudo nano /etc/iscsi/initiatorname.iscsi

InitiatorName=rdg-clust-01

Now we can connect to the NAS ready to copy the operating system to the LUN we prepared for this node in the last post.

$ sudo iscsiadm --mode discovery --type sendtargets --portal 10.1.0.20
$ sudo iscsiadm --mode node --targetname iqn.2000-01.com.synology:rdg-strg-01.default-target.f5831cef8fc --portal 10.1.0.20 --login

If you run ls /dev/sd* you should now see that your Pi has an attached hard disk at /dev/sda.

Now we need to update the initial RAM file system used during the boot process to include this iSCSI connection. Run the following command:

$ sudo touch /etc/iscsi/iscsi.initramfs; sudo update-initramfs -v -k $(uname -r) -c

Let's create a file system on that disk and mount it:

$ sudo mkfs.ext4 /dev/sda
$ sudo mount /dev/sda /mnt

Now that we have a file system, we need to take note of its UUID in order to connect to it during the boot process. Run the following command and note down the UUID for this node's disk:

$ sudo blkid /dev/sda
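To avoid transcription mistakes, you can capture the UUID into a shell variable instead; blkid's -s and -o flags print just the value. Since blkid needs a real device, the sketch below parses a sample output line (the UUID is made up), with the on-node form shown in a comment:

```shell
# On the node itself you would run:
#   UUID=$(sudo blkid -s UUID -o value /dev/sda)
# Here we parse a sample blkid line instead, so the sketch is self-contained.
sample='/dev/sda: UUID="f1a2b3c4-d5e6-7890-abcd-ef1234567890" TYPE="ext4"'
UUID=$(echo "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$UUID"   # f1a2b3c4-d5e6-7890-abcd-ef1234567890
```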

We are now ready to copy across the OS.

$ sudo rsync -avhP --exclude /boot/firmware --exclude /proc --exclude /sys --exclude /dev --exclude /mnt / /mnt/
$ sudo mkdir /mnt/{dev,proc,sys,boot/firmware,mnt}

After that finishes, let's update the fstab on the new disk, replacing <UUID> with your disk's UUID and <identifier> with your node's identifier:

$ sudo nano /mnt/etc/fstab

UUID=<UUID>                              /               ext4   defaults           1 1
10.1.0.20:/volume1/tftp/<identifier>     /boot/firmware  nfs    defaults,_netdev   0 0

Next, set up the USB network adaptor that will be used for the cluster backbone. Make sure you give each node a unique IP for its eth1 adaptor (the USB 3 dongle):

$ sudo nano /mnt/etc/netplan/55-networks.yaml

network:
    ethernets:
        eth0:
          dhcp4: true
        eth1:
          addresses:
            - 10.2.0.1/24
    version: 2

Finally, unmount the disk, log out of the iSCSI target, and shut down the node:

$ sudo umount /mnt
$ sudo iscsiadm --mode node --targetname iqn.2000-01.com.synology:rdg-strg-01.default-target.f5831cef8fc --portal 10.1.0.20 --logout
$ sudo shutdown now

Now that we have the OS ready, let's get the TFTP boot directory ready for this node. Take the Ubuntu SD card out of your Pi and plug it into your PC, then mount the system-boot partition somewhere. For me, Ubuntu auto-mounted it at /media/mark/system-boot/.

Next we need to copy these files to the NAS, but first we need to create a directory on the NAS for this Pi. I have a user account on my NAS named mark with SSH enabled. Run this command from your desktop, replacing <identifier> with the identifier of the node:

$ ssh mark@rdg-strg-01 "mkdir /volume1/tftp/<identifier>"

Now copy the contents of the boot partition to the TFTP directory:

$ rsync -avhP /media/mark/system-boot/ mark@rdg-strg-01:/volume1/tftp/<identifier>/

Once the files have copied over, we need to update a few settings. SSH into the NAS and edit the cmdline.txt file:

$ ssh mark@rdg-strg-01
$ nano /volume1/tftp/<identifier>/cmdline.txt

net.ifnames=0 dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 rootfstype=ext4 elevator=deadline rootwait fixrtc quiet splash ip=dhcp root=UUID=<DiskUUID> ISCSI_INITIATOR=iqn.2003-01.linux-iscsi:rdg-clust-01.iqn.com ISCSI_TARGET_NAME=iqn.2003-01.org.linux-iscsi.rdg-strg-01.aarch64:sn.42e8616efe86 ISCSI_TARGET_IP=10.1.0.20 ISCSI_TARGET_PORT=3260 rw cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
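Since each node's cmdline.txt differs only in the disk UUID and initiator name, you can template those values rather than hand-editing the file on every node. A hedged sketch; the placeholder tokens and the values stamped in are illustrative, not taken from the real files:

```shell
# Hypothetical templating sketch: stamp per-node values into a cmdline template.
# <DiskUUID> and <Initiator> are made-up placeholder tokens for illustration.
template='root=UUID=<DiskUUID> ISCSI_INITIATOR=<Initiator> rw'
node_uuid='f1a2b3c4-d5e6-7890-abcd-ef1234567890'
initiator='iqn.2003-01.linux-iscsi:rdg-clust-01.iqn.com'
echo "$template" | sed "s|<DiskUUID>|${node_uuid}|; s|<Initiator>|${initiator}|"
```

In practice you would redirect that output into /volume1/tftp/&lt;identifier&gt;/cmdline.txt for each node.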

And finally adjust the file ___

This completes the setup of this node; you should now be able to PXE boot it without an SD card. Put the SD card in the next node and repeat the procedure until all your nodes are running.

Setting up k3s

All that remains now is to deploy Kubernetes to the cluster and do some initial setup. By this point you should have all your nodes PXE booting with iSCSI root file systems, and you should be able to SSH into each node using only its hostname, without specifying a password. We are going to install k3s with 3 master nodes, but before we do we need to set up load balancing for the Kubernetes managers.

On the router go to System > Hostnames and click “Add”. Create 3 records, all with the same hostname but with the IP addresses of the three manager nodes. I have chosen to use the hostname “cluster”.

If you now run nslookup against that hostname, you should see all three manager IP addresses returned.

This is known as DNS round robin and is one way to load balance requests across a group of servers.
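Conceptually, the resolver hands out the three A records in rotating order, so successive lookups land on different managers. A toy illustration in shell, assuming manager IPs 10.1.0.21–23 (only the first is confirmed earlier in this series; the router's DNS does the rotation for real):

```shell
# Toy model of DNS round robin: rotate a list of addresses per "lookup".
# IPs .22 and .23 are assumed values for the second and third managers.
rotate_demo() {
  set -- 10.1.0.21 10.1.0.22 10.1.0.23
  for request in 1 2 3; do
    echo "request $request -> $1"
    first=$1; shift; set -- "$@" "$first"   # move the head to the tail
  done
}
rotate_demo
```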

Now we are ready to install k3s. Run the following commands from your desktop machine, replacing the hostnames with your own.

$ ssh rdg-clust-01 "sudo curl -sfL https://get.k3s.io | sh -s server --cluster-init"

Now we need to retrieve the token so that other nodes can join the cluster. We will put it in an environment variable to make it easy for us to use.

$ KUBETOKEN=$(ssh rdg-clust-01 'sudo cat /var/lib/rancher/k3s/server/node-token')

With the token we can join our other master nodes. Take note of the IP used here: it is not the IP we get from DHCP but the one we set statically on the USB eth1 interface. This ensures all cluster communication occurs on its own dedicated backbone subnet.

$ ssh rdg-clust-02 "sudo curl -sfL https://get.k3s.io | K3S_TOKEN=${KUBETOKEN} sh -s server --server https://10.2.0.1:6443"

$ ssh rdg-clust-03 "sudo curl -sfL https://get.k3s.io | K3S_TOKEN=${KUBETOKEN} sh -s server --server https://10.2.0.1:6443"

And finally join the worker nodes

$ ssh rdg-clust-04 "sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -"

$ ssh rdg-clust-05 "sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -"

$ ssh rdg-clust-06 "sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -"

$ ssh rdg-clust-07 "sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -"

$ ssh rdg-clust-08 "sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -"

$ ssh rdg-clust-09 "sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -"

$ ssh rdg-clust-10 "sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -"
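Since the seven worker commands differ only in the hostname, a small loop removes the copy-paste. The sketch below only prints the command for each node, so it is safe to run anywhere; to actually execute them, drop the surrounding echo and run each string. KUBETOKEN is assumed to be set as above:

```shell
# Print the k3s agent join command for each worker node (does not run them).
# KUBETOKEN should hold the token retrieved from rdg-clust-01.
KUBETOKEN="${KUBETOKEN:-REPLACE_WITH_REAL_TOKEN}"
join_cmd() {
  echo "ssh rdg-clust-$1 \"sudo curl -sfL https://get.k3s.io | K3S_URL=https://10.2.0.1:6443 K3S_TOKEN=${KUBETOKEN} sh -\""
}
for i in 04 05 06 07 08 09 10; do
  join_cmd "$i"
done
```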

Next we need to be able to connect to the cluster from the desktop. For this you will need the kubectl utility installed, and you need to retrieve the cluster's kubeconfig:

$ ssh rdg-clust-01 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/config
$ sed -i 's/127.0.0.1/cluster/g' ~/.kube/config

The second line here updates the kube config file to use the load balanced hostname.

Test that the cluster is working by listing the nodes:

$ kubectl get nodes

Excellent. You now have a working k3s cluster. In the next post we will configure the cluster ready to start building out our development infrastructure.
