In this article I'll go through setting up a compute node in the cluster, from configuring it on the network, to adding storage on the NAS, to configuring Docker.
Although these instructions are for the Raspberry Pi, they also apply to other node types.
Installation of the OS
First install the required OS onto the node; either Raspberry Pi OS 32 bit (aka Raspbian) or Ubuntu Server 20.04 64 bit are the two I'm using. You can follow the standard instructions; the only addition is to make sure SSH is configured to start on boot.
Once you have the node up and running it’s time to configure it so it’s usable within the cluster.
Networking
For normal use you don't have to worry about networking: just plug a device into your network and the router will handle assigning addresses for you. However, for a server environment you need to manage that yourself, mainly because you want a machine to keep the same address and not be given a new one at random by the router. This is called static addressing.
The process for this varies depending on your router, so I will only cover the basics here.
Now, on my network I run both IPv4 (the older internet addressing scheme) and a globally routed IPv6 network. For IPv6 a static address comes easily, as one is usually assigned for you based on the network interface's MAC address, but for IPv4 you need to set it up yourself.
In this example I'll be using 192.168.1.x. The network mask is /24, meaning the last 8 bits (the x) are unique to the host.
For the cluster I’ve allocated:
- 192.168.1.1 for the router
- 192.168.1.2 for the master
- 192.168.1.3 for this node
So, depending on your router, open its management console and there should be a section for adding static addresses, usually somewhere within the DHCP server.
I built my own router and run VyOS on it, so for me it's a set of commands:
configure
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping node1 ip-address 192.168.1.3
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 static-mapping node1 mac-address ab:cd:ba:12:34:56
commit
save
Either way, what I've configured is that this new node is to be called node1: it will have the IPv4 address 192.168.1.3 and the MAC address ab:cd:ba:12:34:56.
Getting the MAC address is simple: from the command line type ip addr and next to each physical interface you should see a section labelled link/ether. You want the one connected to your network; it's usually eth0, although it can have a different label. lo is the local loopback interface and wlan0 the wifi:
pi@raspberry:~ $ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether ab:cd:ba:12:34:56 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.3/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
valid_lft 69880sec preferred_lft 59080sec
inet6 2001:db8:1234:0000:cbcd:baff:fe12:3456/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 2591999sec preferred_lft 604799sec
inet6 fe80::cbcd:baff:fe12:3456/64 scope link
valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ab:cd:ba:12:34:57 brd ff:ff:ff:ff:ff:ff
Next you need to set your hostname. This is optional, but I suggest you do it so that the correct name appears in any shell and in any job logs.
To do this, use your favourite editor to edit /etc/hostname and replace its content with this node's name, node1 in this example:
sudo vi /etc/hostname
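If you prefer a single command, hostnamectl does the same job. A small sketch, assuming the old name was the default raspberrypi (Raspberry Pi OS also lists the old name in /etc/hosts, so update that too):
sudo hostnamectl set-hostname node1
# update /etc/hosts if it still references the old name
sudo sed -i 's/raspberrypi/node1/' /etc/hosts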
Now reboot. Once it comes back up it should have both:
- Its hostname set to node1
- Its IP address set to 192.168.1.3
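A quick way to verify both after the reboot:
hostname      # should print node1
hostname -I   # should include 192.168.1.3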
Configuring the NAS
Although your new node has its own storage (be it an SD card for the Raspberry Pi or perhaps some onboard flash), that storage will be hit a lot during heavy use, especially when building projects. After a while this will wear out the flash, which can only be written to a fixed number of times.
To prevent this we'll set up a volume that points to our NAS. This will be an NFS share which we'll set up for the jenkins user.
First we’ll configure FreeNAS by logging into the web UI.
Now we need to create a group and a user. If you are setting up multiple nodes with the same user you only need to do this once.
Under Accounts select Groups and create a new group called jenkins. Make a note of the GID, 1001 in my instance.
Next, under Accounts select Users and create a jenkins user, again making a note of the UID, 1001 in my instance.
Next go to Sharing -> Unix Shares (NFS) and add an NFS path. I’ve got mine under /mnt/Pool1/jenkins where Pool1 is the primary ZFS pool on my NAS.
Note: When configuring this make certain the All dirs setting is checked. This allows us to mount any subdirectory of that path. This is important as we want a separate home directory for each node.
Once you have done that, go to Shell then from that shell:
cd /mnt/Pool1/jenkins
mkdir node1
chown jenkins:jenkins . node1
This creates our unique home directory for this node.
When adding further nodes, this is the only step you need to repeat for each new node, as shown below.
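For example, a hypothetical second build node called node2 would just need its own directory on the same share:
cd /mnt/Pool1/jenkins
mkdir node2
chown jenkins:jenkins node2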
Now that the share exists, we'll configure the node to point to it, so back on the new node:
sudo su -
apt update
apt upgrade -y
apt install libnfs-utils nfs-common
# create the jenkins group and user, matching the GID/UID noted on the NAS (1001 here) so NFS permissions line up
groupadd -g 1001 jenkins
useradd -g jenkins -u 1001 jenkins
mkdir /home/jenkins
chown jenkins:jenkins /home/jenkins
Next we edit /etc/fstab to add the NFS share from the NAS as the jenkins home directory:
192.168.1.200:/mnt/Pool1/jenkins/node1 /home/jenkins nfs defaults 0 0
Save that, then type:
mount -a && df -h /home/jenkins
root@cl-006:~# df -h /home/jenkins
Filesystem Size Used Avail Use% Mounted on
192.168.1.200:/mnt/Pool1/jenkins/armv7 50G 64M 50G 1% /home/jenkins
If all goes well you should see something like that last line. The Size and Avail entries will depend on the size of the ZFS pool you allocated in FreeNAS.
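It's also worth checking that the jenkins user can actually write to the share; this only works if the user's UID and GID on the node match the ones on the NAS:
sudo -u jenkins touch /home/jenkins/.write-test
sudo -u jenkins rm /home/jenkins/.write-test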
Installing Docker
Next we need to install Docker. For this you can follow the standard installation instructions from Docker, except you need to configure the disk first so that we use the NAS.
Now, Docker doesn't like NFS: if you try it, it will run like a slug, as it can't optimise the layers when mounting images and so defaults to copying everything first.
To get around this we need to mount a new native filesystem, backed by either NFS or iSCSI. Although NFS can work in this way, iSCSI is better as it has fewer layers between the disk and Docker.
I won't repeat the FreeNAS instructions here as it has a useful wizard to do most of the work; just select a pool to host the iSCSI extents, use an extent type of File, and set its size. I usually use 64.00 GiB and a logical block size of 512.
On the new node:
sudo apt install open-iscsi
Next edit /etc/iscsi/iscsid.conf and uncomment the line node.session.auth.authmethod = CHAP.
Whilst in there, locate the following four lines and change the username and password to the ones you set up in FreeNAS:
node.session.auth.username = freeNasUser
node.session.auth.password = freeNasPassword
discovery.sendtargets.auth.username = freeNasUser
discovery.sendtargets.auth.password = freeNasPassword
Save, then run the following:
systemctl restart iscsid
iscsiadm --mode discovery --type sendtargets --portal 192.168.1.200
replacing 192.168.1.200 with your NAS IP address or hostname.
Now log into the LUN (Logical Unit in iSCSI speak). The target name will be shown in the FreeNAS console.
iscsiadm --mode node --targetname "iqn.2019-02.com.example.nas1:node1" --portal 192.168.1.200 --login
If you look at dmesg you should now see a new device appear, hopefully listed as /dev/sda.
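A couple of quick ways to check for the new disk (the exact device name may differ on your node):
dmesg | grep -i 'attached scsi disk'
lsblk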
Now add the following line to /etc/fstab:
/dev/sda1 /var/lib/docker ext3 acl,user,nofail 0 0
Finally we can partition the new drive with fdisk, format it, and then mount it:
fdisk /dev/sda                    # create a single partition covering the disk (n, p, accept the defaults, then w)
mkfs.ext3 /dev/sda1               # format the new partition
mkdir -p /var/lib/docker
mount /dev/sda1 /var/lib/docker
The last thing to do is to make iSCSI mount this new partition on boot. From the iscsiadm login line above, take the values of targetname and portal and locate the corresponding config file under /etc/iscsi/nodes.
For example in /etc/iscsi/nodes/iqn.2019-02.com.example.nas1\:node1/192.168.1.200\,3260
Edit the file, find the line starting with node.startup and change manual to automatic.
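If you'd rather not edit the file by hand, iscsiadm can update the same setting, reusing the targetname and portal from the login step above:
iscsiadm --mode node --targetname "iqn.2019-02.com.example.nas1:node1" --portal 192.168.1.200 --op update --name node.startup --value automatic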
Now, if you reboot and then run df, you should see a new partition on /dev/sda1 mounted under /var/lib/docker.
Once you are at this stage you can follow the instructions to install Docker from the Docker site.
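Once Docker is installed it's worth confirming it really is using the iSCSI-backed volume rather than the SD card; a quick check:
df -h /var/lib/docker
docker info --format '{{.Driver}} {{.DockerRootDir}}'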
Issues with using a hostname for the NAS address
One issue with using a hostname instead of an IP address for the iSCSI portal is that on boot the node could try to mount the iSCSI volume before the network is fully up. In this instance a quick fix is to add an entry to /etc/hosts with the NAS IP address:
192.168.1.200 nas.example.com
This should then fix it. If you are using IPv6 then you can put that address in here as well.
Another issue that can happen is that Docker tries to start before the underlying iSCSI volume has been mounted. The quick fix is to edit /usr/lib/systemd/system/docker.service and add the following line to the [Unit] section:
RequiresMountsFor=/var/lib/docker
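Note that /usr/lib/systemd/system/docker.service belongs to the package and may be replaced on upgrade, so an alternative is to put the same line in a drop-in; a minimal sketch:
sudo systemctl edit docker
# in the editor that opens, add:
[Unit]
RequiresMountsFor=/var/lib/docker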
Using Nexus as a repository mirror for Docker
This part is optional and can be setup at a later time.
By default Docker will query the public Docker registry for any image it doesn't have, however it can be useful to set up an intermediate proxy so that all instances within the cluster talk to a local mirror. This is really useful if you are behind a home internet connection: you only download an image once and the local mirror then serves it to each node as they request it.
The full instructions to set this up are in the Sonatype Nexus documentation but on each node you need to set the following in the /etc/docker/daemon.json file:
{
"registry-mirrors": ["https://docker.example.com"]
}
Just change docker.example.com to your local Nexus instance.
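Docker only reads daemon.json when it starts, so restart it and then check the mirror is listed:
sudo systemctl restart docker
sudo docker info | grep -A1 'Registry Mirrors'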
In my case it's Traefik, which is set up to handle HTTPS and then directs traffic to the custom port on Nexus handling the Docker proxy. That proxy then talks to both the mirror managed by Nexus and a local repository for locally created but non-public images.
In the next article I'll cover configuring Jenkins to use this new node.