Before I started this project I had various hosts set up at different providers as well as at home.
These handled a variety of services including:
- Jenkins for performing CI builds of all of my open-source projects, with separate build servers for the amd64, ARMv7 & ARMv8 CPU architectures
- Sonatype Nexus for managing build artifacts
- Geoserver handling dynamic mapping for my mapping site
- PostgreSQL instances for various online & offline databases
- Live feeds from Network Rail, National Rail Enquiries (Rail Delivery Group), USGS (United States Geological Survey) and others
- Archiving the live feeds
Now, with one of the providers removing support (and soon the actual nodes) for some of their server types, which host Jenkins, Nexus, Geoserver & the amd64/ARMv8 build servers, I needed to find suitable replacements.
At home I already had a large collection of Raspberry PIs, some dating from 2012, so I had a reasonable pool of machines to add to the cluster. All I needed was to add a shared filesystem, support for AMD64 builds & racking to house it all – previously I had some in cases, others without, & it was a mess…
What I had on hand for this project was:
- Old laptop with an SSD, 16GB ram & an AMD64 processor (Intel i3-3217U @ 1.80GHz)
- A couple of old HP microservers (AMD Turion II Neo N54L), partially working
- Numerous Raspberry PIs of every version from the original 2012 Model B onwards
Network Attached Storage – NAS
The old HP microservers were cannibalised to build a single working node with new 2TB SATA drives installed & the ram maxed out at 8GB. This is running FreeNAS and provides shared disk space for the cluster.
Later I might see if I can repair the other one, although having one working is fine for now.
Old Laptop – Nexus & GeoServer
The old laptop is still perfectly usable as a laptop (it was my main development environment for a long time). It is old (2013 vintage) so its battery isn't that great, but since it lives permanently plugged in it's ideal for being reused for the cluster.
Sonatype Nexus is only supported on AMD64, but I already had an instance running on the laptop acting as a proxy (which was useful in the past when commuting on the train), so that part already existed.
With the amount of ram I have on the laptop it also makes sense to house GeoServer on that machine.
For services which require HTTPS I've also got Traefik installed, which acts as a proxy providing HTTPS support.
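As an illustration, a minimal Traefik v2 static configuration for this kind of setup might look something like the following sketch – the entry point name, resolver name, email address & storage path are my own placeholders, not the actual config on this box:

```yaml
# traefik.yml – static configuration (sketch; names & paths are placeholders)
entryPoints:
  websecure:
    address: ":443"              # listen for HTTPS traffic
providers:
  docker: {}                     # discover backend services from docker labels
certificatesResolvers:
  le:
    acme:
      email: admin@example.com   # placeholder address
      storage: /letsencrypt/acme.json
      tlsChallenge: {}           # obtain certificates via the TLS-ALPN challenge
```

Individual containers then opt in to HTTPS via labels rather than any per-service configuration.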
Jenkins on the other hand isn’t on that box… I had decided early on that I wanted it inside the actual cluster.
Cluster Manager – Jenkins & Rundeck
The cluster manager is the node whose job is to manage the entire cluster, issuing jobs to the various nodes.
This node is a Raspberry PI 4B with 8GB of ram running Ubuntu 20.04 64 bit. To provide additional space beyond the onboard SD card to store jobs & their workspaces it has a 1TB USB2 hard drive attached. I did this not just to keep the network traffic to the NAS down but also because I had one spare – its outer case was damaged so I removed that & created a new tray for the racking to house it.
It's running docker so that any services it needs are installed as images – currently Jenkins, but Rundeck & PostgreSQL will be going on this node too, the latter for use by the manager only.
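As a sketch of how that looks, a docker-compose file along these lines could define the manager's services – the image tags, ports & the USB-drive mount points here are assumptions on my part, not the exact setup:

```yaml
# docker-compose.yml on the cluster manager (sketch – paths & tags assumed)
version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"        # web UI
      - "50000:50000"      # inbound agent connections from the compute nodes
    volumes:
      - /mnt/usb/jenkins:/var/jenkins_home   # jobs & workspaces on the USB drive
    restart: unless-stopped
  postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: changeme            # placeholder – use a real secret
    volumes:
      - /mnt/usb/postgres:/var/lib/postgresql/data
    restart: unless-stopped
```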
Compute Nodes
The compute nodes are where the work gets done. The aim is that a job is assigned to a node (or more if necessary) and that node runs the job automatically. Assignment is based on which node is free, which is less loaded than the others, or on the CPU architecture required. All of these nodes run docker so that I don't have to install software on them – they can download it as an image as required, using Nexus as a local proxy server.
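Pointing docker at Nexus as a pull-through cache is just a registry-mirror entry in each node's /etc/docker/daemon.json – the hostname & port here are hypothetical:

```json
{
  "registry-mirrors": ["http://nexus.local:8082"],
  "insecure-registries": ["nexus.local:8082"]
}
```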
The only other requirement is that they run on multiple architectures, specifically:
- AMD64 (i.e. standard Intel 64 bit)
- ARMv7 (32 bit)
- ARMv8 (64 bit)
This requirement is due to the fact that I have for some time now built my open-source docker images on all three architectures. Those images are all built as multi-arch, so if you want to run one of them on either an Intel-based machine or a Raspberry PI the image name is the same – docker just picks the required image layers for the CPU automatically.
For these nodes I'm using Raspberry PI 4Bs for the ARM architectures and Intel Atom based machines for AMD64.
The Raspberry PI 4B supports both ARM architectures depending on which kernel is installed.
For ARMv7 I have some nodes running Raspberry PI OS 32 bit (formerly Raspbian). As these have a 32-bit kernel, the CPU is reported as ARMv7 and docker will use those specific images.
For ARMv8 I'm running Ubuntu Server 20.04 64 bit. Ubuntu provides a specific lite image which is just as quick to install as Raspberry PI OS – i.e. just write it to the SD card & boot it up.
For AMD64 I needed something small enough to fit the racking – primarily a machine that was a maximum of 90mm high when mounted vertically. For these I'm using some cheap Chinese mini PCs – set-top-box style machines using Intel Atom Z8350 quad-core CPUs and 4GB of ram.
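The kernel-dependent selection above is visible from `uname -m`: the same PI 4B reports `armv7l` under a 32 bit kernel and `aarch64` under a 64 bit one. A small sketch of the mapping docker effectively performs when picking image layers (the function name is my own, not a docker command):

```shell
# Map the machine string reported by `uname -m` to the docker platform
# whose image layers get pulled (sketch – docker does this internally).
docker_platform() {
  case "$1" in
    x86_64)  echo "linux/amd64"  ;;  # the Atom mini PCs
    aarch64) echo "linux/arm64"  ;;  # PI 4B with a 64 bit kernel
    armv7l)  echo "linux/arm/v7" ;;  # PI 4B/3B with a 32 bit kernel
    armv6l)  echo "linux/arm/v6" ;;  # the original 2012 Model B
    *)       echo "unknown"      ;;
  esac
}

docker_platform "$(uname -m)"   # platform of the machine running this script
```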
Cluster Console
The cluster console is a Raspberry PI 0W with a 3.5 inch screen. It connects to the network over WiFi and, with a wireless keyboard, provides a means of accessing node consoles or displaying cluster status information on a screen separate from the rest of the cluster.
There are a few nodes that are in the racking but not directly associated with the cluster:
- Raspberry PI 2B running OctoPi – used to manage one of my 3D printers. The printer it manages is next to the racking, so a short USB connection away
- Raspberry PI 3B running my PBX (telephone system) & print server with a Thermal Receipt printer attached
- Raspberry PI 3B+ running the network DNS & VPN server
- Raspberry PI B+ running a RabbitMQ instance for use by the cluster
There are also a few other B+'s going spare which I might add to the cluster for some support services.