No joke, Kubernetes is a pain in the butt to set up yourself. Managed services like GKE and EKS are great if you have extra cash, but if you just want a basic cluster to mess around in (or you’re cheap like me), eventually you’ll start looking for a way to do it yourself.
After a lot of futzing around, I finally got a virtualized 3-node Kubernetes lab running on my home server. It’s not redundant, secure, or fail-safe, but I can run K8s applications on it without a problem. And you can’t beat the cost. This blog will show you how to set up your own fully functioning, free Kubernetes cluster.
Step 0: Setup
First, let’s talk about my home lab. It’s a super basic Intel Core i5 system with 16 GB of RAM and a 1 TB drive. It’s running Debian 10, Docker, and KVM with virt-manager. This i5 is an old, dual-core CPU (hyperthreading brings it to 4 logical cores) which really limits the number of VMs I can run comfortably at any given time. A dual-core will work, but I recommend a quad-core or higher if you want to actually do work on your cluster.
Next, let’s talk about the Kubernetes environment. We’ll be setting up K3s, which is a lightweight Kubernetes distro provided by Rancher. It’s still fully-featured, so you can run normal K8s applications on it. We’ll create three nodes: one master, and two workers. The master will have 1 CPU and 2 GB of RAM assigned to it, and each worker will have 1 CPU and 3 GB of RAM. Due to the aforementioned i5 limitation, each of my workers only has one CPU, but ideally they would have two or more.
All three nodes run Ubuntu 18.04. In theory any Linux distribution will work, but when I tried Debian or Ubuntu 20.04, some of my test Kubernetes applications didn’t work properly. I never tracked down the underlying cause; reverting to Ubuntu 18.04 was just easier.
Lastly, you can either use Ubuntu Desktop or Ubuntu Server depending on your preference. There’s a bit of copying and pasting when setting up K3s, which is easier on a desktop. But if you don’t mind typing long(ish) commands or setting up SSH, I recommend Ubuntu Server.
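If you prefer the command line to virt-manager’s wizard for creating the VMs, virt-install can do the same job. A rough sketch for the master node using the sizes above (the VM name, disk size, and ISO path are placeholders; add a --network option once you’ve decided on the networking setup below):

virt-install --name k8s-master --vcpus 1 --memory 2048 --disk size=20 --cdrom /path/to/ubuntu-18.04-live-server-amd64.iso --os-variant ubuntu18.04

Repeat for the two workers, swapping in their names and --memory 3072.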
Networking
An important note about networking: for security reasons, this cluster should ideally live on its own network or subnet, but we need to at least be able to connect to the master to issue commands to the Kubernetes API and access services. You can create a new Virtual Network in virt-manager, set each VM’s default NIC to that network, then add a second NIC on the master node that’s bridged to your network. This way, you can connect to the master directly, but the workers are hidden in a separate network.
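If you go the separate-network route, you don’t have to click through virt-manager to create it; the same network can be defined with virsh. A minimal sketch (the network name and address range are arbitrary, so adjust to taste):

<network>
  <name>k8s-lab</name>
  <forward mode='nat'/>
  <bridge name='virbr1'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.100'/>
    </dhcp>
  </ip>
</network>

Save that as k8s-lab.xml, then run:

sudo virsh net-define k8s-lab.xml
sudo virsh net-start k8s-lab
sudo virsh net-autostart k8s-lab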
Alternatively, you can take the easier route and bridge every VM to your network. It’s less secure, but also easier to set up. Here’s what the bridged adapter looks like in virt-manager:
Step 1: Install K3s
K3s makes it super simple to install Kubernetes on both the master node and the worker nodes. To attach a worker, all you need is the master’s IP address and a token generated by K3s during the install process. K3s takes care of networking, installing Kubernetes, and generating startup scripts.
A note on container runtimes: By default, K3s will install containerd as the container runtime. If you want to use Docker, you’ll need to install Docker onto each of your nodes before continuing. I’ll include instructions for using Docker during each installation process.
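If you do want Docker, the stock Ubuntu 18.04 package is plenty for a lab cluster. Something like this on each node should do it (this installs the distro’s docker.io package rather than Docker’s own repository):

sudo apt-get update
sudo apt-get install -y docker.io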
Setting up the master
First, log into your master node and run the following command:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -
Note: if you’re using Docker, add --docker to the INSTALL_K3S_EXEC parameter. Also, if your node has multiple IP addresses, you can specify which one Kubernetes should advertise on by adding --node-ip=[your node's IP address] to the INSTALL_K3S_EXEC parameter.
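For example, a server install that uses Docker and advertises on 192.168.1.50 (a made-up address; substitute your master’s) would look like this:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --docker --node-ip=192.168.1.50" sh -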
This automatically starts K3s in the background and generates a secure token, which you’ll need to connect each of the workers. To get the token, run:
sudo cat /var/lib/rancher/k3s/server/node-token
Copy this string along with the master’s IP address.
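If you’re not sure which address the master is using, running hostname -I on it will print its IP addresses (if you gave it two NICs, the bridged one is the address your workstation can reach):

hostname -I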
Setting up the workers
Next, log into one of your workers and run the following command. Make sure to replace [master IP address] and [master token] with the actual IP address and token you pulled from the master node:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent" K3S_URL=https://[master IP address]:6443 K3S_TOKEN=[master token] sh -
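As on the master, append --docker to INSTALL_K3S_EXEC if you’re using Docker. With hypothetical values plugged in (the address and token below are placeholders, not real values), the whole thing looks like this:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --docker" K3S_URL=https://192.168.1.50:6443 K3S_TOKEN=K10exampletoken::server:examplesecret sh -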
That’s it! You can verify that your node was added by going back to the master node and running kubectl get nodes. You should see two nodes: the master, plus the new worker:
NAME           STATUS   ROLES    AGE    VERSION
k8s-worker-1   Ready    <none>   3d4h   v1.18.8+k3s1
k8s-master     Ready    master   3d4h   v1.18.8+k3s1
Repeat this step on your second worker node. This is also a great time to take snapshots of all three VMs so you can easily revert to a stable state in case something goes wrong.
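If you’d rather snapshot from the host than click through virt-manager, virsh can do it too. This assumes qcow2 disk images and uses my VM names, so swap in your own:

sudo virsh snapshot-create-as k8s-master clean-k3s
sudo virsh snapshot-create-as k8s-worker-1 clean-k3s
sudo virsh snapshot-create-as k8s-worker-2 clean-k3s

Rolling back later is sudo virsh snapshot-revert k8s-master clean-k3s (and likewise for the workers).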
Step 2: Configure kubectl
Kubectl is the Kubernetes command-line client, which you’ll use to manage and deploy applications to your cluster. You can run kubectl on any computer with network access to your cluster, including the master and worker VMs, but having to log into the master VM every time you want to interact with the cluster is a pain. Instead, we’ll extract the kubeconfig file from the master so you can use it from your workstation.
Kubectl is configured using kubeconfig files. These store configuration details about clusters such as their network addresses and login credentials. K3s automatically generates a kubeconfig file for the master. First, log into the master VM. In a terminal, run the following command:
sudo cat /etc/rancher/k3s/k3s.yaml
This is the kubeconfig file generated and used by K3s. Copy its contents into a text editor, as we’ll need to tweak it a bit first. Find the line starting with server and change the IP address (127.0.0.1 by default) to the master’s external IP address. Save the file on your local computer as $HOME/.kube/config. Next, you’ll need kubectl itself.
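If you don’t already have kubectl on your workstation, the Kubernetes project publishes standalone binaries. One way to grab the latest stable release on a Linux machine (this mirrors the upstream install docs; adjust for your OS):

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

With kubectl installed, check that it can reach the cluster: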
kubectl get nodes
If all went well, you should see your master and worker nodes listed. Congratulations! You now have a fully functioning Kubernetes cluster!
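As an aside: if you set up SSH on the master (and your user there has passwordless sudo), you can skip the manual copy-and-paste and pull the kubeconfig straight to your workstation. A rough sketch, assuming the master is at 192.168.1.50 and the VM user is ubuntu:

mkdir -p ~/.kube
ssh ubuntu@192.168.1.50 sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
sed -i 's/127.0.0.1/192.168.1.50/' ~/.kube/config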
If you’re new to Kubernetes, try following the Kubernetes basics tutorial. If you want a demo app to play around with, try running the Online Boutique e-commerce shop, or try deploying WordPress. Don’t worry about messing up your cluster – that’s half the fun! Note what went wrong, restore from a snapshot, and try again.