Set up a local Kubernetes multi-node cluster with Rancher Server

Kwong Hung Yip
5 min read · Sep 19, 2019


When I first started my Kubernetes journey, I was looking for a way to set up my local development environment. Most people land on minikube or microk8s first, and both are good starting points for a single-node cluster. Once I had learnt the basics, I wanted to go further: a local multi-node cluster, a more production-like environment. Most of the references I found for that pointed me to the cloud (EKS, GKE, AKS, …), but before I typed in my credit card number, I found Rancher Server, and I am going to tell you how I set up my local k8s multi-node cluster.

1. Prepare the VMs for the master node and worker nodes

The above pic shows my cluster layout: a single master node with three worker nodes. That minimal configuration allows me to try out cross-node features such as load balancing with an ingress controller, session affinity, host affinity, …

Rancher Server does not come with a complex installation; it is packaged as a docker image and can be run as a container. The basic configuration of a node is a Docker CE daemon running on a Linux VM (a minimal install sketch follows the node list below); in my case, I picked Ubuntu 18.04 LTS (please refer to the node requirements in the Rancher doc). When I finished the first VM, I cloned it to create the other three, and here are two more tips if you are doing the same:

Finally, all four nodes were spun up on my 10-year-old PC (with an i5 and 24G RAM), and they are assigned the following resources.

  • master node (2 core, 4G RAM, Ubuntu 18.04 + Docker CE 18.09) x 1
  • worker node (2 core, 3G RAM, Ubuntu 18.04 + Docker CE 18.09) x 3
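
If you are building the base VM from scratch, the following is a minimal sketch of installing Docker CE on Ubuntu 18.04 from Docker's apt repository; the pinned 18.09 package version string is an assumption, so run apt-cache madison docker-ce to see the exact versions available to you.

# install prerequisites and add Docker's apt repository (Ubuntu 18.04)
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# install Docker CE 18.09 (the exact package string below is an assumption)
sudo apt-get update
sudo apt-get install -y docker-ce=5:18.09.9~3-0~ubuntu-bionic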

2. Start the Rancher server on the master node

sudo docker run \
-d --name=rancher-master \
--restart=unless-stopped \
-p 81:80 -p 444:443 \
rancher/rancher

The above command starts the Rancher server container, and I run it on the master node. By default, the nginx ingress controller is embedded in the worker nodes and binds to ports 80 and 443, which is why I published the Rancher server on ports 81 and 444 (or any other free ports) to avoid a port conflict.
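
To verify the container came up, the standard docker commands below can be used (the container name matches the --name flag above):

# check the Rancher server container is running, then follow its startup logs
sudo docker ps --filter name=rancher-master
sudo docker logs -f rancher-master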

3. Complete the Rancher server initial setup

First, launch the Rancher server console in a browser using the master node IP address and port 444, and it will ask you to set the admin password.

Next, confirm the URL that the worker nodes will use to reach the Rancher server; I simply used the master node IP address as the URL. After completing the initial setup, the Rancher server was ready to add a new cluster.
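
As a quick sanity check before registering any nodes, you can confirm the URL is reachable from a worker node; the IP address below is a placeholder for my master node, and -k skips the self-signed certificate check.

# from a worker node, confirm the Rancher server answers on the published port
curl -k -I https://192.168.1.10:444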

4. Create a new k8s cluster and master node

Since I want to run the k8s cluster on my local VMs rather than in the cloud, pick the "From my own existing nodes" option at the top, and select the "None" option for the Cloud provider for the new cluster.

Simply copy and run the docker command on the Ubuntu VM to spin up the master node. A master node must have at least the "etcd" and "Control Plane" roles; if you plan to create a single-node cluster, pick all three roles so that the generated command includes them.
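
The actual command is generated by the Rancher console and already contains the registration token for your cluster, so always copy it from the console; the lines below are only a sketch of its rough shape, with a placeholder server URL, token, checksum, and agent version.

# rough shape of the generated node registration command (placeholders only)
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.2.8 \
  --server https://192.168.1.10:444 \
  --token <registration-token> \
  --ca-checksum <ca-checksum> \
  --etcd --controlplane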

After running the docker command, the new node will be reflected in the Rancher server console. It takes a while to provision the node, and once it is ready the status will change to Active.
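
While the node is provisioning, you can also watch the Kubernetes components come up as containers on the VM itself:

# list the containers Rancher starts on the node (etcd, kube-apiserver, kubelet, ...)
sudo docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'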

5. Create worker nodes

For worker nodes, we only have to pick the "Worker" role in Node Options, then copy and run the docker command on the three worker-node Ubuntu VMs.

Finally, I got my local multi-node cluster ready on my PC.

6. Install kubectl tool to manage the new k8s cluster

The Kubernetes version of the new cluster is v1.14.6 (you can find it in the above screenshot), so it is better to align the kubectl tool version with the cluster. Run the following commands on the master node to install a specific version:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.6/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
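
A quick check that the binary is on the PATH and reports the expected version:

# verify the installed client version
kubectl version --client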

The kubectl tool requires a kubeconfig file to connect to the cluster, and the kubeconfig file for the new cluster can be found in the Rancher server console.

Copy the above kubeconfig content and save it as "~/.kube/config", and the kubectl tool should be able to get the cluster info.
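
As a minimal sketch, assuming the kubeconfig content is copied from the console, the commands below put it in place and run a couple of read-only checks:

# save the kubeconfig copied from the Rancher server console
mkdir -p ~/.kube
vi ~/.kube/config        # paste the kubeconfig content here

# kubectl should now be able to reach the cluster
kubectl cluster-info
kubectl get nodes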
