k3s Basics

This post covers my first impressions of k3s.

Motivation

In order to work on the k3s project, one first needs to learn how to use it. This post summarizes the main steps of deploying k3s in our local lab environment and on AWS EC2.

Local k3s cluster Setup

The basic hardware setup for this demo is the same as in the last post's k8s demo (3 VMs). Feel free to apply the following steps with the latest k3s release on your own Raspbian nodes; it should work there as well, as we also tested it. However, the latest release at the time of writing (v0.9.1) has a CA-related issue that prevents an agent from joining the master node when a full CA-enabled Kubernetes cluster has previously been configured on these nodes, so an earlier release (v0.2.0) is used in this section.

Step 1: Stop the existing k8s services

On master node:

service kube-calico stop
service kube-scheduler stop
service kube-controller-manager stop
service kube-apiserver stop
service etcd stop && rm -fr /var/lib/etcd/*

On worker nodes:

service kubelet stop && rm -fr /var/lib/kubelet/*
service kube-proxy stop && rm -fr /var/lib/kube-proxy/*
service kube-calico stop

Here I simply turn k8s off to avoid potential conflicts between the k8s and k3s deployments. However, this does not fix the CA issue that prevents the latest k3s releases (>v0.2.0) from working with a previously installed k8s environment.
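With the old services stopped, a quick sanity check that the old control-plane ports are actually free can save debugging later. A minimal sketch using the shell's `/dev/tcp` device (a bash feature; `netstat -nltp` works just as well), with the port list matching the services stopped above:

```shell
# Probe the old k8s control-plane ports: apiserver (6443), etcd (2379),
# scheduler (10251), controller-manager (10252).
# A successful connect means something is still listening there.
for port in 6443 2379 10251 10252; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is still in use"
  else
    echo "port $port is free"
  fi
done
```

If any port reports "still in use", find the owner with `netstat -nltp` before installing k3s.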

Step 2: Deploy k3s on each node

Installing k3s is quite simple; we will use the following commands to install and start k3s on the master and the two worker machines separately.

On master node:

# Install k3s with Rancher's script
# The script downloads the binaries, pulls the (containerd) images, and enables/starts the k3s-related systemd services
mkdir k3s

# With the INSTALL_K3S_EXEC="--disable-agent" option, one may launch the k3s server on a node without an agent (which may cause some issues with the current release of k3s)
curl -sfL https://get.k3s.io | INSTALL_K3S_BIN_DIR="/home/main/k3s" INSTALL_K3S_VERSION="v0.2.0" sh -

# Verify the k3s system services are listening on their ports
netstat -nltp

# Verify the k3s cluster status
systemctl status k3s

On the master node, one should see the k3s services listening on their ports: k3s: 6443/6444, 10251/10252.

After installation, the k3s binary folder looks like this:

.
├── crictl -> k3s
├── k3s
├── k3s-killall.sh
└── k3s-uninstall.sh

Without the INSTALL_K3S_BIN_DIR option, k3s is installed in /usr/local/bin.
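Since a custom INSTALL_K3S_BIN_DIR is used here, it helps to put that folder on the PATH so `k3s`, `crictl`, and the helper scripts resolve without full paths. A small sketch, assuming the /home/main/k3s directory from the install command above:

```shell
# Prepend the custom k3s bin dir (from INSTALL_K3S_BIN_DIR above) to PATH
export PATH="/home/main/k3s:$PATH"

# The first PATH entry should now be the k3s folder
echo "${PATH%%:*}"
```

Add the export line to ~/.bashrc (or equivalent) to make it persistent across logins.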

Now, in order to join new workers to this master node, one first needs to grab the join token on the (master) node:

# Example output: K10af00f60b1fa01b0a413e78922fd79efad2528bc4b0d19a357b5e2650d84252c5::node:f06ab2ff7068846d6b18b342f5f6a1bb
cat /var/lib/rancher/k3s/server/node-token
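The token has a recognizable shape: a cluster part and a node credential separated by `::`. As far as I can tell, the leading `K10…` part encodes a hash of the server's CA certificate, which is also why a stale CA from a previous cluster install can break the join step, as noted earlier. A small sketch splitting the sample token from the output above (sample values, not a real secret):

```shell
# Sample token in the format printed above
token='K10af00f60b1fa01b0a413e78922fd79efad2528bc4b0d19a357b5e2650d84252c5::node:f06ab2ff7068846d6b18b342f5f6a1bb'

# Everything before "::" identifies the cluster/CA; the rest is the node credential
cluster_part=${token%%::*}
node_part=${token#*::}

echo "cluster part: $cluster_part"
echo "node part:    $node_part"
```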

On worker nodes:

mkdir k3s

# download & activate the k3s-agent service
curl -sfL https://get.k3s.io | INSTALL_K3S_BIN_DIR="/home/main/k3s" INSTALL_K3S_VERSION="v0.2.0" K3S_TOKEN="K10af00f60b1fa01b0a413e78922fd79efad2528bc4b0d19a357b5e2650d84252c5::node:f06ab2ff7068846d6b18b342f5f6a1bb" K3S_URL="https://192.168.56.103:6443" sh -

# check the service status
systemctl status k3s-agent

After installation, the k3s binary folder on a worker node looks like this:

.
├── crictl -> k3s
├── k3s
├── k3s-agent-uninstall.sh
└── k3s-killall.sh

On the worker nodes, one should see the services and ports: k3s: 42323, containerd: 10010.

Now on the master node, one should be able to verify the newly added cluster resources:

## Now verify the newly added worker in the cluster
k3s kubectl get nodes

## Do some deployment here...

## Clean-up
# Kill k3s services after inspection (on each node)
k3s-killall.sh

# Uninstall k3s on master
k3s-uninstall.sh
# Uninstall k3s on worker nodes
k3s-agent-uninstall.sh

Install k3s manually on EC2 instances

This section explains how to manually download the k3s binaries from Rancher's official releases and bring the cluster up. Here we use the AWS EC2 service with the following configuration:

| Instance OS | Arch | IP (internal) | Instance Type | vCPU | Memory | Node Role |
|---|---|---|---|---|---|---|
| Ubuntu Server 18.04 LTS (HVM), SSD Volume Type | amd64 (x86_64) | 172.31.46.70 | t2.medium | 2 | 4 GiB | master |
| Amazon Linux 2 AMI (HVM), SSD Volume Type | arm64 (aarch64) | 172.31.36.129 | a1.medium | 1 | 2 GiB | worker |

Remember to allow traffic between the instances in your security group settings; at minimum the worker must reach the master on TCP port 6443. Allowing all traffic from anywhere is the quickest option for this demo, but don't leave it that way beyond a throwaway setup.

First, let us SSH into each running instance and prepare the k3s executable:

# Change to your own pem key file and instance address:
ssh -i ~/.ssh/your-key.pem ubuntu@ec2-x-x-x-x.region.compute.amazonaws.com

# Prepare download folder
mkdir k3s
cd k3s

## Download the desired release version from: https://github.com/rancher/k3s/releases?after=v0.10.0-alpha1
# On master (x86_64)
wget https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
# On worker (arm64)
wget https://github.com/rancher/k3s/releases/download/v0.9.1/k3s-arm64
mv k3s-arm64 k3s

# Add exec mode
chmod +x k3s
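The two wget commands above differ only in the release asset name, so the choice can be scripted from the CPU architecture. A sketch of the mapping; `k3s` and `k3s-arm64` are the assets used above, while `k3s-armhf` for 32-bit ARM is an assumption based on the release's naming scheme:

```shell
# Map `uname -m` output to the matching k3s release asset name
arch=$(uname -m)
case "$arch" in
  x86_64)  asset="k3s" ;;        # e.g. the t2.medium master
  aarch64) asset="k3s-arm64" ;;  # e.g. the a1.medium worker
  armv7l)  asset="k3s-armhf" ;;  # 32-bit ARM boards such as Raspbian (assumed asset name)
  *)       echo "unsupported arch: $arch" >&2; exit 1 ;;
esac

echo "https://github.com/rancher/k3s/releases/download/v0.9.1/$asset"
```

Feeding the echoed URL to wget gives the same result as the per-node commands above.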

On master node:

# Start k3s server
./k3s server > server.log 2>&1 &

# Get the token; copy and paste the output to your worker:
echo "export node_token=$(cat /var/lib/rancher/k3s/server/node-token)"

On worker node:

# copy-and-paste token from master here
export node_token=...

# Start agent, pass server url and token
./k3s agent --server https://172.31.46.70:6443 --token "$node_token" >& k3s-agent.log &

After a little while, check the cluster info on the master node with the k3s kubectl commands described in the last section.

To shut down the cluster after inspection, simply kill the k3s (or k3s-agent) PID on each node.
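Since the server and agent were started here as plain background jobs rather than systemd services, stopping them is just a matter of finding the PID. A minimal sketch of the pattern, using `sleep` as a stand-in for `./k3s`:

```shell
# Stand-in for "./k3s server > server.log 2>&1 &" from the master section
sleep 300 > /dev/null 2>&1 &
pid=$!

# On a real node the PID could instead be found with: pgrep -f 'k3s server'
kill "$pid"
wait "$pid" 2>/dev/null

if kill -0 "$pid" 2>/dev/null; then
  echo "still running"
else
  echo "stopped"
fi
```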

Conclusion

Rancher's k3s is much smaller and easier to deploy than full Kubernetes, and it requires less effort and fewer resources to set up. In the next post, we will discuss how to set up a development environment on k3s and dive deeper into its source code.

ㄟ(●′ω`●)ㄏ