This post describes my first impressions of k3s.
Motivation
In order to work on the k3s project, one first needs to learn how to use it. This post summarizes the main steps of deploying k3s in our local lab environment and on AWS EC2.
Local k3s cluster setup
The basic hardware setup for this demo is the same as in the last post's k8s demo (3 VMs). Feel free to apply the following steps with the latest k3s release on your own Raspbian nodes; it should work, as we have tested it there as well. However, the latest release at the time of writing (v0.9.1) has a CA-related issue that prevents an agent from joining the master node if a full CA-enabled Kubernetes cluster was previously configured on these nodes, so an earlier release (v0.2.0) is used in this section.
Step 1: Stop the previously installed k8s-related services (optional)
On the master node:

```bash
service kube-calico stop
```
On the worker nodes:

```bash
service kubelet stop && rm -fr /var/lib/kubelet/*
```
Here I simply turn k8s off to avoid potential conflicts between the k8s and k3s deployments. However, this does not resolve the CA issue that prevents later k3s releases (>v0.2.0) from working in a previously installed k8s environment.
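As an optional sanity check (the process names below simply match the services stopped above; adjust them to however your k8s cluster was installed), you can confirm the old components are really gone before installing k3s:

```bash
# Confirm no leftover k8s components are still running
ps aux | grep -E "kubelet|calico" | grep -v grep

# Confirm the usual k8s ports are no longer held
sudo ss -tlnp | grep -E ":6443|:10250" || echo "k8s ports are free"
```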
Step 2: Deploy k3s on each node
Installing k3s is quite simple; we will use the following commands to install and start k3s on the master and the two worker machines separately.
On the master node:

```bash
# Install k3s with rancher script, pinned to the release used in this post
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.2.0 sh -
```
After the installation, one should see the k3s service on the master node listening on ports 6443/6444 and 10251/10252.
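To double-check which ports the server actually opened, a quick look with `ss` works (`netstat -tlnp` is an equivalent alternative):

```bash
# List the TCP ports the k3s server process is listening on
sudo ss -tlnp | grep k3s
```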
After installation, the k3s binary folder (/usr/local/bin by default) looks roughly like this; kubectl and crictl are symlinks that the install script points at the k3s binary:

```
.
├── crictl -> k3s
├── k3s
└── kubectl -> k3s
```
Without the INSTALL_K3S_BIN_DIR option, k3s is installed at /usr/local/bin.
Now, in order to join new workers to this master, one first needs to grab the node token on the master node:

```bash
cat /var/lib/rancher/k3s/server/node-token
# Example output: K10af00f60b1fa01b0a413e78922fd79efad2528bc4b0d19a357b5e2650d84252c5::node:f06ab2ff7068846d6b18b342f5f6a1bb
```
On the worker nodes (substitute your own master IP and the node token printed above):

```bash
mkdir k3s
# Install the agent into the new directory and join the master
curl -sfL https://get.k3s.io | INSTALL_K3S_BIN_DIR=$HOME/k3s INSTALL_K3S_VERSION=v0.2.0 \
    K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<node-token> sh -
```
The k3s agent installation on a worker node (here the k3s directory created above) then looks similar:

```
.
├── crictl -> k3s
├── k3s
└── kubectl -> k3s
```
On a worker node, the listening services and ports are: k3s: 42323, containerd: 10010.
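If you installed the agent via the script as above, it is registered as a systemd service (named k3s-agent when K3S_URL is set), so a quick way to inspect it is:

```bash
# Check the k3s agent service and follow its logs
sudo systemctl status k3s-agent
sudo journalctl -u k3s-agent -f
```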
Now, on the master node, one should be able to verify the newly added cluster resources:

```bash
## Now verify the newly added worker in the cluster
sudo k3s kubectl get nodes
```
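As an extra smoke test (the nginx pod here is just an arbitrary example), schedule a small workload and confirm it lands on one of the new workers:

```bash
# Run a throwaway nginx pod and see which node it is scheduled on
sudo k3s kubectl run nginx --image=nginx --restart=Never
sudo k3s kubectl get pods -o wide

# Clean up afterwards
sudo k3s kubectl delete pod nginx
```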
Install k3s manually on EC2 instances
This section explains how to manually download the k3s binaries from Rancher's official releases and bring the cluster up. Here we use the AWS EC2 service with the following configuration:
| Instance OS | Arch | IP (internal) | Instance Type | vCPU | Memory | Node Role |
|---|---|---|---|---|---|---|
| Ubuntu Server 18.04 LTS (HVM), SSD Volume Type | amd64 (x86_64) | 172.31.46.70 | t2.medium | 2 | 4 GiB | master |
| Amazon Linux 2 AMI (HVM), SSD Volume Type | arm64 (aarch64) | 172.31.36.129 | a1.medium | 1 | 2 GiB | worker |
Remember to allow all traffic from anywhere in your security group settings (fine for a short-lived demo; for anything longer-lived, open only the ports you need).
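If you would rather not open everything, here is a minimal sketch with the AWS CLI that only opens the k3s API port used below; the security group ID sg-0123456789abcdef0 is a placeholder for your own:

```bash
# Allow inbound TCP 6443 (k3s API/registration port) from anywhere
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 6443 --cidr 0.0.0.0/0
```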
First, let us `ssh` to each running instance and prepare the k3s executable:

```bash
# Change to your own pem key file and instance address:
ssh -i /path/to/your-key.pem ubuntu@<master-public-dns>      # Ubuntu AMI user
ssh -i /path/to/your-key.pem ec2-user@<worker-public-dns>    # Amazon Linux AMI user

# On each instance, download the k3s binary matching its architecture
# (from the rancher/k3s GitHub releases, v0.9.1 here):
wget https://github.com/rancher/k3s/releases/download/v0.9.1/k3s         # amd64 master
wget https://github.com/rancher/k3s/releases/download/v0.9.1/k3s-arm64   # arm64 worker
chmod +x k3s*
```
On the master node:

```bash
# Start k3s server
sudo ./k3s server &

# Grab the node token for the worker to join
sudo cat /var/lib/rancher/k3s/server/node-token
```
On the worker node:

```bash
# copy-and-paste token from master here
sudo ./k3s-arm64 agent --server https://172.31.46.70:6443 --token <node-token> &
```
After a little while, check the cluster info on the master node with the `k3s kubectl` command described in the last section.
After inspection, simply kill the PIDs of the k3s server and agent processes to shut the cluster down.
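For example, assuming the processes were backgrounded as above:

```bash
# On the master: stop the k3s server
sudo pkill -f "k3s server"

# On the worker: stop the agent (the binary is named k3s-arm64 here)
sudo pkill -f "k3s.*agent"
```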
Conclusion
Rancher's k3s is much smaller and easier to deploy than full Kubernetes, and it requires less effort and fewer resources to set up. In the next post, we will discuss how to set up a development environment on k3s and dive deeper into its source code.