KubeEdge is a CNCF open source project that aims to extend the container orchestration capabilities of Kubernetes to the edge. Alongside Kubernetes, it provides core infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge. In this series of posts, we'll see how to deploy KubeEdge on our existing k3s cluster and set up a code review environment where we can set breakpoints and step through its functionality. This post focuses on the latter part.
For reading the code and, more importantly, understanding and fixing errors, it's best to set up a debugging environment first. In our deployment, we currently have one PC server as the master host running the cloud part, and two Raspberry Pi 3 boards as worker nodes running the edge part. The main components of KubeEdge's cloud and edge parts are shown in the structure pictured below:
This post demonstrates how to debug the "cloud core" from within the container as deployed. The procedure for the edge core is similar. You could also build and debug the code directly from source, but I personally prefer a development environment that is close to my deployment (i.e. a docker/container execution environment).
Note: In previous releases (< v1.0.0) of KubeEdge, the "cloud core" was named "edgecontroller" in the code; since we adopt KubeEdge v1.0.0, we will use "edgecontroller" to refer to the "cloud core" throughout this post.
The following steps assume you already have a successfully deployed and running k3s (or k8s) cluster. Although the master (API server) is required, the worker nodes are not.
Step 1. Preparation
First, download the source to the master PC and check out version 1.0.0 for later use:
```bash
mkdir -p $GOPATH/src/github.com/kubeedge
```
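The full sequence looks roughly like this (a minimal sketch, assuming the upstream kubeedge/kubeedge repository and its v1.0.0 tag):

```bash
mkdir -p $GOPATH/src/github.com/kubeedge
cd $GOPATH/src/github.com/kubeedge
# clone the upstream repository and pin it to the v1.0.0 tag
git clone https://github.com/kubeedge/kubeedge.git
cd kubeedge
git checkout v1.0.0
```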
I’ve found many issues building the code above version 1.0.0, please feel free to test on your own environment and let me know if you may succeed.
Step 2. Try to build the cloud part
Following the instructions from the official KubeEdge documentation, let's try to build the cloud image.
```bash
make cloudimage
```
If everything works as expected, you will see output like this:
```
...
```
However, if you take that image and deploy it on the cloud server, you may run into a lot of trouble. In my next post, I'll walk you through several pitfalls I encountered while trying to deploy it; for now, let's continue investigating how to connect the running container to our IDE for debugging and code review.
Step 3. Rebuild the cloudimage
For development and code review, I continue to use the GoLand IDE (2019.3). The following debugging strategy is mainly inspired by this post.
First, locate the Dockerfile under the build/cloud path and replace its content with the following:
```dockerfile
FROM golang:1.12.1-alpine3.9 AS builder
```
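A rough sketch of what the delve-enabled Dockerfile can look like is below. The go build package path and the apk packages are assumptions; keep whatever your original build/cloud/Dockerfile uses and apply only the delve-related changes (optimizations disabled, dlv installed and used as the entrypoint):

```dockerfile
FROM golang:1.12.1-alpine3.9 AS builder

COPY . /go/src/github.com/kubeedge/kubeedge

RUN apk --no-cache add build-base git && \
    # install delve so the binary can run under a headless debug server
    go get github.com/go-delve/delve/cmd/dlv && \
    # -N -l disables optimizations/inlining so breakpoints map to source lines
    CGO_ENABLED=1 go build -v -gcflags "all=-N -l" \
        -o /usr/local/bin/edgecontroller \
        github.com/kubeedge/kubeedge/cloud/edgecontroller   # assumed package path

FROM alpine:3.9

COPY --from=builder /usr/local/bin/edgecontroller /usr/local/bin/edgecontroller
COPY --from=builder /go/bin/dlv /usr/local/bin/dlv

# delve listens here; matched by the containerPort/NodePort used in Step 4
EXPOSE 40000

ENTRYPOINT ["dlv", "--listen=:40000", "--headless=true", "--api-version=2", \
            "--accept-multiclient", "exec", "/usr/local/bin/edgecontroller"]
```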
Then rebuild the cloudimage from the project root and push the new image to your Docker Hub. If you prefer to skip this step, feel free to grab mine from r5by/kubeedge_edgecontroller_debug:v1.0.0.
```bash
# cd to your project root and rebuild the cloud image
```
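The rebuild and push boil down to something like this (the local image name/tag produced by make cloudimage is an assumption here; check it with docker images and substitute your own registry):

```bash
# cd to your project root and rebuild the cloud image
make cloudimage

# tag the result and push it somewhere your cluster can pull from
docker tag kubeedge/edgecontroller:latest <your-registry>/kubeedge_edgecontroller_debug:v1.0.0
docker push <your-registry>/kubeedge_edgecontroller_debug:v1.0.0
```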
Step 4. Connect with the debugger
The final step is to connect our IDE debugging tool to the container after deployment. First cd into build/cloud, then modify your 07-deployment.yaml as follows:
```yaml
apiVersion: apps/v1
```
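A minimal sketch of the parts that matter for debugging follows; the labels are assumptions, and everything else in your original 07-deployment.yaml (service account, certificate volumes, environment, etc.) should stay as it is. The two changes that count are the debug image and the delve port:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgecontroller
  namespace: kubeedge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edgecontroller            # assumed label; keep your original labels
  template:
    metadata:
      labels:
        app: edgecontroller
    spec:
      containers:
        - name: edgecontroller
          # the delve-enabled image built in Step 3
          image: r5by/kubeedge_edgecontroller_debug:v1.0.0
          ports:
            - containerPort: 40000   # delve's headless listen port
          securityContext:
            capabilities:
              add: ["SYS_PTRACE"]    # delve needs ptrace to control the process
```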
Then create (if it does not already exist) a 08-service.yaml file and copy the following content into it:
```yaml
apiVersion: v1
```
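A sketch of a NodePort service that exposes delve outside the cluster is below; the service name and nodePort are my own choices, and the selector must match whatever pod labels you used in 07-deployment.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edgecontroller-debug
  namespace: kubeedge
spec:
  type: NodePort
  selector:
    app: edgecontroller        # must match the pod labels in 07-deployment.yaml
  ports:
    - name: delve
      port: 40000
      targetPort: 40000
      nodePort: 30400          # any free port in the 30000-32767 NodePort range
```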
Create a shell script as follows within build/cloud and execute it:
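The exact script depends on your environment; as an illustrative sketch, it simply applies the numbered manifests under build/cloud in order (assuming the certificate secrets referenced by the deployment already exist):

```bash
#!/bin/sh
# apply the numbered manifests (01-namespace.yaml ... 08-service.yaml) in order
set -e
for f in ./*.yaml; do
    k3s kubectl apply -f "$f"
done
```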
Note: To use this script, you need a properly configured k3s master and the secret files already generated. If you are not sure how to do so, please refer to my next post on how to deploy the KubeEdge cloud core, then come back and follow the rest.
Verify that the edgecontroller service is up with the following commands:
```bash
k3s kubectl get pod -n kubeedge
```
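It is also worth confirming that the debug NodePort service created from 08-service.yaml is present (the nodePort below is the one assumed in the earlier sketch):

```bash
# the pod should be Running, and the debug service should list nodePort 30400
k3s kubectl get svc -n kubeedge
```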
Now configure your IDE as follows:
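In GoLand this means adding a "Go Remote" run configuration pointing at the master node's IP and the NodePort chosen above (30400 in the sketch). If you have dlv installed locally, you can sanity-check the connection from a shell first:

```bash
# quick connectivity check before involving the IDE; 30400 is the NodePort
# assumed in the 08-service.yaml sketch, <master-node-ip> is your k3s master
dlv connect <master-node-ip>:30400
```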
Click the debug button and start walking through the code:
Step 5. Troubleshooting
It is likely that you will hit several errors before you successfully connect your debugger to the container. If that happens, check the following tips to see whether they help you out:
1) The pod failed with a "CrashLoopBackOff" status or is stuck at "Initializing"
Check your CoreDNS service first; it is likely not working, so pulls from the public registry fail or hang. Use the following commands to check:
```bash
# 'kc' alias 'k3s kubectl'
```
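A sketch of such a check, assuming CoreDNS runs in kube-system under the standard k8s-app=kube-dns label:

```bash
# 'kc' is an alias for 'k3s kubectl'
kc get pods -n kube-system -l k8s-app=kube-dns
kc logs -n kube-system -l k8s-app=kube-dns
```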
2) If you see other errors, please refer to my next post on setting up the KubeEdge cloud part to see whether they go away. Some errors may be caused by an incorrect configuration or running environment.