k3s+kubeedge (1) Code Review/Debugging Environment Setup

KubeEdge is a CNCF open source project that extends Kubernetes' container orchestration and management capabilities to the edge. Along with Kubernetes, it provides core infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge. In this series of posts, we'll see how to deploy KubeEdge on our existing k3s cluster and set up a code review environment where we can set breakpoints and step through its functionality. This post focuses on the latter part.

For the purpose of reading code and, more importantly, understanding and fixing errors, it's better to set up a debugging environment first. In our deployment, we have one PC server as the master host running the cloud part, and two Raspberry Pi 3 boards as worker nodes running the edge part. The main components of KubeEdge's cloud and edge parts are shown in the diagram below:

image

This post demonstrates how to debug the "cloud core" from within the container at deployment time. The procedure for the edge core is similar. You may also build and debug the code directly from source, but I personally prefer the development environment to be close to the deployment (i.e., in a Docker/container execution environment).

Note: In earlier releases of KubeEdge (up to and including v1.0.0), the "cloud core" is named "edgecontroller" in the code. Since we adopt KubeEdge v1.0.0 in this post, we will use "edgecontroller" to refer to the "cloud core" throughout.

The following steps assume you already have a successfully deployed and running k3s (or k8s) cluster. Although the master (API server) is required, the worker nodes are not.
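
A quick sanity check before proceeding (adjust to plain kubectl if you run k8s instead of k3s):

k3s kubectl get nodes
# The master should report Ready; worker/edge nodes are not needed for this post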

Step 1. Preparation

First, download the source to the master PC and check out version 1.0.0 for later use:

mkdir -p $GOPATH/src/github.com/kubeedge
cd $GOPATH/src/github.com/kubeedge
git clone git@github.com:kubeedge/kubeedge.git
# If you only want to compile quickly without using go mod, please set GO111MODULE=off (e.g. export GO111MODULE=off)
cd kubeedge

# Check out version 1.0.0
git checkout v1.0.0 -b dev-v1.0.0
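
As a quick sanity check that the tag was checked out correctly:

git describe --tags   # should print v1.0.0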

I've run into many issues building versions newer than 1.0.0; feel free to try them in your own environment and let me know if you succeed.

Step 2. Try to build the cloud part

Following the instructions from the KubeEdge official documentation, let's try to build the cloud image.

make cloudimage

If everything works as expected, you will see a message like this:

...
Successfully built 5f31402ab1ee
Successfully tagged kubeedge/edgecontroller:v1.0.0

However, if you take that image and deploy it on the cloud server, you may run into lots of trouble. In my next post, I'll walk you through several pitfalls I encountered while deploying it; for now, let's continue investigating how to connect the running container to our IDE for debugging and code review.

Step 3. Rebuild the cloudimage

For development and code review, I continue to use the GoLand IDE (2019.3). The following debugging strategy is mainly inspired by this post.

First, locate the Dockerfile under the build/cloud path and replace its content with the following:

FROM golang:1.12.1-alpine3.9 AS builder

COPY . /go/src/github.com/kubeedge/kubeedge

# RUN CGO_ENABLED=0 go build -v -o /usr/local/bin/edgecontroller -ldflags="-w -s" \
# github.com/kubeedge/kubeedge/cloud/cmd

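# Build without optimizations or inlining (-N -l) so Delve can map execution back to source lines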
RUN CGO_ENABLED=0 go build -gcflags "all=-N -l" -v -o /usr/local/bin/edgecontroller \
github.com/kubeedge/kubeedge/cloud/cmd

# Compile Delve
RUN apk add --no-cache git
RUN go get github.com/derekparker/delve/cmd/dlv

FROM alpine:3.9

# For debug
EXPOSE 2345

ENV GOARCHAIUS_CONFIG_PATH /etc/kubeedge/cloud

# Allow delve to run on Alpine based containers.
RUN apk add --no-cache libc6-compat

VOLUME ["/etc/kubeedge/certs", "/etc/kubeedge/cloud/conf"]

COPY --from=builder /usr/local/bin/edgecontroller /usr/local/bin/edgecontroller
COPY --from=builder /go/bin/dlv /

ENTRYPOINT ["/dlv", "--listen=:2345", "--headless=true", "--api-version=2", "exec", "/usr/local/bin/edgecontroller"]

Then rebuild the cloudimage from the project root and push the new image to your Docker Hub account. If you prefer to skip this step, feel free to grab mine from here: r5by/kubeedge_edgecontroller_debug:v1.0.0.

# cd to your project root and rebuild the cloud image
make cloudimage

# log in to your docker account and push the image
docker login -u <user_name>
docker tag kubeedge/edgecontroller:v1.0.0 <user_name>/kubeedge_edgecontroller_debug:v1.0.0
docker push <user_name>/kubeedge_edgecontroller_debug:v1.0.0
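
Optionally, you can smoke-test the debug image locally before deploying it. This is only a sketch: with no config or certs mounted the edgecontroller itself won't get far, but you should at least see Delve's headless server come up:

# Run the debug image locally and watch for the Delve banner
docker run --rm -p 2345:2345 <user_name>/kubeedge_edgecontroller_debug:v1.0.0
# Expected: API server listening at: [::]:2345
# Stop it with Ctrl-C (or docker stop) when done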

Step 4. Connect with the debugger

The final step is to connect our IDE's debugging tool to the container after deployment. First cd into build/cloud, then modify 07-deployment.yaml as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kubeedge
    kubeedge: edgecontroller
  name: edgecontroller
  namespace: kubeedge
spec:
  selector:
    matchLabels:
      k8s-app: kubeedge
      kubeedge: edgecontroller
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/edgecontroller: unconfined
      labels:
        k8s-app: kubeedge
        kubeedge: edgecontroller
    spec:
      initContainers:
      - name: kubeconfig
        image: alpine:3.9
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubeedge/cloud
        args:
        - /bin/sh
        - -c
        - |
          apk --update add --no-cache coreutils && cat | tee /etc/kubeedge/cloud/kubeconfig.yaml <<EOF
          apiVersion: v1
          kind: Config
          clusters:
          - name: kubeedge
            cluster:
              certificate-authority-data: $(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | base64 -w 0)
          users:
          - name: kubeedge
            user:
              token: $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
          contexts:
          - name: kubeedge
            context:
              cluster: kubeedge
              user: kubeedge
          current-context: kubeedge
          EOF
      containers:
      - name: edgecontroller
        image: r5by/kubeedge_edgecontroller_debug:v1.0.0
        securityContext:
          capabilities:
            add:
            - SYS_PTRACE
        imagePullPolicy: Always
        ports:
        - containerPort: 10000
          name: cloudhub
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 512Mi
        volumeMounts:
        - name: conf
          mountPath: /etc/kubeedge/cloud/conf
        - name: certs
          mountPath: /etc/kubeedge/certs
        - name: kubeconfig
          mountPath: /etc/kubeedge/cloud
      restartPolicy: Always
      serviceAccount: edgecontroller
      serviceAccountName: edgecontroller
      volumes:
      - name: conf
        configMap:
          name: edgecontroller
      - name: certs
        secret:
          secretName: edgecontroller
      - name: kubeconfig
        emptyDir: {}

Then create an 08-service.yaml file (if it does not already exist) and paste the following content into it:

apiVersion: v1
kind: Service
metadata:
  name: edgecontroller
  namespace: kubeedge
  labels:
    k8s-app: kubeedge
    kubeedge: edgecontroller
spec:
  type: NodePort
  ports:
  - name: cloudhub
    port: 10000
  - name: debug
    port: 2345
    nodePort: 32345
  selector:
    k8s-app: kubeedge
    kubeedge: edgecontroller

Create a shell script like the following under build/cloud and execute it:

#!/bin/bash

for resource in ./*.yaml; do
    k3s kubectl create -f "$resource"
done

Note: To use this script, you need a properly configured k3s master and the generated secrets files. If you are not sure how to do that, please refer to my next post on deploying the KubeEdge cloud core, then come back and follow the rest.

Verify that the edgecontroller service is up with the following commands:

k3s kubectl get pod -n kubeedge
# Output:
# NAME READY STATUS RESTARTS AGE
# edgecontroller-85cdc9cf8f-2p8mj 1/1 Running 0 11s

# Check its log
k3s kubectl logs edgecontroller-85cdc9cf8f-2p8mj -n kubeedge -c edgecontroller
# Output:
# API server listening at: [::]:2345
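
You can also confirm that Delve's port is exposed through the NodePort service:

k3s kubectl get svc edgecontroller -n kubeedge
# The PORT(S) column should include 2345:32345/TCP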

Now configure your IDE as follows:

image

Click the debug button and start walking through the code:

image
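
If you prefer to verify the connection from a terminal first, the same endpoint can also be reached with the Delve CLI. This is only a sketch; it assumes dlv is installed on your workstation and <master_ip> stands for your k3s master's address:

# Attach to the headless Delve server exposed via the NodePort
dlv connect <master_ip>:32345
# Then, at the (dlv) prompt, for example:
# (dlv) break main.main
# (dlv) continue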

Step 5. Troubleshooting

You will likely run into several errors before you can successfully connect your debugger to the container. If that happens, check the following tips to see whether they help you out:

1) The pod failed with a "CrashLoopBackOff" status or is stuck at "Initializing"

Check your CoreDNS service first; if it is not working, image pulls from the public registry will fail or hang. Use the following commands to check:

# 'kc' alias 'k3s kubectl'
# Verify your coredns is up&running
kc get pod --all-namespaces

# Test your dns resolver
kc run -i --tty busybox --image=busybox --restart=Never -- sh
# Within the busybox
vi /etc/resolv.conf #nameserver, etc.
nslookup www.google.com #nslookup test
# You should see the server and address listed; if not, exit busybox and run the following on the host:
# 1) Verify each chain of the iptables (INPUT/OUTPUT/FORWARD) has the "ACCEPT" policy
iptables -L | grep INPUT # check INPUT, OUTPUT and FORWARD respectively

# 2) If the check above passes, flush the iptables with the following commands and see if the errors are gone
iptables --flush
iptables -t nat --flush

# If solved, delete the busybox
kc delete pod busybox
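
If CoreDNS itself is the pod that keeps failing, one way to recover is to delete it and let its Deployment recreate it. This is a sketch, assuming k3s's CoreDNS pods carry the default k8s-app=kube-dns label:

# Recreate the CoreDNS pod and wait for it to become Running again
kc -n kube-system delete pod -l k8s-app=kube-dns
kc -n kube-system get pod -w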

2) If you see more errors, please refer to my next post on setting up the KubeEdge cloud part to see whether they go away. Some errors may be caused by an incorrect configuration or running environment.

ㄟ(●′ω`●)ㄏ