k3s+kubeedge (3) Deploy Edge Core on Raspberry Pi 3

This post finalizes the setup of KubeEdge on a k3s cluster. The edge part of KubeEdge connects to the API server through the CloudHub in the cloud core (i.e. the “edgecontroller”). We will deploy edgecore on two Raspberry Pi 3 nodes.

Step 1. Check the current environment

Before you follow the rest of this post, please make sure your k3s master and the KubeEdge edgecontroller service are up and running:

# Input: On master node
## Verify the k3s master
kc get node
# Output:
# aces-diamonds-ace.localdomain Ready master 116d v1.16.2-k3s.1

## Verify the edgecontroller
kce get svc
# Output:
# NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                          AGE
# edgecontroller   NodePort   10.43.217.231   <none>        10000:30267/TCP,2345:32345/TCP   38m

SSH to your Raspberry Pi and stop the k3s workers (if you have them running on those edge nodes):

# Input: On Rasp Pi
./k3s-killall.sh

Step 2. Cross-compile: evil or good

The next step is to build and save the edgecore image. This part is kinda messy in the KubeEdge project (a little rant here). I tried to follow the guide in their official documentation here but failed. If you confront as many problems as I did when following the official document, I suggest trying the solution I describe below.

Cross-compiling is a feature the KubeEdge project offers out of the box. However, it doesn't work as it's supposed to, at least in my case. First, let's take a quick look at its README file and I'll explain what should happen:

cd build/edge
vi README.md

This README file is probably the second worst set of README instructions you can find on all of GitHub (the worst lives in my own repo). After reading it, we know little about how the scripts are meant to be used. In fact, the script uses docker-compose to set up both the build and deployment environments in a mixture, and even though it provides an only_run_edge option, the README mentions nothing about it. After a lot of troubleshooting, I gave up on the original docker-compose method and adopted the following approach.

First, since I don't want to use the Rasp Pi to build the project, I use the following commands to cross-build the armv7 Docker image on my x86_64 master node. The attached run_daemon.sh script provides a way of using QEMU to achieve this: essentially, it emulates an ARM-based Docker environment (actually running on the amd64 host) to build your Dockerfile into images for ARM. You will probably confront the problems mentioned in #15038, #1068 and #7160 if you intend to build directly on the Rasp Pi. But before you jump into those discussions, you can try my modified Dockerfile below, replacing the original one found under build/edge/, to save some time:

ARG BUILD_FROM=golang:1.12-alpine3.10
ARG RUN_FROM=docker:dind

FROM ${BUILD_FROM} AS builder

ARG QEMU_ARCH=x86_64
COPY ./build/edge/tmp/qemu-${QEMU_ARCH}-static /usr/bin/
COPY . /go/src/github.com/kubeedge/kubeedge

RUN apk --no-cache update && \
    apk --no-cache upgrade && \
    apk add libc-dev && \
    apk add binutils-gold && \
    apk --no-cache add build-base linux-headers sqlite-dev && \
    CGO_ENABLED=1 go build -v -o /usr/local/bin/edge_core -ldflags="-w -s -extldflags -static" \
    /go/src/github.com/kubeedge/kubeedge/edge/cmd

FROM ${RUN_FROM}

LABEL maintainer="zhanghongtong <zhanghongtong@foxmail.com>"

COPY --from=builder /usr/bin/qemu* /usr/bin/

ENV GOARCHAIUS_CONFIG_PATH /etc/kubeedge/edge
ENV database.source /var/lib/kubeedge/edge.db

VOLUME ["/etc/kubeedge/certs", "/var/lib/edged", "/var/lib/kubeedge", "/var/run/docker.sock"]

COPY --from=builder /usr/local/bin/edge_core /usr/local/bin/edge_core
COPY --from=builder /go/src/github.com/kubeedge/kubeedge/edge/conf /etc/kubeedge/edge/conf

ENTRYPOINT ["edge_core"]

The magic line to notice is apk add binutils-gold; refer to the discussions here for more details.

With the Dockerfile modified, issue the following commands to build edgecore for your own ARM hosts. Or, if you are also using a Raspberry Pi 3, feel free to grab my pre-built image from here: r5by/kubeedge_edgecore_armv7:v1.0.0.

# build the edgecore for Rasp Pi 3 (arm v7)
cd build/edge
./run_daemon.sh set arch=arm32v7 qemu_arch=arm
./run_daemon.sh build

NOTE: If you have a Rasp Pi 4 or later, you may need to target ARMv8; use ./run_daemon.sh set arch=arm64v8 qemu_arch=aarch64 instead. If these parameters are not set, the image is built for x86_64 by default. The configured parameters are then written into the .env file under that path.
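To keep the board/parameter pairs straight, the mapping can be sketched as a tiny helper. This is my own sketch, not part of the KubeEdge scripts: the function name pick_arch is hypothetical, and the input is the machine string `uname -m` would report on the target.

```shell
#!/bin/sh
# Hypothetical helper (not from KubeEdge): map a target machine string,
# as reported by `uname -m` on the device, to the run_daemon.sh parameters.
pick_arch() {
  case "$1" in
    armv7l)  echo "arch=arm32v7 qemu_arch=arm" ;;      # Rasp Pi 3
    aarch64) echo "arch=arm64v8 qemu_arch=aarch64" ;;  # Rasp Pi 4 or later
    x86_64)  echo "default (no 'set' step needed)" ;;  # the build default
    *)       echo "unsupported: $1" >&2; return 1 ;;
  esac
}

pick_arch armv7l   # -> arch=arm32v7 qemu_arch=arm
```

Run it with `pick_arch "$(uname -m)"` on the device you are targeting, then pass the printed pair to ./run_daemon.sh set.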

If you are not sure whether your built image is for amd64 or ARM, simply use docker inspect <image_id> and check its Architecture field. Also, if you launch an image of the wrong architecture in Docker, you will see error messages like “standard_init_linux.go:xxx: exec user process caused "exec format error"”.

Step 3. Launch edgecore

After the edgecore image is prepared, we can launch edgecore from the Rasp Pi. First, the certificates and configuration files also need to be available on the edge nodes.

# On master:
tar czvf kubecert.tar /etc/kubeedge/
scp kubecert.tar <your_pi_node>:~

# On worker (i.e. Pi):
cd /
tar zxvf ~/kubecert.tar

Remember to copy the run_daemon.sh file as well. Then launch the edgecore with the following command:

./run_daemon.sh only_run_edge cloudhub=<your_cloud_hub_ip>:<port> edgename=edge-node-pi-01 image="r5by/kubeedge_edgecore_armv7:v1.0.0"

# Verify your cloudhub port first.

Obtain the port number from the master node via the command k3s kubectl get svc -n kubeedge, as we introduced in the previous post (in the example output from Step 1, the NodePort mapped to CloudHub's port 10000 is 30267).
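If you'd rather script that lookup, the NodePort can be pulled out of the PORT(S) column. This is just a sketch: the helper name cloudhub_nodeport is mine, and it assumes CloudHub listens on internal port 10000 as in the Step 1 output.

```shell
#!/bin/sh
# Hypothetical helper (not from KubeEdge): given a kubectl PORT(S) value
# such as "10000:30267/TCP,2345:32345/TCP", print the NodePort that is
# mapped to CloudHub's internal port 10000.
cloudhub_nodeport() {
  echo "$1" | tr ',' '\n' | sed -n 's#^10000:\([0-9]*\)/TCP$#\1#p'
}

# Sample PORT(S) value copied from the edgecontroller output in Step 1:
cloudhub_nodeport "10000:30267/TCP,2345:32345/TCP"   # -> 30267
```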

If you have built your edgecore image for the correct architecture and everything works, you should see it running in Docker on your edge node. Switch back to your cloud master and create a new node.yaml file to let the master detect this newly added edge node.

# cd build/edge; vi node.yaml
apiVersion: v1
kind: Node
metadata:
  name: edge-node-pi-01
  labels:
    name: edge-node-pi-01
    node-role.kubernetes.io/edge: ""

Save the YAML file and apply it with the command kc apply -f node.yaml. Deploy edgecore on the second Rasp Pi node similarly, and if you have done everything correctly, you should see your k3s+kubeedge cluster up and running:

kc get node

# Output:
# aces-diamonds-ace.localdomain   Ready   master   116d   v1.16.2-k3s.1
# edge-node-pi-01                 Ready   edge     20m    v1.10.9-kubeedge-v1.0.0
# edge-node-pi-02                 Ready   edge     4s     v1.10.9-kubeedge-v1.0.0
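For completeness, the node manifest for the second Pi mirrors the first one; the file name node2.yaml and the node name edge-node-pi-02 are assumptions of mine that simply match the output above.

```yaml
# node2.yaml (hypothetical file name) -- apply with: kc apply -f node2.yaml
apiVersion: v1
kind: Node
metadata:
  name: edge-node-pi-02
  labels:
    name: edge-node-pi-02
    node-role.kubernetes.io/edge: ""
```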

Step 4. Summary

In this series of posts, we have shown how to deploy KubeEdge on k3s. In the next post, I'll jump into some interesting examples provided by the KubeEdge open-source project to explore its wide range of uses. Please leave a comment below to let me know if you run into any trouble when following my tutorials on deploying k3s+kubeedge for your own use case. Peace!

ㄟ(●′ω`●)ㄏ