Kubeedge Examples (Temperature Sensor Demo)

This post explains the setup of our kubeedge temperature demo in the lab, which can be found here. I've spent too much time on this due to the lack of documentation, so I've documented our steps in detail here in case someone finds them useful.

Pre-requisites

To run this demo, a working deployment of Kubeedge on a K8s cluster is required. If you haven't met this prerequisite, please refer to my previous posts on "k3s+kubeedge setups" for deploying k3s (v0.10.2) and kubeedge (v1.0.0). Alternatively, stay with me in this post to see how we set up kubeedge release 1.1.0, starting from here.

To check whether your kubeedge cluster is functioning correctly, simply do:

# $KUBEEDGE={your-path-to-kubeedge}; `kc` below is my alias for kubectl
cd $KUBEEDGE/build

# change deployment.yaml to deployment-armv7.yaml if your edge node is an RPi 3
kc apply -f deployment.yaml

If everything is done correctly, the nginx deployment should be up and running, and you can verify the pod's status with kc get pod. At this point you may start following their documentation on deploying the examples; you will probably hit some errors and be unable to figure out where things went wrong. If that's the case, please continue reading (otherwise, feel free to close this page and leave :D).
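
For reference, a healthy state looks roughly like this (the pod name, age, and addresses below are illustrative, not real output from my cluster):

$ kc get pod -owide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-deployment-77698bff9d-2cnqm   1/1     Running   0          1m    172.17.0.2   <edge-node-id>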

Upgrade to Kubeedge v1.1.0

When I first failed following their documentation, I thought the failures were caused by version incompatibility (the demo came out several months after they released v1.0.0), so I decided to upgrade kubeedge from v1.0.0 to v1.1.0. This section describes one way to deploy kubeedge v1.1.0 on top of a working k3s master in your own environment.

Use kc -n kube-system get pod to list the deployed k3s master pods. Refer to my previous posts to see how to disable the modules we do not need. Check the log of your coredns pod to make sure there are no error messages (otherwise, flush the iptables and kill these pods twice to resolve the UDP connection issues).
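
A minimal sketch of that check (the pod name is whatever kc lists for you):

kc -n kube-system get pod
kc -n kube-system logs <coredns-pod-name>   # look for repeated dns/udp errors

# if coredns keeps failing, flush iptables and delete the pods so k3s recreates them
sudo iptables -F
kc -n kube-system delete pod <coredns-pod-name>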

The deployment of cloudcore in kubeedge v1.1.0 is similar to v1.0.0; just remember to change every "edgecontroller" in the yaml files under the build/cloud/ path to "edgecore" (a one-liner for this follows the next snippet). Also, creating the device/deviceModel CRDs is no longer optional in this version: as soon as the cloudcore is up, you should apply these resources immediately before moving on:

# Create CRDs: devices_v1alpha1_device.yaml & devices_v1alpha1_devicemodel.yaml
# A quick reference can be found here: https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/device-crd.md
kc create -f build/crds/devices
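
As for the "edgecontroller" to "edgecore" renaming mentioned above, something like this handles it in one pass (GNU sed assumed):

cd $KUBEEDGE/build/cloud
grep -rl edgecontroller . | xargs sed -i 's/edgecontroller/edgecore/g'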

For the edge part, we suggest deploying both the edgecore and the MQTT broker on bare metal:

  • 1) To bring up an MQTT broker, simply install mosquitto and issue mosquitto -v -p 1883 (we suggest keeping this terminal open so you can verify the log output);

  • 2) Cross-compile the edgecore and scp it to the RPi 3 (see the sketch after this list);

  • 3) Copy the edge/conf files to the RPi 3 and modify them to match your own environment (make sure conf/ stays in the same directory as the edgecore binary);

  • 4) Launch the edgecore from a terminal via ./edgecore so you can keep watching the log output.
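
Here is a rough sketch of step 2), assuming the edgecore main package sits under edge/cmd/ (adjust the path to your release) and that the arm-linux-gnueabi toolchain is installed; CGO must stay enabled because edgecore links sqlite for its local edge.db:

export GO111MODULE=off   # see Q1 in the pitfalls section below
cd $KUBEEDGE/edge
GOOS=linux GOARCH=arm GOARM=7 CGO_ENABLED=1 CC=arm-linux-gnueabi-gcc \
    go build -o edgecore ./cmd
scp edgecore pi@<rpi3-ip>:~/kubeedge/
scp -r conf pi@<rpi3-ip>:~/kubeedge/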

If you are not sure whether your conf/edge.yaml file is correct, you may refer to my settings below (change the values I've marked with angle brackets):

mqtt:
  server: tcp://<mqtt-server-ip>:1883 # external mqtt broker url.
  internal-server: tcp://127.0.0.1:1884 # internal mqtt broker url.
  mode: 2 # 0: internal mqtt broker enable only. 1: internal and external mqtt broker enable. 2: external mqtt broker enable only.
  qos: 0 # 0: QOSAtMostOnce, 1: QOSAtLeastOnce, 2: QOSExactlyOnce.
  retain: false # if the flag set true, server will store the message and can be delivered to future subscribers.
  session-queue-size: 100 # A size of how many sessions will be handled. default to 100.

edgehub:
  websocket:
    url: wss://<cloudcore-server-ip>:<port>/e632aba927ea4ac2b575ec1603d56f10/<edge-node-id>/events
    certfile: /etc/kubeedge/certs/edge.crt
    keyfile: /etc/kubeedge/certs/edge.key
    handshake-timeout: 30 # second
    write-deadline: 15 # second
    read-deadline: 15 # second
  quic:
    url: <cloudcore-server-ip>:10001
    cafile: /etc/kubeedge/ca/rootCA.crt
    certfile: /etc/kubeedge/certs/edge.crt
    keyfile: /etc/kubeedge/certs/edge.key
    handshake-timeout: 30 # second
    write-deadline: 15 # second
    read-deadline: 15 # second
  controller:
    protocol: websocket # websocket, quic
    heartbeat: 15 # second
    project-id: e632aba927ea4ac2b575ec1603d56f10
    node-id: <edge-node-id>

edged:
  register-node-namespace: default
  hostname-override: <edge-node-id>
  interface-name: <edge-node-net-interface>
  edged-memory-capacity-bytes: 7852396000
  node-status-update-frequency: 10 # second
  device-plugin-enabled: false
  gpu-plugin-enabled: false
  image-gc-high-threshold: 80 # percent
  image-gc-low-threshold: 40 # percent
  maximum-dead-containers-per-container: 1
  docker-address: unix:///var/run/docker.sock
  runtime-type: docker
  remote-runtime-endpoint: unix:///var/run/dockershim.sock
  remote-image-endpoint: unix:///var/run/dockershim.sock
  runtime-request-timeout: 2
  podsandbox-image: <select-right-one-from-following-comments> # kubeedge/pause:3.1 for x86 arch, kubeedge/pause-arm:3.1 for arm arch, kubeedge/pause-arm64 for arm64 arch
  image-pull-progress-deadline: 60 # second
  cgroup-driver: cgroupfs # NOTE: Need to be consistent with your docker cgroup driver, o.w. the node status will always be "NotReady"
  node-ip: ""
  cluster-dns: ""
  cluster-domain: ""

mesh:
  loadbalance:
    strategy-name: RoundRobin

If the configuration is correct, you should see log lines about the connection between the kubeedge cloudcore and the edgecore, and between the edgecore and the MQTT broker. Generally speaking, the edgecore is responsible for getting the sensor readings through an MQTT subscription and then pushing that data upstream to the cloudcore. Next, we show how to create and deploy a kubeedge mapper to collect and publish the sensor data.
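
If you want to watch that traffic yourself, you can subscribe to kubeedge's event topic tree on the broker (topic prefix as given in the kubeedge MQTT references linked in the pitfalls below):

# print every kubeedge event flowing through the broker, with topic names shown (-v)
mosquitto_sub -h <mqtt-server-ip> -p 1883 -v -t '$hw/events/#'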

Don't forget to create the node resource via kc create -f build/node.json so that this newly started edge node joins the cluster. If everything is correct, you should see the node in "Ready" status.
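
A quick check from the cloud side (the node id is whatever you set in conf/edge.yaml):

kc get node <edge-node-id>   # STATUS should read "Ready"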

The Kubeedge Mapper

The mapper source code for this demo lives at $GOPATH/src/github.com/kubeedge/examples/kubeedge-temperature-demo/temperature-mapper/main.go. Since the source is quite small, we suggest obtaining it and building it directly on your edge node for deployment. However, if you only need a ready-made image, feel free to grab mine at r5by/kubeedge-temperature-mapper:v1.0.0 for the RPi 3 (otherwise, use the docker build -t <your-image-name> . command to prepare the mapper for your own use).

Remember: if you use the docker build command, the resulting image can only be deployed on the architecture it was built on (e.g. built on the RPi 3, it runs only on arm).
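
Building on the RPi itself therefore sidesteps cross-compilation entirely. A sketch, assuming the Dockerfile sits next to main.go and that you push to a registry the edge node can pull from:

cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-temperature-demo/temperature-mapper
docker build -t <your-image-name> .
docker push <your-image-name>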

The mapper basically does two things: it reads the sensor data from the pin and publishes the readings to the MQTT broker. A detailed explanation of each module can be found here, and the correct way to wire up the sensor in the physical world is also shown in their github repo. Do the following on your cloud side once you obtain the source code:

cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-temperature-demo/crds

kubectl apply -f devicemodel.yaml
kubectl apply -f device.yaml

NOTE: Stop here! Verify that kubeedge has indeed created the CRD instances on the edge:

# Go to the directory where you put the edgecore on your edge node; you should see an `edge.db` file.
sqlite3 edge.db

# Inside the sqlite CLI interface:
> .table # you should see several tables including devices
> .header on
> .mode column
> select * from devices; # if nothing is listed here, you fail!
> .exit # quit sqlite after verification

Now, the things that could cause you trouble here are:

  • 1) You misconfigured your CRD instances. Double-check your device.yaml file; it should look like this:
apiVersion: devices.kubeedge.io/v1alpha1
kind: Device
metadata:
  name: temperature
  labels:
    description: 'temperature'
    manufacturer: 'test'
spec:
  deviceModelRef:
    name: temperature-model
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: ''
            operator: In
            values:
              - <your-node-id> # NOTE here, this should be your node id, not its label!!
status:
  twins:
    - propertyName: temperature-status
      desired:
        metadata:
          type: string
        value: ''
  • 2) You made the changes from 1), but the record is still not there. This is because kubeedge doesn't re-apply your changes if you simply run kc apply -f crds again; you need to delete these CRD instances first and then re-create them to trigger the write into the database.
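
Concretely (run from the crds/ directory; create the model before the device that references it):

kc delete -f device.yaml ; kc delete -f devicemodel.yaml
kc create -f devicemodel.yaml ; kc create -f device.yaml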

Once you solve the above problems, nothing else should bother you. Simply create this mapper and follow the rest of the official documentation to read your sensor from upstream:

cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-temperature-demo/

# Edit the following details in deployment.yaml before applying:
# 1. Replace <edge_node_name> at spec.template.spec.nodeSelector.
#    NOTE: despite the official wording, this is matched against the node's label, not its name!!
# 2. Replace <your_image> at spec.template.spec.containers.image

kc create -f deployment.yaml

# The mapper will report the temperature back to the cloud after updating. Observe the temperature on the cloud side:
kc get device temperature -oyaml -w
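
If the watch above never shows a reported value, you can emulate the mapper by hand-publishing a device twin update over MQTT and checking whether the status changes. The topic below is the one kubeedge subscribes to; the payload shape follows the demo's twin format, and the value is purely illustrative:

mosquitto_pub -h <mqtt-server-ip> -p 1883 \
    -t '$hw/events/device/temperature/twin/update' \
    -m '{"event_id":"","timestamp":0,"twin":{"temperature-status":{"actual":{"value":"28"},"metadata":{"type":"Updated"}}}}'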

Pitfalls & Troubleshooting

You may or may not come across the following problems; I put my solutions here for your reference:

  • Q1: Cross-compile failed with the error: “xxx version: does not match version-control timestamp xxx”

A1: Disabling Go modules solves this; that is, before your build, do export GO111MODULE=off.

  • Q2: Cross-compile failed on CentOS 7: “xxx arm-linux-gnueabi-gcc xxx”

A2: CentOS doesn't support the GNU gcc cross-compiler well; simply switching to Ubuntu 18.04 solved this issue for me (trying to use CentOS's GNU cross-compiler didn't work for me).

  • Q3: Where do I find the Kubeedge documentation for my target version?

A3: The official documentation, found here, is a mess. However, the references under the docs source folder are closer to the truth…

  • Q4: Any Kubeedge MQTT references?

A4: Here

  • Q5: I forgot to delete the mapper deployment, and now it's always automatically brought back up by k3s. What should I do?

A5: k3s replaces "etcd" with sqlite by default. You may manually delete the registered resources in /var/lib/rancher/k3s/server/db/state.db if you can't delete them via kubectl.

  • Q6: Where can I find a quick sqlite manual?

A6: Here

  • Q7: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

A7: Refer to this thread; solved by removing the docker-related files and restarting docker.

  • Q8: How to check whether MQTT broker can be reached from within another container?

A8: Log in to the other container and use the telnet <mqtt-ip-addr> <mqtt-port> command.

  • Q9: How do I understand the MQTT server log info?

A9: Reference here

  • Q10: What is the qemu-user-static that the Kubeedge project uses to cross-compile from within docker?

A10: Here

ㄟ(●′ω`●)ㄏ