This post explains how we set up the kubeedge temperature demo in our lab, which can be found here. I spent far too much time on this due to the lack of documentation, so I have documented our steps in detail here in case someone finds them useful.
Pre-requisites
To run this demo, a working deployment of Kubeedge on a K8s cluster is required. If you have not met this prerequisite, please refer to my previous posts on “k3s+kubeedge setups” for deploying k3s (v0.10.2) and kubeedge (v1.0.0). Or, you may stay with me in this post and follow how we set up kubeedge release 1.1.0, starting from here.
To check whether your kubeedge cluster is functioning correctly, simply do:
```sh
# $KUBEEDGE={your-path-to-kubeedge}
```
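Concretely, the check boils down to deploying the sample nginx workload shipped with the kubeedge repo (I am assuming it still lives at `build/deployment.yaml`; adjust the path to your checkout):

```sh
kc apply -f $KUBEEDGE/build/deployment.yaml   # sample nginx deployment from the kubeedge repo
```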
If everything is done correctly, the nginx deployment should be up and running, and you can verify the pod’s status with `kc get pod`. At this point you may start to follow their documentation on deploying the examples; you will probably get stuck on some errors and not be able to figure out where things went wrong. If that is the case for you, please continue reading (otherwise you may close this page and feel free to leave :D).
Upgrade to Kubeedge v1.1.0
When I initially failed to follow their documentation, I thought the cause was a version incompatibility (the demo came out several months after they released v1.0.0). So I decided to upgrade kubeedge from v1.0.0 to v1.1.0. This section describes one way to deploy kubeedge v1.1.0 against a working k3s master in your own environment.
Use `kc -n kube-system get pod` to get a list of deployed k3s master pods. Refer to my previous posts to see how to disable the modules we do not need. Check the log of your coredns pod to make sure there is no error message (otherwise, flush the iptables and kill these pods twice to resolve the UDP-connection-related issues).
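In concrete commands (the pod name placeholder is whatever the previous `get pod` printed), that check is roughly:

```sh
kc -n kube-system logs <coredns-pod-name>         # look for error messages
# if you see udp-connection errors:
iptables -F                                       # flush the iptables
kc -n kube-system delete pod <coredns-pod-name>   # may need to be done twice
```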
The deployment of cloudcore in kubeedge v1.1.0 is similar to v1.0.0; just remember to change every “edgecontroller” in the yaml files under the `build/cloud/` path to “edgecore”. Also, creating the device/deviceModel CRDs is no longer optional in this version: as soon as cloudcore is up, you should immediately apply these resources before moving on:
```sh
# Create CRDs: devices_v1alpha1_device.yaml & devices_v1alpha1_devicemodel.yaml
```
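Assuming the two yaml files sit under `build/crds/devices/` in your checkout (that is where I found them, but verify), the creation step is just:

```sh
kc create -f $KUBEEDGE/build/crds/devices/devices_v1alpha1_devicemodel.yaml
kc create -f $KUBEEDGE/build/crds/devices/devices_v1alpha1_device.yaml
```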
For the edge part, we suggest deploying both edgecore and the MQTT broker on bare metal:

1) To bring up an MQTT broker, simply install mosquitto and issue `mosquitto -v -p 1883` (we suggest keeping that terminal open so you can verify the log output);
2) Cross-compile edgecore and scp it to the RPi 3 (a sketch of this step follows the list);
3) Copy the `edge/conf` files to the RPi 3 and modify them to match your own environment (make sure `conf/` stays at the same path as the edgecore binary);
4) Launch edgecore from a terminal with `./edgecore` so you can keep watching the log output.
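Step 2 looks roughly like the following on an Ubuntu 18.04 build box; the edgecore main-package path and the scp target are placeholders, so adjust them to your checkout and your Pi (CGO is needed here because edgecore uses sqlite, the `edge.db` you will meet later):

```sh
# on the build machine, with the gcc-arm-linux-gnueabi package installed
export GO111MODULE=off                        # see Q1 in the pitfalls section below
cd $GOPATH/src/github.com/kubeedge/kubeedge
GOOS=linux GOARCH=arm GOARM=7 CGO_ENABLED=1 CC=arm-linux-gnueabi-gcc \
  go build -o edgecore <path-to-edgecore-main-package>
scp edgecore <pi-user>@<rpi3-ip>:~/
```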
If you are not sure whether your `conf/edge.yaml` file is correct, you may refer to my settings below (change the values I have marked in angle brackets):
```yaml
mqtt:
```
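The parts I actually had to touch were the broker address, the websocket URL pointing at cloudcore, the certificate paths, and the node name. The field names below are from my memory of the v1.1.0 sample config, so cross-check them against the `edge.yaml` shipped under `edge/conf`:

```yaml
mqtt:
  server: tcp://127.0.0.1:1883       # the external mosquitto broker started earlier
  mode: 2                            # 2 = use the external broker only
edgehub:
  websocket:
    url: wss://<cloudcore-ip>:10000/e632aba927ea4ac2b575ec1603d56f10/events
    certfile: /etc/kubeedge/certs/edge.crt
    keyfile: /etc/kubeedge/certs/edge.key
  controller:
    node-id: <your-edge-node-name>   # must match the node registered on the cloud side
edged:
  hostname-override: <your-edge-node-name>
```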
If the configuration is correct, you should see log messages about the connection between the kubeedge cloudcore and edgecore, and between edgecore and the MQTT broker. Generally speaking, edgecore is responsible for getting the sensor readings through MQTT subscription and then pushing that data upstream to cloudcore. Next, we will show how to create and deploy a kubeedge mapper to collect and publish the sensor data.
Don’t forget to create the node resource via `kc create -f build/node.json` to include this newly started edge node in the cluster. If everything is correct, you should see the node in the “Ready” status.
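For reference, the `node.json` I use is essentially the one from the official docs; swap `edge-node` for your own node name, which must match what edgecore registers:

```json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node",
    "labels": {
      "name": "edge-node",
      "node-role.kubernetes.io/edge": ""
    }
  }
}
```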
The Kubeedge Mapper
The mapper source code for this demo is at `$GOPATH/src/github.com/kubeedge/examples/kubeedge-temperature-demo/temperature-mapper/main.go`. Since the source is quite small, we suggest obtaining it and building it directly on your edge node for deployment. However, if you only need the image for deployment, feel free to grab mine at `r5by/kubeedge-temperature-mapper:v1.0.0` for the RPi 3 (otherwise you may use the `docker build -t <your-image-name> .` command to build the mapper for your own use).
Remember, if you use the `docker build` command, the resulting image can only be deployed on the architecture it was built on (e.g. an image built on the RPi 3 will only run on arm).
The mapper basically does two things: it reads the sensor data from the pin and publishes the readings to the MQTT broker. A detailed explanation of each module can be found here, and the correct way to connect the sensor in the real physical world is also shown in their GitHub repo.
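To make that concrete, here is a minimal sketch of the publish side using the Eclipse Paho Go client; the topic, payload format, and `readSensor` helper are placeholders of mine, not the demo’s actual code (the real mapper publishes to the device-twin topics kubeedge listens on):

```go
package main

import (
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

// readSensor stands in for the real GPIO read of the temperature sensor.
func readSensor() string {
	return "25C" // placeholder reading
}

func main() {
	// Connect to the mosquitto broker running on the edge node.
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://127.0.0.1:1883").
		SetClientID("temperature-mapper-sketch")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}

	for {
		// Publish the current reading; topic and payload are illustrative only.
		payload := fmt.Sprintf(`{"temperature": %q}`, readSensor())
		client.Publish("sensors/temperature", 0, false, payload).Wait()
		time.Sleep(2 * time.Second)
	}
}
```

Once you have obtained the source code, do the following on your cloud side: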
```sh
cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-temperature-demo/crds
```
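From there, applying the device model and device instance in that directory (with the same `kc` alias as before) is just:

```sh
kc apply -f .
```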
NOTE: Stop here! You need to verify that kubeedge has indeed created the CRD instances, by doing the following:
```sh
# Go to the same path where you put your edgecore on your edge node; you should be able to see an `edge.db` file.
```
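`edge.db` is just a sqlite database, so you can peek into it directly; the `device` table name below is what I see in mine, so run `.tables` first if yours differs:

```sh
# on the edge node, in the directory holding the edgecore binary
sqlite3 edge.db ".tables"
sqlite3 edge.db "select * from device;"   # the device instance you applied should show up here
```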
Now, the things that could cause you trouble here are:

- 1) You misconfigured your CRD instances. Double-check your `device.yaml` file; it should look like this:
```yaml
apiVersion: devices.kubeedge.io/v1alpha1
```
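The full shape of mine was roughly as follows; the names and labels are from my setup, so compare against the `device.yaml` shipped in the demo’s `crds` directory, and above all make sure the `nodeSelector` value matches your registered edge node name:

```yaml
apiVersion: devices.kubeedge.io/v1alpha1
kind: Device
metadata:
  name: temperature
  labels:
    description: temperature-sensor
spec:
  deviceModelRef:
    name: temperature-model        # must match the DeviceModel you applied
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: ''
            operator: In
            values:
              - <your-edge-node-name>
status:
  twins:
    - propertyName: temperature
      desired:
        metadata:
          type: string
        value: ''
```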
- 2) You made the changes suggested in 1) but the instance is still not there. This is because kubeedge does not re-apply your changes if you simply do `kc apply -f crds`. You will need to first delete these CRD instances and then re-create them so they get written into this database, as shown right after this list.
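Assuming the same `kc` alias and the demo’s `crds` directory, the delete-and-recreate dance is just:

```sh
kc delete -f crds
kc create -f crds
```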
Once you solve the above problems, nothing else should bother you. Simply create this mapper and follow the rest of the official documentation to read your sensor from upstream:
```sh
cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-temperature-demo/
```
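The remaining step, assuming the mapper deployment manifest is still called `deployment.yaml` at the top of the demo directory, is roughly (point the image field at your own mapper image first):

```sh
kc apply -f deployment.yaml
kc get pod -o wide          # the mapper pod should land on the edge node
```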
Pitfalls & Troubleshooting
You may or may not come across the following problems; I have put my solutions here for your reference:
- Q1: Cross-compile failed with errors: “xxx version: does not match version-control timestamp xxx”
A1: Disabling Go modules solves this; that is, before you build, do `export GO111MODULE=off`.
- Q2: Cross-compile failed on CentOS 7: “ xxx arm-linux-gnueabi-gcc xxx”
A2: CentOS does not support the GNU GCC cross-compiler well; simply switching to Ubuntu 18.04 solved this issue for me (trying to use CentOS’s GNU cross-compiler did not work for me).
- Q3: Where can I find the Kubeedge documentation for my target version?
A3: The official documentation, found here, is a mess. However, the references under the `doc` source folder are closer to the truth…
- Q4: Any Kubeedge MQTT references?
A4: Here
- Q5: I forgot to delete the mapper deployment and now it is always automatically brought back up by k3s; what shall I do?
A5: In k3s, etcd is replaced by sqlite by default. You may manually delete the registered resources in `/var/lib/rancher/k3s/server/db/state.db` if you cannot delete them with the `kubectl` command.
- Q6: Give me a quick SQLite manual.
A6: Here; the handful of sqlite3 commands I actually needed are listed at the end of this section.
- Q7: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
A7: Refer to this thread; solved by removing the Docker-related files and restarting Docker.
- Q8: How can I check whether the MQTT broker can be reached from within another container?
A8: Log in to the other container and use the `telnet <mqtt-ip-addr> <mqtt-port>` command.
- Q9: How do I understand the MQTT server log info?
A9: Reference here
- Q10: What is the `qemu-user-static` that the Kubeedge project uses to cross-compile from within Docker?
A10: Here
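Finally, the sqlite3 cheat sheet promised under Q5/Q6. These are all standard sqlite3 shell commands; table names differ between k3s’s `state.db` and kubeedge’s `edge.db`, so start with `.tables`:

```sh
sqlite3 /var/lib/rancher/k3s/server/db/state.db   # open the database (use edge.db on the edge node)
# inside the sqlite3 shell:
#   .tables               list all tables
#   .schema TABLE         show a table's schema
#   .headers on           print column names with query results
#   select * from TABLE;  plain SQL works as usual
#   .quit                 leave the shell
```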