k3s Dev (2): Vagrant + VirtualBox Dev

This post aims to help you gain a better development experience with the rancher/k3s project. As an old Chinese saying goes, "a logger should always sharpen his axe before doing his job" (工欲善其事,必先利其器). So in what follows, you will learn how to:

  • Use k3s's Vagrantfile to launch a virtual dev environment
  • Attach the dlv debugger and start debugging

Special thanks to Eric@RancherLabs, who introduced me to his elegant solution for this. Please follow this portal to admire his other works of art: the master's portal (大神の傳送門).

1. Use Vagrant

Vagrant is nothing new, but I used to think of it as merely a VM management tool and, out of ignorance, overlooked it. Only now do I realize how useful it is for working on an open-source project, or any project that requires people in different locations to collaborate. It gives a developer a way to "ship" his/her working environment directly to all co-workers; in other words, this concept of a "virtual dev env" keeps the environment consistent among all developers working on the same project.

The idea behind Vagrant is quite similar to Docker's, since both provide a certain degree of "consistency" and "isolation" in my opinion. However, they are built on different tech stacks (virtualization vs. containers), and each has its own use cases. Eric also pointed out that "Dapper is nice for building & ci but kind of a pain for development".

To install Vagrant on macOS, I recommend Homebrew; simply issue the following commands in your terminal:

brew install vagrant
# or `brew cask install vagrant`

# Check available commands
vagrant -h
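
VirtualBox itself is not bundled with Vagrant; if you do not have it yet, it can also be installed through Homebrew. A quick sketch (the exact cask command depends on your Homebrew version, so treat it as an assumption):

# Install VirtualBox (older Homebrew releases use `brew cask install`)
brew cask install virtualbox

# Verify both tools are available
VBoxManage --version
vagrant --version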

The following list shows some commands I commonly use (a typical workflow sketch follows the list):

  • vagrant up: spin up the boxes;
  • vagrant status: check the status of the vagrant machines;
  • vagrant ssh: ssh into a running machine;
  • vagrant halt: shut down the running boxes;
  • vagrant destroy: delete all files associated with your configured vagrant machines.
  • Note (1): Vagrant supports other VM providers across platforms; I stick with VirtualBox here since it is free and I already have some experience with it.
  • Note (2): Issuing the destroy command removes the virtual machine files saved in your VirtualBox settings directory, i.e. "/yourpathto/VirtualBox VMs/xxx".
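
Putting these together, a typical session looks roughly like this (just a sketch of standard vagrant usage; node names and options depend on your Vagrantfile):

# Bring the boxes up and check their state
vagrant up
vagrant status

# Work inside a box, then shut it down
vagrant ssh
vagrant halt

# When finished, remove the machines and all their files
vagrant destroy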

After learning the basics, let's take a look at the Vagrantfile in the k3s project root to see what it does:

BOX = "generic/alpine310"
HOME = File.dirname(__FILE__)
PROJECT = File.basename(HOME)
MOUNT_TYPE = ENV['MOUNT_TYPE'] || "nfs"
NUM_NODES = (ENV['NUM_NODES'] || 0).to_i
NODE_CPUS = (ENV['NODE_CPUS'] || 4).to_i
NODE_MEMORY = (ENV['NODE_MEMORY'] || 8192).to_i
NETWORK_PREFIX = ENV['NETWORK_PREFIX'] || "10.135.135"
VAGRANT_PROVISION = ENV['VAGRANT_PROVISION'] || "./scripts/vagrant-provision"

# --- Rules for /etc/sudoers to avoid password entry configuring NFS:
# %admin ALL = (root) NOPASSWD: /usr/bin/sed -E -e * -ibak /etc/exports
# %admin ALL = (root) NOPASSWD: /usr/bin/tee -a /etc/exports
# %admin ALL = (root) NOPASSWD: /sbin/nfsd restart
# --- May need to add terminal to System Preferences -> Security & Privacy -> Privacy -> Full Disk Access

# --- Check for missing plugins
required_plugins = %w( vagrant-alpine vagrant-timezone )
plugin_installed = false
required_plugins.each do |plugin|
  unless Vagrant.has_plugin?(plugin)
    system "vagrant plugin install #{plugin}"
    plugin_installed = true
  end
end
# --- If new plugins installed, restart Vagrant process
if plugin_installed === true
  exec "vagrant #{ARGV.join' '}"
end

provision = <<SCRIPT
# --- Use system gopath if available
export GOPATH=#{ENV['GOPATH']}
# --- Default to root user for vagrant ssh
cat <<\\EOF >/etc/profile.d/root.sh
[ $EUID -ne 0 ] && exec sudo -i
EOF
# --- Set home to current directory
cat <<\\EOF >/etc/profile.d/home.sh
export HOME="#{HOME}" && cd
EOF
. /etc/profile.d/home.sh
# --- Run vagrant provision script if available
if [ ! -x #{VAGRANT_PROVISION} ]; then
  echo 'WARNING: Unable to execute provision script "#{VAGRANT_PROVISION}"'
  exit
fi
echo "running '#{VAGRANT_PROVISION}'..." && \
#{VAGRANT_PROVISION} && \
echo "finished '#{VAGRANT_PROVISION}'!"
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provider "virtualbox" do |v|
    v.cpus = NODE_CPUS
    v.memory = NODE_MEMORY
    v.customize ["modifyvm", :id, "--audio", "none"]
  end

  config.vm.box = BOX
  config.vm.hostname = PROJECT
  config.vm.synced_folder ".", HOME, type: MOUNT_TYPE
  config.vm.provision "shell", inline: provision
  config.timezone.value = :host

  config.vm.network "private_network", ip: "#{NETWORK_PREFIX}.100" if NUM_NODES==0

  (1..NUM_NODES).each do |i|
    config.vm.define ".#{i}" do |node|
      node.vm.network "private_network", ip: "#{NETWORK_PREFIX}.#{100+i}"
      node.vm.hostname = "#{PROJECT}-#{i}"
    end
  end
end

As we can see, it mainly pulls the base box image and then prepares it by invoking the provision script and setting up the network. In our particular case, we need to:

  • (1) Set the environment variables to bring up the vagrant boxes as we want. For example:

    # how many nodes do we want
    export NUM_NODES=2

    # how many CPUs for each node
    export NODE_CPUS=1

    # Other settings such as the network, memory, etc. can also be customized.
    # Finally launch the boxes with the above configurations
    vagrant up

    # Check the running machines
    vagrant status

    # Connect to one of the above machines
    vagrant ssh .1
  • (2) The box image that has been pulled down is saved at: $HOME/.vagrant.d/boxes (see the sketch after this list for a way to inspect it).

  • (3) To see what vagrant actually does during the ssh procedure, open another terminal and issue ps aux | grep ssh to inspect it; you can then connect to that node again from another terminal using that command:

    ssh vagrant@127.0.0.1 -p 2222 -o LogLevel=FATAL -o Compression=yes -o DSAAuthentication=yes -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i /yourpathto/k3s/.vagrant/machines/.1/virtualbox/private_key
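
Vagrant can also report both of these things by itself; here is a quick sketch, assuming the multi-node setup above:

# List the box images cached under $HOME/.vagrant.d/boxes
vagrant box list

# Print the exact SSH parameters vagrant uses for node .1
vagrant ssh-config .1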

2. Debugging

After admiring vagrant, destroy the existing boxes and re-provision them with the delve debugger enabled by adding just one line to the scripts/vagrant-provision file:

#!/bin/bash
set -ve

cd $(dirname $0)/..

...

# ---
mkdir -p ${GOPATH}/bin
mkdir -p /go
ln -s $GOPATH/bin /go/bin
sed ':a;N;$!ba;s/\\\n/ /g' <Dockerfile.dapper | grep '^RUN ' | sed -e 's/^RUN //' >/tmp/docker-run
export BINDIR=/go/bin
export GOPATH=/go
export HOME=/tmp && cd
. /tmp/docker-run
cd /go
go get github.com/rancher/trash
# --- Add one line here to enable delve <==
go get -u github.com/go-delve/delve/cmd/dlv
rm -rf /go
cd
# ---

...

Now ssh into the virtual machine and start debugging, as we learned in the previous post:

# After inspecting the vagrant box, shut it down and destroy it
# (or just stop it with `vagrant halt`)
vagrant destroy

# After adding dlv to the provision script, reload the vagrant box
vagrant reload

# Build from source
./scripts/download && ./scripts/build && ./scripts/package-cli

# Launch dlv
dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec dist/artifacts/k3s -- --debug server
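
Since dlv runs here in headless mode listening on :2345, a Delve client can attach from a second terminal. A minimal sketch, assuming dlv is available wherever you run it and the port is reachable (for node .1 the private-network address is 10.135.135.101 under the default NETWORK_PREFIX):

# From another terminal inside the same box
dlv connect 127.0.0.1:2345

# Or from the host, over the private network (node .1 with the default prefix)
dlv connect 10.135.135.101:2345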

Note: Using netstat you will notice some NFS-related daemons; these RPC services are critical for keeping the host's source files consistent with those built inside the virtual dev boxes. Also, to stop the dlv debugger, simply kill the process from a different terminal. The biggest improvement (at least for me) is that any modification to the source code on my host is directly synchronized to the virtual box side via the mount, bravo!
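
For instance, a quick way to see this sync machinery from inside the box (output will vary with your paths and mount type, so take it as a sketch):

# Show the NFS mount backing the synced project folder
mount | grep nfs

# NFS traffic (commonly port 2049) also shows up in netstat
netstat -tun | grep 2049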

3. Summary

In this post we used Vagrant and VirtualBox to set up our dev environment. In the following posts, we'll continue to dig deeper into k3s's source code and learn more about it.

ㄟ(●′ω`●)ㄏ