
Hacker Public Radio

Your ideas, projects, opinions - podcasted.

New episodes every weekday, Monday through Friday.

hpr3434 :: From 0 to K8s in 30 minutes

Build a Kubernetes cluster, run a website, route traffic to website


Hosted by Klaatu on Thursday, 2021-09-30. This show is flagged as Clean and is released under a CC-BY-SA license.
Tags: network, kubernetes, cloud.
The show is available on the Internet Archive at: https://archive.org/details/hpr3434

Listen in ogg, spx, or mp3 format.

Duration: 00:32:18

Networking

This series will try to explain the basics of networking to the listener as well as introduce more detailed topics.

Install CentOS or Debian on a Raspberry Pi. I'm using CentOS, but I'll admit that Debian is the easier option by far.

Do this on 3 separate Pi units, each with the same specs.

Set hostnames

You must have unique hostnames for each Pi. Without unique hostnames, your cluster cannot function.

There are several "kinds" of hostnames, so to avoid confusion I change all of them.

I use a simple naming scheme: k for "kubernetes" + an integer, starting at 100 + c for "cluster":

$ sudo hostname k100c
$ sudo sysctl kernel.hostname=k100c
$ sudo hostnamectl set-hostname k100c
$ sudo reboot

Do this for each Pi. At a minimum, you end up with Pi computers named k100c, k101c, and k102c.
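
If you have SSH access to each Pi already, you can script this from your workstation with a loop like the one below (a sketch only: the IP addresses and the pi user are assumptions, so substitute your own):

$ for n in 100 101 102; do
    ssh -t pi@10.0.1.$n "sudo hostnamectl set-hostname k${n}c && sudo reboot"
  done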

Set verbose prompts

When working with many different hosts, it's helpful to have a very verbose prompt as a constant reminder of which host you're connected to. Add this to the ~/.bashrc of each Pi:

export PS1='\[\033[1;32m\]\! \d \t \h:\w \n% \[\033[00m\]' 
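
For reference, the escape sequences in that prompt break down like this:

\!   history number          \d   date
\t   time (24-hour)          \h   hostname
\w   working directory       \n   newline
\[\033[1;32m\] ... \[\033[00m\]   wrap the prompt in bright green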

Install a Pi finder script

Install an LED blinker so you can find a specific Pi when you need one. This brilliant script is by Chris Collins for his article Use this script to find a Raspberry Pi on your network, which explains how to run it.

#!/bin/bash

set -o errexit
set -o nounset

# Restore the LED's default trigger if the script is interrupted
trap quit INT TERM

COUNT=0
LED="/sys/class/leds/led0"  # the Pi's green activity LED

# Writing to the LED's trigger requires root
if [ "$(id -u)" -ne 0 ]; then
   echo "Must be run as root."
   exit 1
fi

if [[ ! -d $LED ]]
then
  echo "Could not find an LED at ${LED}"
  echo "Perhaps try '/sys/class/leds/ACT'?"
  exit 1
fi

# Hand the LED back to its default trigger (SD card activity)
function quit() {
  echo mmc0 >"${LED}/trigger"
}

echo -n "Blinking Raspberry Pi's LED - press CTRL-C to quit"
echo none >"${LED}/trigger"

# Blink the LED once every two seconds until the counter expires
while true
do
  let "COUNT=COUNT+1"
  if [[ $COUNT -lt 30 ]]
  then
    echo 1 >"${LED}/brightness"
    sleep 1
    echo 0 >"${LED}/brightness"
    sleep 1
  else
    quit
    break
  fi
done
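
To run it, copy the script to the Pi you're trying to locate and execute it as root (the filename here is my own choice, not part of the original article):

[k100c]$ sudo bash ./pi-finder.sh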

Install K3s on your control plane

K3s is Kubernetes for IoT and Edge computing. It's the easiest, cleanest, and most serious method of getting Kubernetes onto an ARM device. You can try other solutions (Microk8s, Minikube, OKD, and so on), but the best support comes from k3s.

First, you must install k3s on one Pi. You can use any of your Pi units for this, but I use host k100c because it's the first in the sequence, so it feels logical.

[k100c]$ curl -sfL https://get.k3s.io -o install_k3s.sh
[k100c]$ chmod 700 install_k3s.sh

Read the script to ensure that it seems to do what you expect, and then:

[k100c]$ ./install_k3s.sh

After installation, you're prompted to add some arguments to your bootloader. Open /boot/cmdline.txt in a text editor and add cgroup_memory=1 cgroup_enable=memory to the end of it.

console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p3 rootfstype=ext4 elevator=deadline rootwait cgroup_memory=1 cgroup_enable=memory
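
If you'd rather append the arguments from the command line, a one-liner like this works (assuming the options aren't already present in the file):

[k100c]$ sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt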

Reboot:

[k100c]$ sudo reboot

Once the Pi is back up, verify that your node is ready:

[k100c]$ k3s kubectl get node
NAME    STATUS  ROLES                  AGE
k100c   Ready   control-plane,master   42s

This Pi is the "control plane", meaning it's the Pi that you use to administer your cluster.

Get the node token

Obtain the control plane's node token. Thanks to k3s, this is autogenerated for you. If you're not using k3s, then you must generate your own with the command kubeadm token generate.

Assuming you're using k3s:

$ MYTOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
$ echo $MYTOKEN
K76351a1c2497d907ba7a156028567e0ccc26b82d2174161c564152ab3add6cc3fb::server:808771e4e695e3e3465ed9a14a0581da

Add your control plane hostname to your hosts file

If you know how to manage local DNS settings, then you can use a DNS service to identify the hosts in your cluster. Otherwise, the easy way to let your nodes find your control plane is to add the control plane's hostname and IP address to the /etc/hosts file on each node. This assumes that your control plane has a static local IP address. For example, this is the hosts file on k101c and k102c:

127.0.0.1  localhost.localdomain localhost
::1        localhost6.localdomain6 localhost6

10.0.1.100 k100c
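
One way to append that entry on each node (the address 10.0.1.100 is from my example; use your control plane's actual static IP):

[k101c]$ echo "10.0.1.100 k100c" | sudo tee -a /etc/hosts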

Verify that each host can find the control plane. For example:

[k101c]$ ping -c 1 k100c || echo "fail"

Add nodes to your cluster

Now you can add the other Pi computers to your cluster. On each Pi you want to turn into a compute node, install k3s with the control plane's URL and node token set as environment variables. On my second Pi, for instance, I run this command:

[k101c]$ curl -sfL https://get.k3s.io | K3S_URL=https://k100c:6443 K3S_TOKEN="${MYTOKEN}" sh -

On my third and final Pi, I run the same command:

[k102c]$ curl -sfL https://get.k3s.io | K3S_URL=https://k100c:6443 K3S_TOKEN="${MYTOKEN}" sh -

Verify your cluster

On your control plane, verify that all nodes are active:

% k3s kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k100c       Ready      control-plane,master   2d23h   v1.21.4+k3s1
k102c       Ready      <none>                 21h     v1.21.4+k3s1
k101c       Ready      <none>                 20h     v1.21.4+k3s1

It can take a few minutes for the control plane to discover all nodes, so wait a little while and try the command again if you don't see all nodes right away.
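
If you'd rather poll as the nodes join, the watch utility (assuming it's installed on your Pi) reruns the command every two seconds:

[k100c]$ watch k3s kubectl get nodes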

You now have a Kubernetes cluster running. It isn't doing anything yet, but it's a functional Kubernetes cluster. That means you have a tiny Pi-based cloud entirely at your disposal. You can use it to learn about Kubernetes, cloud architecture, cloud-native development, and so on.

Create a deployment and some pods

Now that you have a Kubernetes cluster running, you can start running applications in containers. That's what Kubernetes does: it orchestrates and manages containers. You may have heard of containers. I did an episode about Docker containers in episode 1522 of HPR; you can go listen to that if you need to catch up. I've also done an episode on LXC in episode 371 of my own show, GNU World Order.

There's a sequence to launching containers within Kubernetes, a specific order you need to follow, because there are lots of moving parts and those parts have to reference each other. Generally, the hierarchy is this:

  • Namespaces are the "project spaces" of Kubernetes. I cover these in great detail in my GNU World Order episode 13x39.
  • Deployments manage pods.
  • Pods are groups of containers. They help your cluster scale on demand.
  • Services are front-ends to deployments. A deployment can be running quietly in the background, and it'll never see the light of day without a service pointing to it.
  • Traffic, or exposure. A service is only available within your cluster until you expose it to the outside world with an external IP address.

First, create a namespace for your test application to use.

[k100c]$ k3s kubectl create namespace ktest
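
Verify that the namespace exists:

[k100c]$ k3s kubectl get namespace ktest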

The Kubernetes project provides an example Nginx deployment definition. Read through it to get an idea of what it does. It looks something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

This defines a deployment named nginx-deployment. It also creates a label called app and sets it to nginx. This metadata is used as a selector for pods and services later.

For now, create a deployment using the example:

[k100c]$ k3s kubectl --namespace ktest \
create -f https://k8s.io/examples/application/deployment.yaml

Confirm that the deployment has generated and started new pods:

[k100c]$ k3s kubectl --namespace ktest get all
NAME                                  READY
pod/nginx-deployment-66b[...]   1/1   Running
pod/nginx-deployment-66b[...]   1/1   Running

NAME                               READY
deployment.apps/nginx-deployment   2/2

NAME
replicaset.apps/nginx-deployment-66b6c48dd5

See the pods labelled with app: nginx:

[k100c]$ k3s kubectl --namespace ktest \
get pods -l app=nginx
NAME                                READY STATUS
nginx-deployment-66b6c48dd5-9vgg8   1/1   Running
nginx-deployment-66b6c48dd5-prgrf   1/1   Running
nginx-deployment-66b6c48dd5-cqpgf   1/1   Running

Create a service

Now you must connect the Nginx instance with a Kubernetes Service.

The selector element is set to nginx to match pods running the nginx application. Without this selector, there would be nothing to correlate your service with the pods running the application you want to serve.

[k100c]$ cat << EOF | k3s kubectl \
--namespace ktest create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
  labels:
    run: nginx-deployment
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
EOF
service/nginx-deployment created

Verify that the service exists:

[k100c]$ k3s kubectl --namespace ktest get svc nginx-deployment
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
nginx-deployment   ClusterIP   10.43.32.89   <none>        80/TCP    58s

A Service is backed by a group of Pods, and Pods are exposed through endpoints. Kubernetes continuously updates an Endpoints object named nginx-deployment for this Service. Should a Pod die, it's removed from the endpoints, and new Pods matching the same selector are added. This is how Kubernetes ensures your application's uptime.
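
You can see the Endpoints object for yourself:

[k100c]$ k3s kubectl --namespace ktest get endpoints nginx-deployment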

To see more information:

[k100c]$ k3s kubectl \
--namespace ktest \
describe svc nginx-deployment
Name:              nginx-deployment
Namespace:         ktest
Labels:            run=nginx-deployment
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.43.251.104
IPs:               10.43.251.104
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.42.2.8:80,10.42.2.9:80,10.42.3.7:80
Session Affinity:  None
Events:            <none>

Notice that the Endpoints value is set to a series of IP addresses. This confirms that instances of Nginx are accessible. The IP of the service is set to 10.43.251.104, and it's running on port 80/TCP. That means you can log onto any of your nodes (referred to as "inside the cluster") to interact with your Nginx app. This does not work from your control plane, only from a node.

[k101c]$ curl http://10.43.251.104
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body>
<h1>Welcome to nginx!</h1>
</body>
</html>

Nginx is accessible.

The only thing left to do now is to route traffic from the outside world.

Exposing a deployment

For a deployed application to be visible outside your cluster, you need to route network traffic to it. There are many tools that provide that functionality.

Install metallb:

$ k3s kubectl apply \
-f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
$ k3s kubectl apply \
-f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
$ k3s kubectl create secret generic \
-n metallb-system memberlist \
--from-literal=secretkey="$(openssl rand -base64 128)"

Determine what network range you want your cluster to use. This must not overlap with the range your DHCP server manages. Define it in a ConfigMap for metallb:

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: address-pool-0
      protocol: layer2
      addresses:
      - 10.0.1.1/26

Save this as metallb.yaml and apply the configuration:

$ k3s kubectl apply -f metallb.yaml

You now have a configmap for metallb, and metallb is running.
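
You can confirm this by checking that the metallb pods are up:

$ k3s kubectl get pods -n metallb-system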

Create a load balancer service mapping your deployment's port (port 80 in this case, which you can verify with k3s kubectl -n ktest get all). Save this as loadbalance.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: ktest-ext
  namespace: ktest
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

This service selects any deployment in the ktest namespace with an app name of nginx, and maps the container's port 80 to port 80 on an IP address within your address range (in my example, that's 10.0.1.1/26, or 10.0.1.1 through 10.0.1.62).

$ k3s kubectl apply -f loadbalance.yaml

Find out what external IP address it got:

$ k3s kubectl get service ktest-ext -n ktest
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
ktest-ext   LoadBalancer   10.43.138.91   10.0.1.3      80:31790/TCP

Open a web browser and navigate to the external IP address listed (in this example, 10.0.1.3).
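
Or test it from the command line of any machine on your network:

$ curl http://10.0.1.3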

[Image: the Nginx welcome page served from the cluster]


Comments


Comment #1 posted on 2021-10-01 05:02:24 by b-yeezi

What an amazing show

I was truly impressed with this show. This could have been 2 or 3 shows. I appreciate the hard work you put into the show notes. I will be using them someday soon.

One note to other listeners - although you can install kubernetes on a Raspberry Pi 3, it's super slow, so I wouldn't recommend it.

Keep up the great work!

Comment #2 posted on 2021-10-01 17:48:20 by Mike Ray

Great show

Great show Klaatu.

Fast delivery, accurate, concise, clear, uncluttered, few verbal tics.

Very few people can deliver a show as fast as I can think.

One of about half a dozen hosts that have me reaching for the play button instead of the delete button

Comment #3 posted on 2023-03-24 07:38:12 by Mike Ray

Built a cluster in a rack

Just went back to this well remembered show, and used it to build an eight node cluster of Pi4s in a 4U rack. One controller and seven nodes. Great show. Easy to follow.

This I have done because I now need to get to grips with OpenShift for work.
