
Setting up a Kubernetes cluster with K3S, GlusterFS and Load Balancing

This tutorial will guide you through setting up a Kubernetes cluster using K3S with virtual machines hosted at Hetzner, a German cloud hosting provider. K3S is a lightweight Kubernetes distribution that is perfectly suited for small VMs like Hetzner’s CX11. Additionally, you will set up Hetzner’s cloud load balancer, which performs SSL offloading and forwards traffic to your Kubernetes nodes. Optionally, you will learn how to set up a distributed, replicated file system using Kadalu, an opinionated storage system based on GlusterFS. This allows you to move pods between the nodes while still having access to the pods’ persistent data.


This tutorial assumes you have set up Hetzner’s CLI utility (hcloud) which has access to a Hetzner cloud project in your account.

The following terminology is used in this tutorial:

  • Domain: <>
  • SSH Key: <your_ssh_key>
  • Random secret token: <your_secret_token>
  • Hetzner API token: <hetzner_api_token>
  • IP addresses (IPv4):
    • K3S Master:
    • K3S Node 1:
    • K3S Node 2:
    • Hetzner Cloud Load Balancer:

Step 1 – Create private network

First, we’ll create a private network which our Kubernetes nodes use to communicate with each other. We’ll use <network_ip_range> as the network and <subnet_ip_range> as the subnet.

hcloud network create --name network-kubernetes --ip-range <network_ip_range>
hcloud network add-subnet network-kubernetes --network-zone eu-central --type server --ip-range <subnet_ip_range>

Step 2 – Create placement group and servers

Next, we’ll create a “spread” placement group for our servers and then create the VMs.

Step 2.1 – Create the spread placement group (optional)

The placement group ensures your VMs run on different physical hosts, so if one host fails, the other VMs are not affected.

hcloud placement-group create --name group-spread --type spread

Step 2.2 – Create the virtual machines

At the time of writing, there seems to be a bug with Debian 11 which prevents K3S from using VXLAN to communicate between the nodes. We’ll use Ubuntu 20.04 LTS instead. If you want to use VXLAN with Debian, please check if the bug still exists and use Debian 10 if it does.

hcloud server create --datacenter nbg1-dc3 --type cx11 --name master-1 --image ubuntu-20.04 --ssh-key <your_ssh_key> --network network-kubernetes --placement-group group-spread

hcloud server create --datacenter nbg1-dc3 --type cx11 --name node-1 --image ubuntu-20.04 --ssh-key <your_ssh_key> --network network-kubernetes --placement-group group-spread

hcloud server create --datacenter nbg1-dc3 --type cx11 --name node-2 --image ubuntu-20.04 --ssh-key <your_ssh_key> --network network-kubernetes --placement-group group-spread

Step 3 – Create and apply firewall

Now that our servers are up and running, let’s create a firewall and restrict incoming and outgoing traffic. You may need to customize the rules to match your requirements.

Create the firewall:

hcloud firewall create --name firewall-kubernetes

Allow incoming SSH and ICMP:

hcloud firewall add-rule firewall-kubernetes --description "Allow SSH In" --direction in --port 22 --protocol tcp --source-ips 0.0.0.0/0 --source-ips ::/0

hcloud firewall add-rule firewall-kubernetes --description "Allow ICMP In" --direction in --protocol icmp --source-ips 0.0.0.0/0 --source-ips ::/0

Allow outgoing ICMP, DNS, HTTP, HTTPS and NTP:

hcloud firewall add-rule firewall-kubernetes --description "Allow ICMP Out" --direction out --protocol icmp --destination-ips 0.0.0.0/0 --destination-ips ::/0

hcloud firewall add-rule firewall-kubernetes --description "Allow DNS TCP Out" --direction out --port 53 --protocol tcp --destination-ips 0.0.0.0/0 --destination-ips ::/0

hcloud firewall add-rule firewall-kubernetes --description "Allow DNS UDP Out" --direction out --port 53 --protocol udp --destination-ips 0.0.0.0/0 --destination-ips ::/0

hcloud firewall add-rule firewall-kubernetes --description "Allow HTTP Out" --direction out --port 80 --protocol tcp --destination-ips 0.0.0.0/0 --destination-ips ::/0

hcloud firewall add-rule firewall-kubernetes --description "Allow HTTPS Out" --direction out --port 443 --protocol tcp --destination-ips 0.0.0.0/0 --destination-ips ::/0

hcloud firewall add-rule firewall-kubernetes --description "Allow NTP UDP Out" --direction out --port 123 --protocol udp --destination-ips 0.0.0.0/0 --destination-ips ::/0

Apply the firewall rules to all three servers:

hcloud firewall apply-to-resource firewall-kubernetes --type server --server master-1
hcloud firewall apply-to-resource firewall-kubernetes --type server --server node-1
hcloud firewall apply-to-resource firewall-kubernetes --type server --server node-2

Step 4 – Install K3S

It’s showtime for K3S. Before we prepare our master node and agent nodes, first upgrade the system and install AppArmor. SSH into your newly created VMs and run this command on all of them:

apt update && apt upgrade -y && apt install apparmor apparmor-utils -y

Step 4.1 – Install K3S on master node

SSH into your master node and run the following command to install and start the K3S server:

curl -sfL https://get.k3s.io | sh -s - server \
    --disable-cloud-controller \
    --disable metrics-server \
    --write-kubeconfig-mode=644 \
    --disable local-storage \
    --node-name="$(hostname -f)" \
    --cluster-cidr="<cluster_cidr>" \
    --kube-controller-manager-arg="address=0.0.0.0" \
    --kube-controller-manager-arg="bind-address=0.0.0.0" \
    --kube-proxy-arg="metrics-bind-address=0.0.0.0" \
    --kube-scheduler-arg="address=0.0.0.0" \
    --kube-scheduler-arg="bind-address=0.0.0.0" \
    --kubelet-arg="cloud-provider=external" \
    --token="<your_secret_token>" \
    --tls-san="$(hostname -I | awk '{print $2}')" \
    --flannel-iface=ens10

You can read more about the applied options in the K3S documentation. In short:

  • We disable the integrated cloud controller because we’ll install Hetzner’s Cloud Controller Manager in the next step.
  • We disable the metrics server to save some memory.
  • We disable the local storage because we’ll use GlusterFS.
  • We set the Cluster CIDR to <cluster_cidr>.
  • We make Kube Controller, Kube Proxy and Kube Scheduler listen on any address (which is not an issue as we’ve applied firewall rules and the nodes communicate with each other using the private network).
  • We set the shared secret token to <your_secret_token>.
  • We add the server’s private IPv4 address as an additional subject alternative name to the TLS certificate.
  • We make Flannel use ens10, which should be the interface of our private network.
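A note on the `--tls-san` argument: `hostname -I` prints all of the host’s IP addresses separated by spaces, and `awk '{print $2}'` picks the second field, which on these VMs should be the private-network address attached in step 2. A quick illustration with hypothetical addresses:

```shell
# Hypothetical output of `hostname -I` on one of the VMs:
# public IPv4 first, private network address second.
addrs="203.0.113.10 10.0.0.2"
# Select the second field, as the --tls-san argument does:
private_ip=$(echo "$addrs" | awk '{print $2}')
echo "$private_ip"   # → 10.0.0.2
```

If your VMs have additional interfaces, run `hostname -I` manually first to verify that the private address really is the second field.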

Step 4.2 – Install Hetzner Cloud Controller Manager

Still on your master node, install the Hetzner Cloud Controller Manager:

kubectl -n kube-system create secret generic hcloud --from-literal=token=<hetzner_api_token> --from-literal=network=network-kubernetes

kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml

Step 4.3 – Install System Upgrade Controller (optional)

The System Upgrade Controller performs automatic updates of K3S. If you want to use this feature, install the controller using this command:

kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
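On its own, the controller does nothing until you give it upgrade Plans. A minimal Plan for the server node might look like the following sketch, modeled on the K3S automated-upgrades documentation (the plan name is an assumption; agent nodes need a similar second plan):

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/master
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  # Track the latest stable K3S release:
  channel: https://update.k3s.io/v1-release/channels/stable
```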

Step 4.4 – Install K3S on agent nodes

Now that our K3S server is up and running, SSH into your two agent nodes and run the following command to install the K3S agent and connect it to the server:

curl -sfL https://get.k3s.io | K3S_URL="https://<master_private_ip>:6443" K3S_TOKEN="<your_secret_token>" sh -s - agent \
    --node-name="$(hostname -f)" \
    --kubelet-arg="cloud-provider=external" \
    --flannel-iface=ens10

You can read more about the applied options in the K3S documentation. In short:

  • We point the kubelet at the external cloud provider because we’ve already installed Hetzner’s Cloud Controller Manager.
  • We make Flannel use ens10, which should be the interface of our private network.

Step 5 – Install Kadalu (optional)

Kadalu storage is an opinionated GlusterFS distribution. It is free and open-source software for creating scalable network file systems. You can use it to replicate files to all your VMs so that your pods can access their persistent storage no matter which node they are running on.

Step 5.1 – Install Kadalu

On your master node, run:

curl -LO https://github.com/kadalu/kadalu/releases/latest/download/kubectl-kadalu
chmod +x ./kubectl-kadalu
mv ./kubectl-kadalu /usr/local/bin/kubectl-kadalu
kubectl-kadalu version
kubectl kadalu install

Step 5.2 – Set up the cluster

On all of your nodes, create the storage directory:

mkdir -p /data/storage-pool-1

Then create the volume using kubectl on your master node:

kubectl kadalu storage-add storage-pool-1 --type=Replica3 \
    --path master-1:/data/storage-pool-1 \
    --path node-1:/data/storage-pool-1 \
    --path node-2:/data/storage-pool-1

Step 6 – Set up load balancing

We’ll use Hetzner’s load balancer for SSL offloading and for routing HTTP requests to your K3S setup.

Step 6.1 – Enable proxy protocol in Traefik

To use the proxy protocol, enable it in your K3S’ Traefik configuration by setting the cloud load balancer as a trusted IP address:

cat <<EOF > /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--entryPoints.web.proxyProtocol.trustedIPs=<lb_private_ip>"
      - "--entryPoints.web.forwardedHeaders.trustedIPs=<lb_private_ip>"
EOF

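Why the trusted IPs matter: with the proxy protocol, the load balancer prepends a single text line to each TCP connection carrying the original client address, and Traefik only honors that line when it comes from a trusted source. A quick illustration with hypothetical addresses:

```shell
# Hypothetical PROXY protocol v1 header line as sent by the load balancer:
header='PROXY TCP4 203.0.113.7 10.0.0.5 54321 80'
# Field 3 is the real client IP, which Traefik recovers after SSL offloading:
client_ip=$(echo "$header" | awk '{print $3}')
echo "$client_ip"   # → 203.0.113.7
```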
Step 6.2 – Create the load balancer

Create the load balancer and attach it to the private network using the static private IP <lb_private_ip>:

hcloud load-balancer create --type lb11 --location nbg1 --name lb-kubernetes

hcloud load-balancer attach-to-network --network network-kubernetes --ip <lb_private_ip> lb-kubernetes

Add your three VMs as targets and make sure traffic is routed using the private network:

hcloud load-balancer add-target lb-kubernetes --server master-1 --use-private-ip
hcloud load-balancer add-target lb-kubernetes --server node-1 --use-private-ip
hcloud load-balancer add-target lb-kubernetes --server node-2 --use-private-ip

Let Hetzner create a managed Let’s Encrypt certificate for <> and get the certificate ID <certificate_id>:

hcloud certificate create --domain <> --type managed --name cert-t1

hcloud certificate list

Add the HTTPS service for <> using the proxy protocol and enable the health check:

hcloud load-balancer add-service lb-kubernetes --protocol https --http-redirect-http --proxy-protocol --http-certificates <certificate_id>
hcloud load-balancer update-service lb-kubernetes --listen-port 443 --health-check-http-domain <>

This will route HTTP requests from Hetzner’s load balancer (which performs SSL offloading) to your cluster’s Traefik reverse proxy, which in turn routes the requests to the configured ingress routes. Alternatively, you can route incoming HTTP requests directly from Hetzner’s load balancer to an exposed service in your Kubernetes cluster, skipping Traefik. I’ve chosen to use Traefik because it allows me, for example, to set additional HTTP response headers.
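As an example of such a header, a Traefik Middleware can add response headers to everything served through an IngressRoute; a minimal sketch (the middleware name and header value are assumptions):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: extra-headers
spec:
  headers:
    customResponseHeaders:
      # Example security header added to every response:
      X-Frame-Options: "SAMEORIGIN"
```

Attach it to a route by listing it in the IngressRoute’s `middlewares` section (`- name: extra-headers`).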

Step 7 – Test your setup (optional)

Your K3S setup is now complete. It’s time to test it by deploying an nginx pod and publishing its HTTP service via K3S’ integrated Traefik.

Create an nginx deployment, mount the GlusterFS volume for the static content, expose HTTP port 80 using a service and create a Traefik ingress route for your domain <>:

cat <<"EOF" | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv1
spec:
  accessModes:
    - ReadWriteMany
  # Kadalu's storage class for the Replica3 pool created in step 5.2:
  storageClassName: kadalu.replica3
  resources:
    requests:
      storage: 500M
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webtest1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webtest1
  template:
    metadata:
      labels:
        app: webtest1
    spec:
      volumes:
        - name: volume-webtest1
          persistentVolumeClaim:
            claimName: pv1
      containers:
        - image: nginx
          name: nginx
          ports:
            - name: port-nginx
              containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: volume-webtest1
              readOnly: false
---
apiVersion: v1
kind: Service
metadata:
  name: webtest1
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: webtest1
  type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: webtest1
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`<>`)
      kind: Rule
      services:
        - name: webtest1
          port: 80
EOF

Get the pod name of your running container, enter its shell and place some content on Kadalu’s GlusterFS volume:

kubectl get pod | grep webtest1
kubectl exec -it <name_of_pod> -- /bin/sh
echo "Hello world!" > /usr/share/nginx/html/index.html

In Hetzner’s cloud console, your load balancer’s targets should turn healthy (green), and you should be able to access your website: https://<>
