Deploy Your First KUP6S Cluster
In this tutorial, you’ll deploy a complete KUP6S Kubernetes cluster on Hetzner Cloud from scratch. By the end, you’ll have a fully functional K3S cluster with monitoring, storage, and GitOps capabilities.
What you’ll build
3 control plane nodes across 3 datacenters (Helsinki, Nuremberg, Falkenstein)
3 agent nodes (all ARM64 for cost efficiency)
Traefik ingress with Let’s Encrypt certificates
Longhorn distributed storage
Prometheus, Grafana, and Loki monitoring
ArgoCD for application deployments
Crossplane for S3 bucket management
Step 1: Prepare your local environment
Install required tools
OpenTofu (Infrastructure as Code):
curl --proto '=https' --tlsv1.2 -fsSL https://get.opentofu.org/install-opentofu.sh -o install-opentofu.sh
chmod +x install-opentofu.sh
./install-opentofu.sh --install-method deb
rm -f install-opentofu.sh
kubectl (Kubernetes CLI):
# kubectl is served from the Kubernetes apt repository (pkgs.k8s.io);
# add that repo first if apt cannot find the package
sudo apt-get update && sudo apt-get install -y kubectl
k9s (Kubernetes UI - optional but recommended):
# Download from https://github.com/derailed/k9s/releases
# For Debian/Ubuntu:
wget https://github.com/derailed/k9s/releases/download/v0.32.0/k9s_linux_amd64.deb
sudo apt install ./k9s_linux_amd64.deb
rm k9s_linux_amd64.deb
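The release link above assumes an x86_64 workstation. If you are unsure of your machine's architecture, a small sketch like this picks the matching `.deb` name (the package names follow the k9s release naming):

```shell
# Sketch: choose the k9s .deb that matches the local machine, since the
# amd64 link above only fits x86_64 workstations.
arch=$(dpkg --print-architecture 2>/dev/null || uname -m)
case "$arch" in
  arm64|aarch64) pkg=k9s_linux_arm64.deb ;;
  *)             pkg=k9s_linux_amd64.deb ;;
esac
echo "$pkg"   # substitute this into the wget URL above
```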
Verify installations:
tofu version
kubectl version --client
k9s version
Step 2: Set up Hetzner Cloud credentials
Create API token
Log in to Hetzner Cloud Console
Select your project or create a new one
Go to Security → API Tokens
Click Generate API Token
Name it kup6s-cluster and set Read & Write permissions
Copy the token (you’ll need it in Step 4)
Create SSH key pair
ssh-keygen -t ed25519 -f ~/.ssh/kup6s -C "kup6s-cluster"
Press Enter for no passphrase (or set one if you prefer).
Step 3: Clone the repository
git clone https://github.com/your-org/kup6s.git
cd kup6s/kube-hetzner
Step 4: Configure environment variables
Create .env file
cp ../.env.example .env
Edit .env with your credentials
Open .env in your editor and set these required variables:
# Hetzner Cloud API Token (from Step 2)
export TF_VAR_hcloud_token="YOUR_HETZNER_API_TOKEN_HERE"
# Hetzner S3 Object Storage - Shared Credentials
# All S3 services (etcd backups, Crossplane, Loki) use these shared credentials
export TF_VAR_hetzner_s3_access_key="YOUR_S3_ACCESS_KEY"
export TF_VAR_hetzner_s3_secret_key="YOUR_S3_SECRET_KEY"
# etcd S3 Backup Configuration (Disaster Recovery - Helsinki region)
export TF_VAR_etcdbackup_s3_endpoint="https://hel1.your-objectstorage.com"
export TF_VAR_etcdbackup_s3_bucket="kup6s-etcd-backups"
# Production S3 Configuration (Crossplane + Loki - Falkenstein region)
export TF_VAR_production_s3_endpoint="https://fsn1.your-objectstorage.com"
export TF_VAR_loki_s3_bucket="kup6s-loki-logs"
# Longhorn CIFS Backup (Hetzner Storage Box)
export TF_VAR_longhorn_cifs_username="YOUR_STORAGEBOX_USERNAME"
export TF_VAR_longhorn_cifs_password="YOUR_STORAGEBOX_PASSWORD"
export TF_VAR_longhorn_cifs_url="//YOUR_STORAGEBOX_HOST/backup"
# Storage Box CSI Driver
export TF_VAR_storagebox_csi_username="YOUR_STORAGEBOX_USERNAME"
export TF_VAR_storagebox_csi_password="YOUR_STORAGEBOX_PASSWORD"
export TF_VAR_storagebox_csi_smbpath="//YOUR_STORAGEBOX_HOST/data"
# SMTP Configuration (for alerts)
export TF_VAR_smtp_host="smtp.example.com"
export TF_VAR_smtp_username="alerts@example.com"
export TF_VAR_smtp_password="YOUR_SMTP_PASSWORD"
Tip
Regional Strategy: We keep etcd backups in a separate region (hel1) from production data (fsn1) for geographic redundancy. This provides better disaster recovery protection.
Source the environment
source .env
Verify variables are set:
env | grep TF_VAR_hcloud_token
You should see your token printed in full, so treat the terminal output as sensitive.
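If you want a stronger check than grepping a single variable, a short sketch can assert that every variable the tutorial uses is exported. The variable list below is copied from the `.env` above; trim or extend it to match your file:

```shell
# Sketch: fail loudly if any expected TF_VAR_* is unset after `source .env`.
check_env() {
  missing=0
  for v in "$@"; do
    # indirect lookup: read the value of the variable named in $v
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "all required variables set"
  fi
}

check_env TF_VAR_hcloud_token TF_VAR_hetzner_s3_access_key \
          TF_VAR_hetzner_s3_secret_key TF_VAR_etcdbackup_s3_endpoint \
          TF_VAR_etcdbackup_s3_bucket TF_VAR_production_s3_endpoint \
          TF_VAR_loki_s3_bucket
```

Any name printed on stderr means the corresponding line in `.env` is missing or was not sourced.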
Step 5: Initialize OpenTofu
tofu init
You should see:
Initializing modules...
Initializing the backend...
Initializing provider plugins...
OpenTofu has been successfully initialized!
Step 6: Create OS snapshots
The kube-hetzner project uses pre-built openSUSE MicroOS snapshots for faster deployment:
tmp_script=$(mktemp)
curl -sSL -o "${tmp_script}" \
https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh
chmod +x "${tmp_script}"
"${tmp_script}"
rm "${tmp_script}"
This creates snapshots in your Hetzner project. They’re reused for all nodes.
Step 7: Review the deployment plan
tofu plan
This shows what will be created:
3 control plane servers (CAX11 ARM64) in hel1, nbg1, fsn1
1 agent server (CAX31 ARM64) in hel1 - 8 vCPU, 16GB RAM
2 agent servers (CAX21 ARM64) in hel1 - 4 vCPU, 8GB RAM each
1 load balancer (LB11) in nbg1
Network configuration (private network, firewall rules)
20 Kubernetes manifests (Crossplane, Traefik, Longhorn, monitoring, S3 buckets, etc.)
Review carefully. If you see unexpected changes to credentials or secrets, STOP and check your .env file.
Step 8: Deploy the cluster
Warning
This will create real infrastructure and incur costs on Hetzner Cloud. Current estimated cost: ~€42/month (all ARM64 nodes for cost efficiency).
Danger
CRITICAL: Use the apply-and-configure-longhorn.sh script, NOT tofu apply directly!
The script automatically configures Longhorn storage with correct reservations. Without it, Longhorn will be misconfigured.
bash scripts/apply-and-configure-longhorn.sh
The script runs tofu apply -auto-approve and then configures Longhorn storage.
Deployment takes ~10-15 minutes. You’ll see:
Creating network and firewall (1 min)
Creating servers (3-5 min)
Installing K3S on servers (5-8 min)
Deploying manifests (2-3 min)
Configuring Longhorn storage reservations (30 sec)
Tip
Grab a coffee! ☕ The automation is working.
Step 9: Access your cluster
Once deployment completes, a kubeconfig file is generated:
export KUBECONFIG=$(pwd)/kup6s_kubeconfig.yaml
kubectl get nodes
You should see:
NAME                       STATUS   ROLES                       AGE   VERSION
kup6s-control-fsn1         Ready    control-plane,etcd,master   5m    v1.30.x
kup6s-control-nbg1         Ready    control-plane,etcd,master   5m    v1.30.x
kup6s-control-hel1         Ready    control-plane,etcd,master   5m    v1.30.x
kup6s-agent-arm-3-hel1-0   Ready    <none>                      4m    v1.30.x
kup6s-agent-arm-2-hel1-0   Ready    <none>                      4m    v1.30.x
kup6s-agent-arm-2-hel1-1   Ready    <none>                      4m    v1.30.x
Check all pods are running:
kubectl get pods --all-namespaces
Wait until all pods show Running status (may take 2-3 more minutes).
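Instead of polling by hand, you can let kubectl block until everything is ready. This is a sketch, not part of the tutorial's scripts; both timeouts are arbitrary, and `kubectl wait` exits non-zero if the condition is not met in time:

```shell
# Sketch: wait declaratively for the cluster to settle.
kubectl wait --for=condition=Ready nodes --all --timeout=300s
kubectl wait --for=condition=Ready pods --all --all-namespaces --timeout=600s
```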
Step 10: Verify components
Check Crossplane
kubectl get pods -n crossplane-system
Should show crossplane and provider-aws-s3 pods running.
Check Traefik ingress
kubectl get pods -n kube-system -l app.kubernetes.io/name=traefik
Check Longhorn storage
kubectl get pods -n longhorn-system
Check monitoring stack
kubectl get pods -n monitoring
Should show Prometheus, Grafana, and Loki pods.
Check ArgoCD
kubectl get pods -n argocd
Step 11: Access web interfaces
Get your load balancer IP
kubectl get svc -n kube-system traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Access Grafana
# Get Grafana password
kubectl get secret -n monitoring kube-prometheus-stack-grafana \
-o jsonpath='{.data.admin-password}' | base64 -d
echo
Open browser: https://grafana.ops.kup6s.net
Username: admin
Password: (from command above)
Note
You’ll need to configure DNS to point *.ops.kup6s.net to your load balancer IP, or add entries to /etc/hosts for testing.
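For quick testing without DNS, you can compose the `/etc/hosts` entry by hand. The IP below is a documentation placeholder (substitute the load balancer IP from the command above), and the hostnames are examples under the wildcard domain the note mentions:

```shell
# Sketch: build an /etc/hosts line for local testing before real DNS exists.
# 203.0.113.10 is a placeholder; use your actual load balancer IP.
LB_IP=203.0.113.10
hosts_line="$LB_IP grafana.ops.kup6s.net argocd.ops.kup6s.net"
echo "$hosts_line"
# To apply it: echo "$hosts_line" | sudo tee -a /etc/hosts
```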
Congratulations! 🎉
You’ve successfully deployed a production-ready Kubernetes cluster!
What you’ve learned
How to configure OpenTofu with environment variables
How to deploy kube-hetzner infrastructure
How to access and verify a Kubernetes cluster
How to check that all components are running
What’s next?
Deploy your first app - Learn to deploy applications via ArgoCD
Monitoring basics - Explore Grafana dashboards and query logs
How-To Guides - Learn specific tasks like creating S3 buckets
Troubleshooting
“Error: Invalid API token”
Check that you copied the Hetzner API token correctly
Ensure you sourced the .env file:
source .env
“Nodes not ready after 10 minutes”
# Check node details (node names match the `kubectl get nodes` output above)
kubectl describe node kup6s-control-fsn1
# Check recent system events (K3S embeds flannel in the k3s binary, so
# there is no separate flannel pod whose logs you can read)
kubectl get events -n kube-system --sort-by=.lastTimestamp
“Pods stuck in Pending”
Check if Longhorn is ready:
kubectl get pods -n longhorn-system
Longhorn needs a few minutes to initialize storage.
Need more help?
See reference documentation for detailed component info
Check explanation section for architecture details