How-To Guide
Back Up K3S Encryption Keys¶
Goal: Back up the encryption configuration required to restore etcd backups.
Time: ~5 minutes
Critical: Without encryption keys, your etcd backups cannot be restored. This makes disaster recovery impossible.
Why This Matters¶
K3S encrypts Kubernetes secrets at rest in etcd using AES-CBC encryption. The encryption keys are stored in /var/lib/rancher/k3s/server/cred/encryption-config.json on each control plane node.
For disaster recovery, you need:
✅ etcd S3 backup (automated)
✅ Encryption keys (THIS guide)
✅ .env credentials file
✅ kube.tf configuration (in git)
Missing the encryption keys = Failed recovery!
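Before going further, you can confirm the file is actually present on one of your control plane nodes (a quick sanity check; assumes root SSH access, with the IP placeholder filled in from Step 1):
# Verify the encryption config exists on a control plane node
ssh root@<control-plane-ip> "ls -l /var/lib/rancher/k3s/server/cred/encryption-config.json"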
Prerequisites¶
Cluster already deployed
SSH access to control plane nodes
kubectl access configured (Access kubectl guide)
Secure password manager (KeePass, 1Password, etc.)
Step 1: Get Control Plane Node IPs¶
# Set kubeconfig
export KUBECONFIG=/path/to/kup6s/kube-hetzner/kup6s_kubeconfig.yaml
# List control plane nodes
kubectl get nodes -l node-role.kubernetes.io/control-plane -o wide
# Output example:
# NAME                 INTERNAL-IP   EXTERNAL-IP
# kup6s-control-fsn1   10.0.1.2      <ip-fsn1>
# kup6s-control-nbg1   10.0.2.2      <ip-nbg1>
# kup6s-control-hel1   10.0.3.2      <ip-hel1>
Note the EXTERNAL-IP addresses for each control plane node.
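If you prefer to script this, the external IP can also be captured with a jsonpath query (a sketch; the node name is taken from the example output above, so adjust it to your cluster):
# Capture one node's external IP into a shell variable
IP_FSN1=$(kubectl get node kup6s-control-fsn1 \
  -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}')
echo "$IP_FSN1"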
Step 2: Download Encryption Config from Each Node¶
# Create backup directory
mkdir -p ~/kup6s-encryption-backup
cd ~/kup6s-encryption-backup
# Download from each control plane node
ssh root@<ip-fsn1> "cat /var/lib/rancher/k3s/server/cred/encryption-config.json" > control-fsn1-encryption.json
ssh root@<ip-nbg1> "cat /var/lib/rancher/k3s/server/cred/encryption-config.json" > control-nbg1-encryption.json
ssh root@<ip-hel1> "cat /var/lib/rancher/k3s/server/cred/encryption-config.json" > control-hel1-encryption.json
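The same downloads can be written as a loop if you prefer (a sketch; fill in the placeholder IPs):
# Loop over name:ip pairs and save one file per control plane node
for pair in fsn1:<ip-fsn1> nbg1:<ip-nbg1> hel1:<ip-hel1>; do
  name=${pair%%:*}; ip=${pair#*:}
  ssh root@"$ip" "cat /var/lib/rancher/k3s/server/cred/encryption-config.json" \
    > "control-${name}-encryption.json"
done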
Step 3: Verify All Files Are Identical¶
# Compare file hashes
sha256sum control-*-encryption.json
Expected result: All three files should have the same hash.
Example output:
a1b2c3d4... control-fsn1-encryption.json
a1b2c3d4... control-nbg1-encryption.json
a1b2c3d4... control-hel1-encryption.json
If hashes differ, the cluster may have inconsistent encryption configuration - investigate before proceeding.
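A one-liner makes the comparison scriptable: counting distinct hashes should yield exactly 1 when all files match.
# Count distinct hashes; expect "1"
sha256sum control-*-encryption.json | awk '{print $1}' | sort -u | wc -l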
Step 4: Store in Password Manager¶
Since all three files are identical, you only need to store one copy.
Recommended: KeePass Entry¶
Create new entry:
Title: kup6s-k3s-encryption-config
Username: kup6s-cluster
Notes:
K3S Encryption Configuration
Cluster: kup6s.com
Backed up: [TODAY'S DATE]
K3S version: v1.30.x+k3s1
Required for: etcd backup restoration
Backup frequency: After any key rotation
Retention: Same as etcd backups (indefinite)
Attachments: Add control-fsn1-encryption.json
Tag entry: disaster-recovery, k8s-cluster, critical
Alternative: 1Password / Bitwarden¶
Create a secure note with:
Title: “KUP6S K3S Encryption Keys”
Content of control-fsn1-encryption.json
Metadata (dates, K3S version, cluster name)
Step 5: Verify Backup¶
# Retrieve the file from your password manager and validate the JSON
jq . control-fsn1-encryption.json
# Should show valid JSON with 'resources' and 'kind' fields
Expected structure:
{
"kind": "EncryptionConfiguration",
"apiVersion": "apiserver.config.k8s.io/v1",
"resources": [
{
"resources": ["secrets"],
"providers": [...]
}
]
}
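If you want a scripted check instead of eyeballing the JSON, jq's exit status can be used (a sketch; it exits 0 only when the expected fields are present):
# Exit code 0 means the file looks like a valid encryption configuration
jq -e '.kind == "EncryptionConfiguration" and (.resources | length > 0)' \
  control-fsn1-encryption.json && echo "Backup looks valid"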
Step 6: Secure Cleanup¶
# Remove local copies after verifying password manager backup
cd ~/kup6s-encryption-backup
shred -vfz -n 3 control-*-encryption.json
cd ..
rmdir kup6s-encryption-backup
Why shred? Regular rm doesn’t securely erase data; shred overwrites the file contents before deleting it. (On copy-on-write or journaling filesystems the overwrite may not be fully effective, so treat it as best-effort.)
When to Re-Backup¶
Back up encryption keys again when:
✅ After key rotation - When you run k3s secrets-encrypt rotate
✅ After K3S upgrade - If encryption configuration changes
✅ Before major cluster changes - As a precaution
✅ Every 6-12 months - Regular backup verification
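After a key rotation in particular, the config file changes, so pull a fresh copy right away (a sketch reusing the Step 2 commands; the dated filename is just a suggestion):
# Confirm rotation completed, then re-download the updated config
ssh root@<ip-fsn1> "k3s secrets-encrypt status"
ssh root@<ip-fsn1> "cat /var/lib/rancher/k3s/server/cred/encryption-config.json" \
  > control-fsn1-encryption-$(date +%Y%m%d).json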
Verifying Encryption is Enabled¶
# SSH to any control plane node
ssh root@<control-plane-ip>
# Check encryption status
k3s secrets-encrypt status
Expected output:
Encryption Status: Enabled
Current Rotation Stage: start
Server Encryption Hashes: All control plane nodes [...]
Testing Disaster Recovery¶
Recommended: Test restore procedure periodically (every 6 months).
Spin up a test cluster
Restore from etcd backup + encryption keys
Verify secrets are accessible
Document any issues discovered
See Disaster Recovery Plan for full restore procedure.
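As a rough sketch of the restore step on a fresh test node (heavily simplified; the snapshot path is a placeholder, and the authoritative procedure lives in the Disaster Recovery Plan):
# Place the backed-up encryption config before starting K3S
mkdir -p /var/lib/rancher/k3s/server/cred
cp control-fsn1-encryption.json /var/lib/rancher/k3s/server/cred/encryption-config.json
# Restore etcd from a local snapshot copy, then start K3S normally
k3s server --cluster-reset --cluster-reset-restore-path=/path/to/etcd-snapshot
# Confirm secrets decrypt once the cluster is up
kubectl get secrets -A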
Troubleshooting¶
Files have different hashes¶
Cause: Encryption configuration inconsistency between control plane nodes
Investigation:
# Compare file contents
diff <(jq -S . control-fsn1-encryption.json) \
<(jq -S . control-nbg1-encryption.json)
# Check K3S logs on affected node
ssh root@<affected-node> journalctl -u k3s -n 100
Solution: This indicates a cluster issue. Investigate K3S configuration and consider key rotation to resynchronize.
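If you do decide to rotate, the classic K3S sequence looks roughly like this (a sketch; run on a control plane node, restart the k3s service between stages, and check k3s secrets-encrypt status each time):
# Classic rotation sequence (verify each stage before moving on)
k3s secrets-encrypt prepare
# ...restart k3s on all control plane nodes...
k3s secrets-encrypt rotate
# ...restart again...
k3s secrets-encrypt reencrypt
# Remember to re-back-up the new encryption-config.json afterwards (this guide)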
Cannot access password manager¶
Cause: Password manager unavailable during emergency
Prevention:
Store backup encryption config in multiple secure locations
Keep an offline encrypted backup (e.g. a GPG-encrypted copy on a USB drive in a safe; see the sketch after this list)
Document password manager recovery procedures
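For the offline copy, symmetric GPG encryption is one simple option, done before the secure cleanup in Step 6 (a sketch; the passphrase prompt is interactive and the output filename is illustrative):
# Encrypt the config with a passphrase before copying it to a USB drive
gpg --symmetric --cipher-algo AES256 \
  --output kup6s-encryption-config.json.gpg control-fsn1-encryption.json
# To decrypt later
gpg --decrypt kup6s-encryption-config.json.gpg > encryption-config.json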
Lost encryption keys¶
Impact: etcd backups are permanently unusable without encryption keys
Options:
If cluster is still running: Extract keys immediately (this guide)
If cluster is lost and no backup: Data is unrecoverable
Prevention is critical:
Back up immediately after cluster deployment
Verify backups regularly
Store in multiple secure locations
Include in disaster recovery testing
Security Considerations¶
Encryption keys are sensitive:
Treat with same security as root passwords
Never commit to git
Never send via email/slack
Encrypt at rest (password manager database)
Limit access (need-to-know basis)
Best practices:
Store in encrypted password manager
Keep offline backup (encrypted USB in safe)
Include in annual security audits
Document in disaster recovery procedures
Next Steps¶
Verify Security Features - Check cluster security configuration
Apply Infrastructure Changes - Update cluster safely
Disaster Recovery Plan - Full recovery procedures