# Longhorn Storage Classes
Complete technical reference for Longhorn storage classes in the kup6s cluster.
## Overview
Four Longhorn storage classes provide different replication levels and access modes:
| Class | Replicas | Access Mode | Data Locality | Default | Use Case |
|---|---|---|---|---|---|
| `longhorn-redundant-app` | 1 | RWO | best-effort | No | Apps with built-in replication |
| `longhorn` | 2 | RWO | disabled\* | Yes | General-purpose storage |
| `longhorn-ha` | 3 | RWO | disabled | No | Mission-critical data |
| `longhorn-rwx` | 2 | RWX | disabled | No | Shared volumes (multi-pod) |
\*Note: data locality on the default class is currently disabled due to Longhorn Helm chart defaults. It can be customized if needed.
## Storage Class Configurations

### longhorn-redundant-app (1 replica)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-redundant-app
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "1"
  dataLocality: "best-effort"
  staleReplicaTimeout: "30"
  fsType: "xfs"
  dataEngine: "v1"
```
**When to use:** applications that already replicate their own data across N ≥ 2 instances (CNPG PostgreSQL, Redis clusters, Kafka)
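As an illustration, a claim against this class might look like the following sketch (name, namespace, and size are hypothetical):

```yaml
# Hypothetical PVC for one instance of a self-replicating app
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data-1        # illustrative name
  namespace: databases   # illustrative namespace
spec:
  accessModes:
    - ReadWriteOnce      # RWO, as this class provides
  storageClassName: longhorn-redundant-app
  resources:
    requests:
      storage: 10Gi
```

Since the application keeps its own copies, a single Longhorn replica per volume avoids paying for redundancy twice.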
### longhorn (2 replicas) - DEFAULT

**Managed by:** Longhorn Helm chart (configured via `longhorn_replica_count` in `kube.tf`)

**Configuration:**

- 2 replicas per volume
- XFS filesystem
- Immediate volume binding
- Volume expansion allowed

**When to use:** the default for most applications (Prometheus, Loki, general data)
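Because `longhorn` is the cluster default, a PVC that omits `storageClassName` lands on it. A minimal sketch (name and size are illustrative):

```yaml
# Hypothetical PVC; no storageClassName, so the default "longhorn" class is used
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```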
### longhorn-ha (3 replicas)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-ha
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  dataLocality: "disabled"
  staleReplicaTimeout: "30"
  fsType: "xfs"
  dataEngine: "v1"
```
**When to use:** mission-critical data (audit logs, backups, compliance data)
### longhorn-rwx (2 replicas, ReadWriteMany)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-rwx
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "30"
  fsType: "ext4"
  nfsOptions: "vers=4.1,noresvport,softerr,timeo=600,retrans=5"
```
**When to use:** volumes that need to be mounted by multiple pods simultaneously:

- Static site hosting (kup6s-pages nginx + syncer)
- Shared assets across application replicas
- Multi-pod file processing pipelines
**How it works:** Longhorn (v1.5+) creates an NFS export from the volume, allowing multiple pods to mount it concurrently. The underlying Longhorn volume still maintains 2 replicas for data redundancy.
**Note:** uses `ext4` instead of `xfs` because the NFS export works better with `ext4` for shared access patterns.
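A sketch of an RWX claim; multiple pods can then reference the same `claimName` concurrently (name and size are hypothetical):

```yaml
# Hypothetical shared PVC; every pod mounting it sees the same NFS-backed volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-site-content   # illustrative name
spec:
  accessModes:
    - ReadWriteMany           # RWX: concurrent mounts from multiple pods
  storageClassName: longhorn-rwx
  resources:
    requests:
      storage: 5Gi
```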
## Parameter Reference

### numberOfReplicas

- **Values:** `"1"`, `"2"`, `"3"`
- **Effect:** number of data copies kept across cluster nodes
- **Cost impact:** linear (3 replicas = 3x storage)
### dataLocality

- `best-effort`: try to place one replica on the same node as the pod (faster access; falls back if impossible)
- `disabled`: distribute replicas purely by available space (maximum HA flexibility)
### staleReplicaTimeout

- **Value:** `"30"` (minutes)
- **Effect:** time before a disconnected replica is marked stale
### fsType

- **Value:** `"xfs"` (`"ext4"` for `longhorn-rwx`)
- **Why:** better performance for large files than `ext4`
### dataEngine

- **Value:** `"v1"`
- **Options:** `v1` (stable), `v2` (experimental)
## Changing Replica Counts

### For Existing Volumes
```bash
# Reduce from 3 to 2 replicas
kubectl patch volume.longhorn.io -n longhorn-system <volume-name> \
  --type=merge -p '{"spec":{"numberOfReplicas":2}}'

# Increase from 2 to 3 replicas
kubectl patch volume.longhorn.io -n longhorn-system <volume-name> \
  --type=merge -p '{"spec":{"numberOfReplicas":3}}'
```
Longhorn will automatically add or remove replicas to match. Monitor progress in the Longhorn UI.
### For New Volumes

The StorageClass of an existing PVC cannot be changed. Create a new PVC with the desired class and migrate the data.
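One way to do the migration, sketched under the assumption that the workload is scaled down first and that both claims live in the same namespace (all names hypothetical):

```yaml
# Hypothetical one-off Job that copies data from the old PVC to the new one
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-data
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: copy
          image: alpine:3.19
          # cp -a preserves ownership, permissions, and timestamps
          command: ["sh", "-c", "cp -a /old/. /new/"]
          volumeMounts:
            - name: old
              mountPath: /old
            - name: new
              mountPath: /new
      volumes:
        - name: old
          persistentVolumeClaim:
            claimName: my-data      # existing PVC
        - name: new
          persistentVolumeClaim:
            claimName: my-data-ha   # new PVC with the desired class
```

After the Job completes, point the workload at the new claim and delete the old PVC.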
## Monitoring

### Check Volume Status

```bash
kubectl get volumes.longhorn.io -n longhorn-system \
  -o custom-columns=NAME:.metadata.name,REPLICAS:.spec.numberOfReplicas,STATE:.status.state,ROBUSTNESS:.status.robustness
```
### Check Storage Class Usage

```bash
kubectl get pvc -A -o custom-columns=\
NAMESPACE:.metadata.namespace,\
NAME:.metadata.name,\
CLASS:.spec.storageClassName,\
SIZE:.spec.resources.requests.storage
```
## Troubleshooting

### Volume Shows "Degraded"

**Cause:** one or more replicas are unhealthy.

**Check:**

```bash
kubectl describe volume.longhorn.io -n longhorn-system <volume-name>
```

**Fix:** Longhorn rebuilds degraded replicas automatically. If rebuilding is stuck, check node disk space.
### Out of Storage Space

**Solutions:**

- Reduce the replica count on appropriate volumes
- Clean up unused PVCs
- Add storage nodes to the cluster
## Infrastructure Configuration

- **Storage Classes:** `kube-hetzner/extra-manifests/40-F-longhorn-storage-classes.yaml.tpl`
- **Default Replica Count:** `kube-hetzner/kube.tf` line 447 (`longhorn_replica_count = 2`)

**Note:** changing `longhorn_replica_count` only affects the Helm-managed `longhorn` StorageClass.
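For orientation, the relevant `kube.tf` setting is a single variable; a sketch of that line (surrounding context omitted):

```hcl
# kube-hetzner/kube.tf - replica count for the Helm-managed "longhorn" class only
longhorn_replica_count = 2
```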