# Choose the Right Storage Class

Learn how to select the optimal Longhorn storage class for your application based on redundancy requirements and cost constraints.
## Quick Decision Guide

Use `longhorn-redundant-app` (1 replica) when:

- Your application has 2+ replicas with built-in data replication
- Examples: CNPG PostgreSQL clusters, Redis clusters, Kafka

Use `longhorn` (2 replicas, default) when:

- Running single-instance applications
- Standard data needs protection but isn't mission-critical
- Examples: Prometheus, Loki, single-instance databases

Use `longhorn-ha` (3 replicas) when:

- Mission-critical data absolutely cannot be lost
- Compliance or audit requirements apply
- Examples: financial records, audit logs, critical backups

Use `longhorn-rwx` (2 replicas, ReadWriteMany) when:

- Multiple pods need to access the same volume simultaneously
- You need shared file storage across replicas
- Examples: kup6s-pages (nginx + syncer), shared assets, CMS uploads
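As an illustration of the last case, a ReadWriteMany claim might look like the sketch below (the name and namespace are hypothetical placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets   # hypothetical name
  namespace: web        # hypothetical namespace
spec:
  storageClassName: longhorn-rwx
  accessModes:
    - ReadWriteMany     # multiple pods can mount this volume at once
  resources:
    requests:
      storage: 5Gi
```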
## Understanding Storage Replication

### The Redundancy Problem

When you combine application-level replication with storage-level replication, you multiply redundancy:

**Bad example:** PostgreSQL with 3 CNPG replicas + 3 Longhorn replicas

- Data replicated 3 times at the database level (CNPG)
- Each DB replica stored 3 times (Longhorn)
- Total: 9 copies of the data (wasteful!)

**Good example:** PostgreSQL with 3 CNPG replicas + 1 Longhorn replica

- Data replicated 3 times at the database level (CNPG)
- Each DB replica stored 1 time (Longhorn)
- Total: 3 copies of the data (optimal!)
## Real-World Examples

### Example 1: GitLab PostgreSQL (CNPG)

**Application setup:**

- CloudNativePG cluster with 3 PostgreSQL instances
- Built-in streaming replication between instances
- Automatic failover if the primary fails

**Storage class choice:** `longhorn-redundant-app`

**Rationale:**

- CNPG already provides 3 copies of the data
- Each pod needs its own volume (1 per replica)
- Longhorn replication would be redundant
- Saves 66% storage cost vs 3 Longhorn replicas

**PVC example:**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-postgres-1
  namespace: gitlabbda
spec:
  storageClassName: longhorn-redundant-app
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
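Note that with CNPG you normally don't create these PVCs by hand; the operator provisions one per instance from the `Cluster` resource. A sketch of how the storage class is set there (the cluster name is a hypothetical placeholder):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: gitlab-postgres   # hypothetical cluster name
spec:
  instances: 3            # CNPG streams data between these instances
  storage:
    size: 10Gi
    storageClass: longhorn-redundant-app  # 1 Longhorn replica per instance
```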
### Example 2: Prometheus (No Built-in Replication)

**Application setup:**

- 2 Prometheus instances for HA
- Each instance has independent storage
- No data replication between instances (each scrapes independently)

**Storage class choice:** `longhorn` (default, 2 replicas)

**Rationale:**

- Prometheus doesn't replicate data between instances
- Losing one instance's storage means losing that historical data
- 2 Longhorn replicas protect against a single node failure
- Acceptable risk vs cost tradeoff

**PVC example:**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data-0
  namespace: monitoring
spec:
  storageClassName: longhorn  # or omit (uses default)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
### Example 3: Redis Cluster

**Application setup:**

- Redis cluster with 6 nodes (3 masters + 3 replicas)
- Built-in Redis replication
- Automatic failover via Redis Sentinel

**Storage class choice:** `longhorn-redundant-app`

**Rationale:**

- The Redis cluster already provides data redundancy
- Each Redis node holds a replica of the data
- Longhorn only needs to provide basic storage, without extra duplication
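For a StatefulSet-based Redis deployment, the storage class goes in the `volumeClaimTemplates`; a minimal sketch (the template name is a hypothetical placeholder):

```yaml
# Excerpt from a hypothetical Redis StatefulSet spec
volumeClaimTemplates:
  - metadata:
      name: redis-data   # hypothetical template name
    spec:
      storageClassName: longhorn-redundant-app  # Redis handles redundancy
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
```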
### Example 4: Audit Logs (Compliance Requirement)

**Application setup:**

- Single-instance audit log aggregator
- Regulatory requirement to never lose logs
- High compliance risk if data is lost

**Storage class choice:** `longhorn-ha` (3 replicas)

**Rationale:**

- No application-level replication
- Must survive 2 simultaneous node failures
- The compliance requirement justifies the higher cost
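A corresponding claim might look like this (name, namespace, and size are hypothetical placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: audit-logs        # hypothetical name
  namespace: compliance   # hypothetical namespace
spec:
  storageClassName: longhorn-ha  # 3 replicas survive 2 node failures
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```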
## Data Locality Impact

### What is Data Locality?

Longhorn's `dataLocality` parameter controls replica placement relative to the pod:

**`best-effort`** (used for the 1- and 2-replica classes): tries to place one replica on the same node as the pod.

- Faster: local disk access (no network latency)
- Flexible: falls back if local placement is impossible

**`disabled`** (used for the 3-replica class): replicas are spread across nodes based on available space.

- Better HA: more distributed means it survives more failure scenarios
- Slower: all access goes over the network
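For reference, these settings live in the StorageClass definition itself. A sketch of what the `longhorn-redundant-app` class might look like (the exact parameters depend on your cluster's setup):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-redundant-app
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"        # the app layer provides redundancy
  dataLocality: "best-effort"  # prefer a replica on the pod's node
```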
### Performance Characteristics

| Storage Class | Replicas | Locality | Typical Read Latency | Use Case |
|---|---|---|---|---|
| `longhorn-redundant-app` | 1 | best-effort | <1ms (local) | App with replication |
| `longhorn` | 2 | best-effort | <1ms local, ~2-5ms remote | General purpose |
| `longhorn-ha` | 3 | disabled | ~2-5ms (network) | Mission-critical |
## Cost Analysis

### Storage Multiplication

For a 100Gi PVC:

| Storage Class | Replicas | Actual Storage Used | Monthly Cost* |
|---|---|---|---|
| `longhorn-redundant-app` | 1 | 100Gi | Baseline |
| `longhorn` | 2 | 200Gi | 2x baseline |
| `longhorn-ha` | 3 | 300Gi | 3x baseline |

*Cost is in disk space on the nodes (no external charges).
### Cluster-Wide Impact

Example cluster with:

- 10x PostgreSQL databases @ 10Gi each (CNPG with 3 replicas)
- 5x single-instance apps @ 5Gi each

Bad approach (all 3 replicas):

- PostgreSQL: 10 × 10Gi × 3 DB replicas × 3 storage replicas = 900Gi
- Apps: 5 × 5Gi × 3 = 75Gi
- Total: 975Gi

Optimized approach:

- PostgreSQL: 10 × 10Gi × 3 DB replicas × 1 storage replica = 300Gi
- Apps: 5 × 5Gi × 2 = 50Gi
- Total: 350Gi

Savings: 64%
## Common Mistakes

### Mistake 1: Using the Default for Everything

Don't:

```yaml
# PostgreSQL with CNPG - using default (2 replicas)
storageClassName: longhorn  # Wastes 2x storage
```

Do:

```yaml
# PostgreSQL with CNPG - using redundant-app (1 replica)
storageClassName: longhorn-redundant-app  # Optimal
```

### Mistake 2: Over-Engineering for Development

Don't:

```yaml
# Dev environment database
storageClassName: longhorn-ha  # Overkill for dev
```

Do:

```yaml
# Dev environment database
storageClassName: longhorn  # Adequate for dev
```

### Mistake 3: Under-Protecting Critical Data

Don't:

```yaml
# Production financial database (single instance)
storageClassName: longhorn-redundant-app  # Wrong - no app replication!
```

Do:

```yaml
# Production financial database (single instance)
storageClassName: longhorn-ha  # Correct - needs storage HA
```
## Changing Storage Classes

### For New PVCs

Simply specify the desired `storageClassName`:

```yaml
spec:
  storageClassName: longhorn-redundant-app
```

### For Existing PVCs

**Warning:** You cannot change the storage class of an existing PVC. You must migrate:

1. Create a new PVC with the desired storage class
2. Copy the data from the old PVC to the new one
3. Update the application to use the new PVC
4. Delete the old PVC

Or: delete and recreate the PVC (if the data is backed up elsewhere).
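One common way to copy the data from the old PVC to the new one is a short-lived Job that mounts both volumes; a sketch, assuming the application is scaled down first (Job and PVC names are hypothetical placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pvc-migrate   # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: copy
          image: busybox
          # Copy everything, preserving attributes, from old to new
          command: ["sh", "-c", "cp -a /old/. /new/"]
          volumeMounts:
            - { name: old, mountPath: /old }
            - { name: new, mountPath: /new }
      volumes:
        - name: old
          persistentVolumeClaim: { claimName: old-pvc }  # hypothetical
        - name: new
          persistentVolumeClaim: { claimName: new-pvc }  # hypothetical
```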