# Resource Requirements

## Overview
This document provides complete resource specifications for GitLab BDA. All values are for 2-5 concurrent users unless otherwise specified.
Total resources (2-5 users):

- CPU: 2.2 cores (requests), 8.0 cores (limits)
- Memory: 8.6Gi (requests), 13.5Gi (limits)
- Block storage: 50Gi logical (20Gi Hetzner Volumes + 30Gi Longhorn); 60Gi actual with Longhorn replication
- Object storage: variable (50-200GB typical)
## Component Resource Breakdown

### GitLab Components
| Component | Replicas | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|---|
| Webservice | 2 | 300m × 2 = 600m | 1000m × 2 = 2000m | 2Gi × 2 = 4Gi | 2.5Gi × 2 = 5Gi |
| Workhorse | 2 | Included | Included | Included | Included |
| Sidekiq | 1 | 200m | 500m | 1.5Gi | 2Gi |
| Gitaly | 1 | 200m | 500m | 768Mi | 1536Mi |
| Shell | 1 | 50m | 200m | 128Mi | 256Mi |
| Pages | 1 | 50m | 200m | 128Mi | 256Mi |
| Toolbox | 1 | 100m | 300m | 256Mi | 512Mi |
| GitLab Runner | 3 | 100m × 3 = 300m | 500m × 3 = 1500m | 256Mi × 3 = 768Mi | 512Mi × 3 = 1536Mi |

Workhorse runs as a sidecar container in the Webservice pods, so its resources are included in the Webservice figures.
GitLab total:

- CPU: 1.5 cores (requests), 5.2 cores (limits)
- Memory: 7.5Gi (requests), 11Gi (limits)
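These figures map directly onto Helm values. Below is a minimal sketch for three of the components, assuming the official `gitlab/gitlab` chart's key layout (`gitlab.<component>.resources`); verify the exact paths against your chart version's `values.yaml`:

```yaml
# Sketch only - key paths assume the official gitlab/gitlab Helm chart.
gitlab:
  webservice:
    minReplicas: 2            # the chart manages replicas via an HPA range
    maxReplicas: 2
    resources:
      requests: { cpu: 300m, memory: 2Gi }
      limits:   { cpu: 1000m, memory: 2.5Gi }
  sidekiq:
    resources:
      requests: { cpu: 200m, memory: 1.5Gi }
      limits:   { cpu: 500m, memory: 2Gi }
  gitaly:
    resources:
      requests: { cpu: 200m, memory: 768Mi }
      limits:   { cpu: 500m, memory: 1536Mi }
```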
### External Dependencies

| Component | Replicas | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|---|
| PostgreSQL (CNPG) | 2 | 100m × 2 = 200m | 500m × 2 = 1000m | 256Mi × 2 = 512Mi | 512Mi × 2 = 1Gi |
| PgBouncer Pooler | 2 | 50m × 2 = 100m | 200m × 2 = 400m | 64Mi × 2 = 128Mi | 128Mi × 2 = 256Mi |
| Redis | 1 | 100m | 500m | 128Mi | 512Mi |
Dependencies total:

- CPU: 400m (requests), 1.9 cores (limits)
- Memory: 768Mi (requests), 1.75Gi (limits)
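In CloudNativePG these values live on the `Cluster` resource itself. A minimal sketch matching the table (the cluster name is illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: gitlab-postgres        # illustrative name
spec:
  instances: 2                 # primary + standby, as in the table
  resources:
    requests: { cpu: 100m, memory: 256Mi }
    limits:   { cpu: 500m, memory: 512Mi }
  storage:
    size: 10Gi                 # per-instance PVC (see Longhorn volumes below)
```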
### Harbor Components

| Component | Replicas | CPU Request | CPU Limit | Memory Request | Memory Limit |
|---|---|---|---|---|---|
| Core | 1 | 100m | 250m | 128Mi | 256Mi |
| Registry | 1 | 100m | 250m | 128Mi | 256Mi |
| JobService | 1 | 50m | 200m | 64Mi | 128Mi |
| Portal | 1 | 50m | 200m | 64Mi | 128Mi |
Harbor total:

- CPU: 300m (requests), 900m (limits)
- Memory: 384Mi (requests), 768Mi (limits)
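As with GitLab, these land in Helm values. A sketch for two of the components, assuming the `goharbor/harbor` chart's layout; check the chart's `values.yaml` for the exact nesting (the registry pod, for example, has per-container resource blocks):

```yaml
# Sketch only - key paths assume the goharbor/harbor Helm chart.
core:
  resources:
    requests: { cpu: 100m, memory: 128Mi }
    limits:   { cpu: 250m, memory: 256Mi }
portal:
  resources:
    requests: { cpu: 50m, memory: 64Mi }
    limits:   { cpu: 200m, memory: 128Mi }
```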
### Grand Total (All Components)

| Resource | Request | Limit |
|---|---|---|
| CPU | 2.2 cores | 8.0 cores |
| Memory | 8.6Gi | 13.5Gi |
## Storage Requirements

### Block Storage (Persistent Volumes)

#### Hetzner Cloud Volumes
| Volume | Size | Storage Class | Used By | Purpose |
|---|---|---|---|---|
| `gitaly-data` | 20Gi | | Gitaly | Git repositories |
Total Hetzner Volumes: 20Gi

Cost: €1/month (€0.05/GB/month × 20GB)
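A Hetzner volume is consumed as an ordinary PVC. A sketch for `gitaly-data`, assuming `hcloud-volumes` as the CSI driver's storage class name (the class name was not captured in the table above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitaly-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: hcloud-volumes   # assumed default class of the Hetzner CSI driver
  resources:
    requests:
      storage: 20Gi
```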
#### Longhorn Volumes

| PVC | Size | Storage Class | Replicas | Actual Storage | Used By | Purpose |
|---|---|---|---|---|---|---|
| `gitlab-postgres-1` | 10Gi | | 1 | 10Gi | PostgreSQL Primary | Database |
| `gitlab-postgres-2` | 10Gi | | 1 | 10Gi | PostgreSQL Standby | Database |
| `redis-data` | 10Gi | | 2 | 20Gi | Redis | Cache + Job Queue |
Total Longhorn (logical): 30Gi

Total Longhorn (actual): 40Gi (with replication)

Calculation:

- PostgreSQL: 10Gi × 2 instances × 1 replica = 20Gi
- Redis: 10Gi × 2 replicas = 20Gi
- Total: 40Gi cluster storage
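The replica counts in the table come from the storage class, not the PVC. A sketch of a two-replica Longhorn class (the class name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-2-replicas    # illustrative name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"        # doubles actual storage, as in the redis-data row
  staleReplicaTimeout: "30"
```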
### Object Storage (S3)

| Bucket | Purpose | Est. Size (2-5 users) | Growth Rate |
|---|---|---|---|
| `artifacts` | CI/CD artifacts | 5-20GB | 1-2GB/month |
| `uploads` | User uploads | 1-5GB | 0.5-1GB/month |
| `lfs` | Git LFS objects | 0-50GB | 0-5GB/month |
| `pages` | Static sites | 0.1-5GB | 0.1-0.5GB/month |
| `registry` | Container images | 10-100GB | 2-10GB/month |
| `backups` | GitLab backups | 20-50GB | Stable (rotation) |
| `postgresbackups` | CNPG backups | 5-20GB | Stable (30d retention) |
| `cache` | Build cache | 5-20GB | Stable (LRU) |
Total S3 (typical): 50-200GB

Cost (150GB avg): €1.50/month (€0.01/GB/month)
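GitLab reaches these buckets through an object-storage connection definition (the YAML that goes into the Rails `connection` secret). A sketch with placeholder credentials and an assumed Hetzner endpoint:

```yaml
# Placeholder values - substitute your real endpoint and credentials.
provider: AWS
region: fsn1                                     # assumed Hetzner region
endpoint: https://fsn1.your-objectstorage.com    # assumed Hetzner S3 endpoint
path_style: true
aws_access_key_id: <ACCESS_KEY>
aws_secret_access_key: <SECRET_KEY>
```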
## Scaling Guidelines

### 10-20 Users

Changes needed:

```yaml
replicas:
  webservice: 3   # +1 (more HTTP capacity)
  sidekiq: 2      # +1 (more job processing)
  runners: 5      # +2 (more concurrent CI jobs)

storage:
  longhorn:
    redis: 20Gi        # 2× (more cache)
    postgresql: 20Gi   # 2× per instance (larger database)
```
Resource impact:

- CPU: +1 core (requests), +2 cores (limits)
- Memory: +3Gi (requests), +5Gi (limits)
- Storage: +20Gi Longhorn, +100GB S3
### 50+ Users

Major changes needed:

```yaml
replicas:
  webservice: 5
  sidekiq: 4
  runners: 10
```

New components:

- Gitaly Cluster (Praefect) - 3 Gitaly instances
- Redis Sentinel - 3 Redis instances
- Separate Harbor PostgreSQL
Resource impact:

- CPU: +5 cores (requests), +10 cores (limits)
- Memory: +15Gi (requests), +25Gi (limits)
- Storage: +100Gi Longhorn, +500GB S3
For scaling architecture, see GitLab Components: Scaling.
## Node Requirements (KUP6S Cluster)

### Current Allocation

GitLab BDA pods are scheduled on:

- ARM64 nodes (3 nodes): GitLab pods, PostgreSQL, Redis
- AMD64 nodes (2 nodes): Harbor pods (AMD64-only images)
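This split relies on the standard `kubernetes.io/arch` node label. For example, a pod-spec fragment pinning the AMD64-only Harbor images to the AMD64 nodes:

```yaml
# Pod-spec fragment (e.g., under a Deployment's spec.template.spec)
nodeSelector:
  kubernetes.io/arch: amd64
```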
Node capacity (per node):

- ARM64 (CAX11): 2 vCPUs, 4GB RAM
- AMD64 (CX22): 2 vCPUs, 4GB RAM

Cluster total:

- CPU: 10 vCPUs (3 × 2 ARM + 2 × 2 AMD)
- Memory: 20GB (3 × 4 ARM + 2 × 4 AMD)
GitLab BDA usage (% of cluster):

- CPU requests: 2.2 / 10 = 22%
- CPU limits: 8.0 / 10 = 80% (overcommit)
- Memory requests: 8.6Gi / 20GB ≈ 43%
- Memory limits: 13.5Gi / 20GB ≈ 68% (overcommit)
Conclusion: Adequate capacity for 2-5 users. Scaling to 10-20 users requires adding 1-2 nodes.
## Resource Optimization

### Current Optimizations

- **External PostgreSQL/Redis** - dedicated resources (not bundled with GitLab)
- **Longhorn redundant-app** - 1 replica for PostgreSQL (app-level replication provides redundancy)
- **PgBouncer Pooler** - connection pooling (reduces DB connections)
- **S3 object storage** - offloads large files (artifacts, images, uploads)
### Future Optimizations

When scaling beyond 20 users:

**Horizontal Pod Autoscaling (HPA):**

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-webservice        # example name
spec:
  scaleTargetRef:
    apiVersion: apps/v1          # target apiVersion/kind added for a valid manifest
    kind: Deployment
    name: gitlab-webservice
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

**Vertical Pod Autoscaling (VPA)** - auto-adjust requests/limits.

**Node Affinity** - spread pods across nodes for HA (see the spread sketch after the quota example below).

**Resource Quotas** - prevent resource exhaustion:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gitlabbda-quota
spec:
  hard:
    requests.cpu: "5"
    requests.memory: 15Gi
    limits.cpu: "10"
    limits.memory: 25Gi
```
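For the node-affinity item above, a spread sketch for the two webservice replicas; the `app: webservice` label is an assumption about how the pods are labelled:

```yaml
# Pod-spec fragment spreading replicas across nodes for HA
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: webservice        # assumed pod label
```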
## Backup Storage Requirements

### GitLab Backups (S3)

Backup size (full backup):

- Database dump: 5-10GB (2-5 users)
- Repositories tar: 10-20GB (depends on repo count)
- Uploads: 1-5GB
- LFS: 0-50GB
- Total per backup: 20-80GB
Retention (7 daily backups): 140-560GB total.

Recommendation: keep 7 daily, 4 weekly, and 3 monthly backups:

- Daily: 7 × 50GB = 350GB
- Weekly: 4 × 50GB = 200GB (incremental)
- Monthly: 3 × 50GB = 150GB (incremental)
- Total: ~700GB (with deduplication)
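The daily cadence above maps to the toolbox backup cron in the chart. A sketch, assuming the `gitlab.toolbox.backups.cron` key layout of the official chart:

```yaml
# Sketch only - verify key paths against your gitlab/gitlab chart version.
gitlab:
  toolbox:
    backups:
      cron:
        enabled: true
        schedule: "0 2 * * *"   # one full backup per day at 02:00
```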
### PostgreSQL Backups (S3)

- Base backup size: 5-10GB (full database snapshot)
- WAL archives: 1-5GB/day (write-ahead logs)

Retention (30 days):

- Base backups: 10 backups × 10GB = 100GB
- WAL archives: 30 days × 5GB = 150GB
- Total: 250GB
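The 30-day retention and the `postgresbackups` bucket translate into the CNPG cluster's backup section. A sketch with a placeholder endpoint:

```yaml
# Excerpt of the CNPG Cluster spec (placeholder endpoint).
spec:
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      destinationPath: s3://postgresbackups
      endpointURL: https://<s3-endpoint>   # placeholder
```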
## Performance Benchmarks

### Expected Performance (2-5 Users)

| Operation | Metric | Target | Actual |
|---|---|---|---|
| Git clone (1GB repo) | Duration | < 30s | 15-25s |
| Git push (100MB) | Duration | < 10s | 5-10s |
| Web UI (issue view) | Latency (p95) | < 500ms | 200-400ms |
| CI/CD job (npm install + test) | Duration | < 5 min | 3-4 min |
| Docker push (500MB image) | Duration | < 2 min | 1-2 min |
| Docker pull (500MB image) | Duration | < 1 min | 30-60s |
### Database Performance

| Metric | Target | Notes |
|---|---|---|
| Max connections | 200 | Pooler handles 1000 client → 25 backend connections |
| Queries/sec | < 1000 | Typical: 100-300 q/s |
| Replication lag | < 1s | Typically < 100ms |
| Slow queries (>1s) | < 1% | Monitor with Loki |
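The 1000-client / 25-backend figure corresponds to PgBouncer's `max_client_conn` and `default_pool_size`. A CNPG `Pooler` sketch (names are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: gitlab-postgres-pooler   # illustrative name
spec:
  cluster:
    name: gitlab-postgres        # illustrative cluster name
  instances: 2
  type: rw
  pgbouncer:
    poolMode: transaction
    parameters:
      max_client_conn: "1000"
      default_pool_size: "25"
```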
### Storage Performance

| Storage Tier | Read Throughput | Write Throughput | IOPS |
|---|---|---|---|
| Hetzner Volumes | 250-500 MB/s | 100-250 MB/s | 5000-15000 |
| Longhorn (2 replicas) | 100-300 MB/s | 50-150 MB/s | 1000-5000 |
| Hetzner S3 | 10-100 MB/s | 10-50 MB/s | N/A (object storage) |
## Resource Monitoring

### Key Metrics to Watch

CPU:

- `container_cpu_usage_seconds_total` - CPU usage per container
- `kube_pod_container_resource_requests_cpu_cores` - CPU requests
- `kube_pod_container_resource_limits_cpu_cores` - CPU limits

Memory:

- `container_memory_working_set_bytes` - actual memory usage
- `kube_pod_container_resource_requests_memory_bytes` - memory requests
- `container_memory_failures_total` - OOMKills

Storage:

- `kubelet_volume_stats_used_bytes` - PVC usage
- `longhorn_volume_actual_size_bytes` - Longhorn volume size
- `longhorn_volume_usage_bytes` - Longhorn volume usage
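As a usage sketch, a hypothetical PrometheusRule alerting on one of the storage metrics above (name and threshold are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: gitlabbda-storage-alerts       # illustrative name
spec:
  groups:
    - name: gitlab-bda.storage
      rules:
        - alert: PvcAlmostFull
          expr: kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes > 0.85
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "PVC {{ $labels.persistentvolumeclaim }} is over 85% full"
```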
For monitoring setup, see Monitoring & Observability.
## Summary

Current resources (2-5 users):

- Compute: 2.2 CPU cores (requests), 8.6Gi memory (requests)
- Block storage: 60Gi (20Gi Hetzner + 40Gi Longhorn actual)
- Object storage: 50-200GB (typical 150GB)
Cost breakdown:

- Cluster nodes: managed by KUP6S cluster (shared cost)
- Hetzner Volumes: €1/month (20Gi)
- Longhorn: included (cluster storage)
- Hetzner S3: €1.50/month (150GB avg)
- Total GitLab BDA storage: ~€2.50/month
Scaling thresholds:

- 2-5 users: current configuration
- 10-20 users: +1 webservice, +1 sidekiq, +2 runners, 2× storage
- 50+ users: horizontal scaling (Gitaly Cluster, Redis Sentinel, separate Harbor DB)
For configuration, see Configuration Reference.