Optimize GitLab for Memory-Constrained Environments¶
Applies to: Small teams (2-5 users) with limited cluster resources
Goal: Reduce GitLab memory usage by ~2GB through performance tuning instead of increasing resource limits
Prerequisites:
GitLab BDA deployment using CDK8S (dp-infra/gitlabbda/)
kubectl access to the cluster
Basic understanding of GitLab architecture (webservice + sidekiq)
Overview¶
GitLab can consume significant memory even for small teams. This guide shows how to optimize Puma (webservice) and Sidekiq (background jobs) for memory-constrained environments using official GitLab recommendations.
Expected outcomes:
Webservice: 1958-2128Mi → ~1050-1100Mi (50% reduction per pod)
Sidekiq: 1266Mi → ~987Mi (22% reduction)
Total savings: ~2.1-2.5GB across all pods
No resource limit increases needed (cost-effective)
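As a sanity check, the quoted total can be reproduced from the per-pod figures above (midpoints of the Mi ranges, assuming 2 webservice replicas and a single sidekiq pod, which is deployment-specific):

```shell
# Back-of-envelope check of the quoted savings.
# Assumes 2 webservice replicas and 1 sidekiq pod (deployment-specific).
awk 'BEGIN {
  ws_before = 2043; ws_after = 1075   # midpoints of the per-pod Mi ranges
  sk_before = 1266; sk_after = 987
  saved = (ws_before - ws_after) * 2 + (sk_before - sk_after)
  printf "approx. savings: %d Mi (~%.1f GB)\n", saved, saved / 1024
}'
```

which lands inside the ~2.1-2.5 GB range quoted above.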
Step 1: Verify Current Memory Usage¶
Check current memory consumption to establish baseline:
kubectl top pods -n gitlabbda -l 'app in (webservice,sidekiq)' --containers
Look for:
Webservice pods near or exceeding their 2Gi memory request
Sidekiq pods using >1Gi memory
High memory pressure (>90% utilization)
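Rather than eyeballing the numbers, the `kubectl top` output can be piped through a short awk filter that flags containers above 90% of their request. The sample output below is inlined purely for illustration (pod names are made up), and the 2Gi/1.5Gi requests are assumptions taken from this deployment's resource settings; in practice, pipe the real `kubectl top` command into the awk stage:

```shell
# Flag containers whose memory usage exceeds 90% of their request.
# Sample `kubectl top pods --containers` output is inlined for illustration;
# in practice, pipe the real command into the awk stage.
printf '%s\n' \
  'POD                       NAME        CPU(cores)   MEMORY(bytes)' \
  'gitlab-webservice-abc-1   webservice  250m         2043Mi' \
  'gitlab-sidekiq-xyz-0      sidekiq     120m         1266Mi' |
awk 'NR > 1 {
  req = ($2 == "webservice") ? 2048 : 1536   # request in Mi (assumed)
  mem = $4; sub(/Mi$/, "", mem)              # strip the Mi suffix
  pct = 100 * mem / req
  if (pct > 90) printf "%s/%s: %sMi (%.0f%% of request)\n", $1, $2, mem, pct
}'
```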
Step 2: Update Performance Configuration¶
Edit the GitLab configuration file to enable memory-constrained tuning:
cd dp-infra/gitlabbda
Add or update the performance section in config.yaml:
# Performance Tuning for Memory-Constrained Environment (2-5 users)
# Based on: https://docs.gitlab.com/omnibus/settings/memory_constrained_envs.html
performance:
  webservice:
    # Puma configuration
    # Single-process mode (workerProcesses: 0) saves 100-400 MB per replica
    # For 2 replicas: ~200-800 MB total savings
    workerProcesses: 0        # Was: 2 (single-process mode for low traffic)
    threadsMin: 4             # Keep at 4 (handles concurrent requests)
    threadsMax: 4
    disableWorkerKiller: true # Disable memory killer in single-process mode
  sidekiq:
    # Concurrency configuration
    # Reducing from 20 to 10 saves ~400-500 MB
    # Concurrency of 10 is more than sufficient for 2-5 users
    concurrency: 10             # Was: 20 (background job parallelism)
    memoryKillerMaxRss: 1000000 # Was: 2000000 (1GB limit, down from 2GB)
Configuration explained:
workerProcesses: 0: Single-process Puma mode
Each worker process copies the Rails application into memory
Single process = no duplication = major memory savings
Sufficient for low-concurrency environments (2-5 users)
Performance impact: Minimal for small teams
threadsMin/Max: 4: Thread pool size
Handles concurrent requests within the single process
4 threads balances concurrency vs memory
Keep this even in single-process mode
disableWorkerKiller: true: Disable the Puma worker memory killer
Not needed in single-process mode (no workers to kill)
Avoids unnecessary restart cycles
concurrency: 10: Sidekiq job parallelism
Reduces the number of concurrent background jobs
Each job consumes memory
10 concurrent jobs is sufficient for small teams
GitLab will still process all jobs, just with lower parallelism
memoryKillerMaxRss: 1000000: Sidekiq memory limit (1 GB)
Restarts Sidekiq if it exceeds 1 GB of memory
Prevents runaway memory growth
Lower than the default 2 GB due to reduced concurrency
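One unit pitfall worth calling out: memoryKillerMaxRss is expressed in kilobytes (matching GitLab's SIDEKIQ_MEMORY_KILLER_MAX_RSS variable), not bytes. A quick conversion confirms the value above is roughly 1 GB:

```shell
# memoryKillerMaxRss is specified in kilobytes, so 1000000 KB ~= 1 GB.
awk 'BEGIN {
  rss_kb = 1000000
  printf "max RSS: %.0f MiB (%.2f GiB)\n", rss_kb / 1024, rss_kb / 1024 / 1024
}'
```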
Step 3: Rebuild and Deploy¶
Generate new Kubernetes manifests with the performance configuration:
# From dp-infra/gitlabbda/ directory
npm run build
Verify the build:
# Check that environment variables are set correctly
grep -A 2 "WORKER_PROCESSES\|SIDEKIQ_CONCURRENCY" manifests/gitlab.k8s.yaml
# Expected output:
# - name: WORKER_PROCESSES
# value: "0"
# - name: SIDEKIQ_CONCURRENCY
# value: "10"
Commit and push the changes:
git add config.yaml charts/ manifests/
git commit -m "perf: optimize GitLab for memory-constrained environment"
git push
Step 4: Trigger ArgoCD Sync¶
ArgoCD will automatically sync the changes if auto-sync is enabled. To manually trigger:
# Trigger hard refresh to pick up latest commit
kubectl annotate application gitlabbda-app-* -n argocd \
argocd.argoproj.io/refresh=hard --overwrite
# Monitor sync status
kubectl get application -n argocd -l app.kubernetes.io/name=gitlabbda
Expected sync behavior:
Status changes: OutOfSync → Syncing → Synced
Health changes: Progressing → Healthy
No comparison errors (earlier versions had duplicate env var issues)
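For scripting (e.g., in CI), the wait can be automated with a small polling loop. The application name `gitlabbda-app` is illustrative, and the jsonpath fields are the standard ArgoCD Application status fields; the stub function lets the sketch run outside a cluster and should be removed for real use:

```shell
# Poll ArgoCD until the app reports Synced/Healthy (10s interval, 5 min cap).
# Stub so the sketch runs without a cluster -- remove it to use the real kubectl.
kubectl() { echo "Synced Healthy"; }

for i in $(seq 1 30); do
  status=$(kubectl get application gitlabbda-app -n argocd \
    -o jsonpath='{.status.sync.status} {.status.health.status}')
  [ "$status" = "Synced Healthy" ] && { echo "sync complete"; break; }
  sleep 10
done
```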
Step 5: Monitor Pod Rollout¶
Watch the rolling update of webservice and sidekiq pods:
# Monitor pod recreation
kubectl get pods -n gitlabbda -l 'app in (webservice,sidekiq)' -w
# Expected behavior:
# 1. New pods created with Init status
# 2. Init containers complete (2-5 minutes)
# 3. New pods become Ready
# 4. Old pods terminate gracefully
Rollout duration: Typically 5-10 minutes total
Init phase: 2-5 minutes per pod
Readiness probes: 30-60 seconds
Graceful termination of old pods: 30 seconds
Step 6: Verify Memory Savings¶
After all new pods are Ready, check actual memory usage:
kubectl top pods -n gitlabbda -l 'app in (webservice,sidekiq)' --containers
Expected results:
Webservice: ~1050-1100 Mi (down from 1958-2128 Mi)
Sidekiq: ~987 Mi (down from 1266 Mi)
Total savings: ~2.1-2.5 GB
Utilization percentages:
Webservice: 50-53% of 2Gi request (healthy headroom)
Sidekiq: 64% of 1.5Gi request (healthy headroom)
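The percentages follow directly from the pod requests (2Gi = 2048 Mi for webservice, 1.5Gi = 1536 Mi for sidekiq), using the post-tuning midpoint figures:

```shell
# Utilization = usage / request, using the post-tuning midpoint figures.
awk 'BEGIN {
  printf "webservice: %.0f%% of 2Gi\n",   100 * 1075 / 2048
  printf "sidekiq:    %.0f%% of 1.5Gi\n", 100 * 987 / 1536
}'
```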
Step 7: Validate GitLab Functionality¶
Ensure GitLab remains fully functional after optimization:
Web UI: Access GitLab and navigate through projects
Git operations: Clone, push, pull from a repository
CI/CD pipelines: Trigger a pipeline run
Background jobs: Check Sidekiq queue processing
kubectl logs -n gitlabbda -l app=sidekiq --tail=50
Performance expectations:
Web UI: No noticeable difference (still responsive)
Git operations: Unchanged (not affected by Puma workers)
CI/CD: May see slightly longer queue times under heavy load
Background jobs: Processing may be slower under high load (10 vs 20 concurrent)
Troubleshooting¶
Pods stuck in Init phase¶
Symptom: New pods remain in Init:0/3 or Init:1/3 for >5 minutes
Cause: Usually network or dependency issues (PostgreSQL, Redis)
Solution:
# Check init container logs
kubectl logs -n gitlabbda <pod-name> -c certificates
kubectl logs -n gitlabbda <pod-name> -c configure
kubectl logs -n gitlabbda <pod-name> -c dependencies
# Check PostgreSQL connectivity
kubectl get pods -n gitlabbda -l app=postgresql
Memory usage not decreasing¶
Symptom: Memory usage remains high after rollout
Cause: Either old pods still running or configuration not applied
Solution:
# Verify new pods are running (not old ones)
kubectl get pods -n gitlabbda -l app=webservice -o wide
# Check AGE column - should be <30 minutes
# Verify environment variables in running pod
kubectl exec -n gitlabbda <webservice-pod> -c webservice -- env | grep WORKER_PROCESSES
# Should output: WORKER_PROCESSES=0
kubectl exec -n gitlabbda <sidekiq-pod> -- env | grep SIDEKIQ_CONCURRENCY
# Should output: SIDEKIQ_CONCURRENCY=10
ArgoCD ComparisonError with duplicate environment variables¶
Symptom: ArgoCD shows “ComparisonError: duplicate environment variables”
Cause: Using extraEnv instead of Helm chart’s native parameters (fixed in commit 7d182f0)
Solution: Ensure you’re using the correct Helm chart parameters:
Use gitlab.webservice.workerProcesses (NOT extraEnv.WORKER_PROCESSES)
Use gitlab.webservice.puma.threads (NOT extraEnv.PUMA_THREADS_*)
Use gitlab.sidekiq.concurrency (NOT extraEnv.SIDEKIQ_CONCURRENCY)
Reference: gitlab-helm.ts
Performance degradation under load¶
Symptom: GitLab slow during peak usage or CI/CD runs
Cause: Single-process Puma and reduced Sidekiq concurrency hitting limits
Solution: This configuration is optimized for 2-5 users with low traffic. If you experience degradation:
Increase Puma threads first (memory-efficient):
performance:
  webservice:
    threadsMin: 6
    threadsMax: 6 # Increase from 4 to 6
If still slow, add one Puma worker (costs ~300-500 MB):
performance:
  webservice:
    workerProcesses: 1         # Increase from 0 to 1
    disableWorkerKiller: false # Re-enable worker killer
Last resort: increase Sidekiq concurrency:
performance:
  sidekiq:
    concurrency: 15             # Increase from 10 to 15
    memoryKillerMaxRss: 1500000 # Increase limit accordingly
Monitor memory usage after each change.
Reverting Changes¶
If you need to revert to default configuration:
# Remove or comment out the performance section in config.yaml
# performance:
# webservice:
# workerProcesses: 0
# ...
# Or restore defaults:
performance:
  webservice:
    workerProcesses: 2 # Default for production
    threadsMin: 4
    threadsMax: 4
    disableWorkerKiller: false
  sidekiq:
    concurrency: 20 # Default
    memoryKillerMaxRss: 2000000 # 2GB default
Then rebuild, commit, push, and sync as in Steps 3-4.
Implementation Notes¶
Date: 2025-11-18
Commits:
91d21b8 - Initial performance tuning implementation
7d182f0 - Fix ArgoCD comparison error (extraEnv → Helm parameters)
Actual results achieved:
Webservice: 1050-1059 Mi (52-53% utilization, was 98-106%)
Sidekiq: 987 Mi (64% utilization, was 82%)
Total savings: 2.1-2.5 GB (exceeded 1.2 GB expectation)