Configuration¶
Overview¶
GitLab BDA is configured via config.yaml in the deployment root directory. This file defines all deployment parameters, from domain names to replica counts.
Configuration location: deployments/gitlabbda/config.yaml
How configuration is used:
// main.ts reads config.yaml
import * as fs from 'fs';
import * as yaml from 'js-yaml';

const config = yaml.load(fs.readFileSync('config.yaml', 'utf8'));

// Constructs receive the parsed configuration
new GitLabChart(app, 'gitlab', config);
Environment variable overrides: All configuration values can be overridden by environment variables (see Environment Variables Reference).
Complete Configuration Reference¶
deployment¶
Top-level deployment mode configuration
deployment.isFreshInstall¶
Type: boolean
Default: true
Required: Yes
Controls GitLab upgrade check hook handling.
Values:
true - Fresh install mode (removes the upgrade check hook completely)
false - Upgrade mode (keeps the upgrade check hook with ArgoCD-compatible annotations)
Why this matters:
The GitLab Helm chart includes a pre-upgrade hook that ArgoCD can’t handle on an initial install (there is no previous release to upgrade from). Setting isFreshInstall: true removes this hook for the first deployment, preventing ArgoCD sync errors.
When to change:
First deployment: true (default)
After successful first deploy: change to false (allows GitLab upgrades)
Upgrading GitLab version: keep false
Example:
deployment:
  isFreshInstall: true  # First deploy
After first successful deployment:
deployment:
  isFreshInstall: false  # Enable upgrade checks
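The hook handling described above can be sketched as a post-render filter over the Helm manifests. This is an illustrative sketch, not the deployment's actual code; the manifest shape and function name are assumptions.

```typescript
// Hypothetical sketch: when isFreshInstall is true, drop any rendered
// resource that registers itself as a Helm pre-upgrade hook, so ArgoCD
// never tries to run the upgrade check on an initial install.
interface K8sManifest {
  kind: string;
  metadata: { name: string; annotations?: Record<string, string> };
}

function filterUpgradeCheckHook(
  manifests: K8sManifest[],
  isFreshInstall: boolean,
): K8sManifest[] {
  if (!isFreshInstall) return manifests; // upgrade mode: keep the hook
  return manifests.filter(
    (m) => m.metadata.annotations?.['helm.sh/hook'] !== 'pre-upgrade',
  );
}
```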
For technical details, see CDK8S Approach: Helm Integration.
versions¶
Component version configuration
All version fields use git tag format (e.g., v18.5.1).
versions.gitlab¶
Type: string (git tag)
Default: v18.5.1
Required: Yes
GitLab Helm chart version (NOT GitLab application version).
Format: v{MAJOR}.{MINOR}.{PATCH}
Version correlation:
Helm chart version → GitLab application version (1:1 mapping in modern charts)
Chart v18.5.1 → GitLab CE 18.5.1
Where to find versions:
Helm command:
helm search repo gitlab/gitlab --versions
Example:
versions:
  gitlab: v18.5.1  # Latest stable as of 2025-10-27
Upgrading:
# Before upgrade, check:
# - CHANGELOG for breaking changes
# - Database migration requirements
# - Backup completion
versions:
  gitlab: v18.6.0  # New version
versions.gitlabRunner¶
Type: string (git tag)
Default: alpine-v18.5.0
Required: Yes
GitLab Runner image tag.
Format: alpine-v{MAJOR}.{MINOR}.{PATCH}
Why alpine prefix:
alpine - Minimal base image (smaller, faster)
ubuntu - Full OS (use if alpine has compatibility issues)
Version compatibility:
Runner version should match GitLab version (major.minor)
Patch version can differ (e.g., GitLab 18.5.1 + Runner 18.5.0 = OK)
Example:
versions:
  gitlabRunner: alpine-v18.5.0
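The major.minor compatibility rule above can be checked mechanically. The helper below is a hypothetical sketch (not part of the deployment code) that strips the tag prefixes and compares major.minor, ignoring the patch version.

```typescript
// Extract "18.5" from "v18.5.1" or "alpine-v18.5.0".
function majorMinor(tag: string): string {
  const m = tag.match(/v(\d+)\.(\d+)\./);
  if (!m) throw new Error(`unrecognized tag format: ${tag}`);
  return `${m[1]}.${m[2]}`;
}

// Runner and GitLab chart are compatible when major.minor match;
// patch versions may differ.
function runnerMatchesGitlab(gitlab: string, runner: string): boolean {
  return majorMinor(gitlab) === majorMinor(runner);
}
```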
versions.harbor¶
Type: string (git tag)
Default: v2.14.0
Required: Yes
Harbor container registry version.
Format: v{MAJOR}.{MINOR}.{PATCH}
Where to find versions:
Check compatibility with PostgreSQL version (Harbor requires PostgreSQL 12+)
Example:
versions:
  harbor: v2.14.0  # Latest stable
For version compatibility matrix, see Version Compatibility Reference.
namespace¶
Kubernetes namespace for all resources
Type: string
Default: gitlabbda
Required: Yes
Validation: Kubernetes DNS label format ([a-z0-9]([-a-z0-9]*[a-z0-9])?)
All GitLab BDA resources are deployed in this namespace.
Example:
namespace: gitlabbda
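The DNS label rule quoted above can be applied as a runtime check. This is an assumed helper for illustration, using the validation regex from this section plus the standard 63-character label limit.

```typescript
// Kubernetes DNS label: lowercase alphanumerics and hyphens,
// must start and end with an alphanumeric, max 63 characters.
const DNS_LABEL = /^[a-z0-9]([-a-z0-9]*[a-z0-9])?$/;

function isValidNamespace(name: string): boolean {
  return name.length > 0 && name.length <= 63 && DNS_LABEL.test(name);
}
```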
Changing namespace (advanced):
# Deploy separate GitLab instance in different namespace
namespace: gitlab-staging # Separate from production
Impact of changing:
New namespace created (ArgoCD Application namespace must match)
Secrets replicated to new namespace (via ESO)
All services use new namespace DNS (e.g., redis.gitlab-staging.svc.cluster.local)
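The in-cluster DNS impact follows the standard Kubernetes service naming scheme; a minimal illustrative helper (the function name is an assumption, not deployment code):

```typescript
// A Kubernetes service is reachable at {service}.{namespace}.svc.cluster.local,
// so every service address changes when the namespace changes.
function serviceDns(service: string, namespace: string): string {
  return `${service}.${namespace}.svc.cluster.local`;
}
```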
domains¶
Public DNS domain configuration
All domains must have DNS A/AAAA records pointing to cluster LoadBalancer IP.
domains.gitlab¶
Type: string (FQDN)
Default: gitlab.staging.bluedynamics.eu
Required: Yes
Validation: Valid DNS name
Main GitLab web UI and API domain.
Used for:
Web UI: https://gitlab.staging.bluedynamics.eu
Git HTTP: https://gitlab.staging.bluedynamics.eu/user/repo.git
API: https://gitlab.staging.bluedynamics.eu/api/v4/*
TLS certificate:
Automatic via cert-manager + Let’s Encrypt
Uses cluster issuer: letsencrypt-cluster-issuer (from certManager config)
Example:
domains:
  gitlab: gitlab.example.com          # Production
  # or
  gitlab: gitlab.staging.example.com  # Staging
domains.pages¶
Type: string (FQDN with wildcard support)
Default: pages.staging.bluedynamics.eu
Required: Yes
GitLab Pages wildcard domain.
DNS requirements:
Wildcard DNS record: *.pages.staging.bluedynamics.eu → LoadBalancer IP
Without wildcard: each project needs a separate DNS entry (not scalable)
Used for:
Project pages: https://{project}.pages.staging.bluedynamics.eu
User pages: https://{username}.pages.staging.bluedynamics.eu
TLS certificate:
Wildcard certificate via cert-manager
Single cert covers all of *.pages.staging.bluedynamics.eu
Example:
domains:
  pages: pages.example.com  # Requires *.pages.example.com DNS
domains.harbor¶
Type: string (FQDN)
Default: registry.staging.bluedynamics.eu
Required: Yes
Harbor container registry domain.
Used for:
Web UI: https://registry.staging.bluedynamics.eu
Docker push/pull: docker push registry.staging.bluedynamics.eu/project/image:tag
TLS certificate:
Automatic via cert-manager
Required for docker CLI (insecure registries not supported in production)
Example:
domains:
  harbor: registry.example.com
For DNS setup, see How-To: Configure DNS (future).
s3¶
S3 object storage configuration (Hetzner)
All buckets use Hetzner S3 (fsn1 region).
s3.endpoint¶
Type: string (URL with protocol)
Default: https://fsn1.your-objectstorage.com
Required: Yes
Format: https://{region}.your-objectstorage.com
S3 endpoint URL for production buckets (artifacts, uploads, LFS, pages, registry, cache).
Available Hetzner S3 regions:
fsn1 - Falkenstein, Germany (default, same region as cluster)
nbg1 - Nuremberg, Germany
hel1 - Helsinki, Finland
Why fsn1:
Same datacenter region as cluster (low latency)
No cross-region egress fees
Example:
s3:
  endpoint: https://fsn1.your-objectstorage.com  # Production region
Multi-region setup (future):
s3:
  endpoint: https://fsn1.your-objectstorage.com
  backupEndpoint: https://hel1.your-objectstorage.com  # DR region
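Since endpoints follow the https://{region}.your-objectstorage.com format, they can be derived from the region code. A hypothetical sketch (not deployment code), restricted to the three Hetzner regions listed above:

```typescript
// Hetzner S3 region codes from this section.
type HetznerRegion = 'fsn1' | 'nbg1' | 'hel1';

// Build the endpoint URL per the documented format
// https://{region}.your-objectstorage.com
function s3Endpoint(region: HetznerRegion): string {
  return `https://${region}.your-objectstorage.com`;
}
```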
s3.region¶
Type: string
Default: fsn1
Required: Yes
Values: fsn1, nbg1, hel1 (Hetzner region codes)
S3 region parameter (must match endpoint).
Example:
s3:
  region: fsn1  # Matches endpoint: https://fsn1.your-objectstorage.com
s3.backupEndpoint¶
Type: string (URL with protocol)
Default: https://fsn1.your-objectstorage.com
Required: Yes
S3 endpoint for backup buckets (backups, postgresbackups).
Current setup: Same as production endpoint (fsn1).
Future improvement: Use different region for geographic redundancy.
Example:
s3:
  backupEndpoint: https://hel1.your-objectstorage.com  # DR region
s3.backupRegion¶
Type: string
Default: fsn1
Required: Yes
S3 region for backup buckets (must match backupEndpoint).
Example:
s3:
  backupRegion: hel1  # Matches backupEndpoint
s3.buckets¶
S3 bucket name configuration
All bucket names follow pattern: {purpose}-gitlabbda-kup6s
Why this pattern:
{purpose} - Descriptive (artifacts, uploads, etc.)
gitlabbda - Deployment identifier
kup6s - Cluster identifier (ensures global uniqueness)
Global uniqueness: Hetzner S3 bucket names are globally unique (like AWS). Generic names (“backup”, “artifacts”) will fail.
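The naming pattern can be expressed as a small helper; this is an illustrative sketch (the function name and defaults are assumptions), showing how all eight bucket names derive from one rule:

```typescript
// Build a bucket name following {purpose}-{deployment}-{cluster}.
// The cluster suffix keeps names globally unique on Hetzner S3.
function bucketName(
  purpose: string,
  deployment = 'gitlabbda',
  cluster = 'kup6s',
): string {
  return `${purpose}-${deployment}-${cluster}`;
}
```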
s3.buckets.artifacts¶
Type: string (bucket name)
Default: artifacts-gitlabbda-kup6s
Required: Yes
GitLab CI/CD artifacts storage.
Contents:
Build outputs (compiled binaries, test results)
Job logs
Coverage reports
Test artifacts
Lifecycle: Artifacts expire based on GitLab settings (default: 30 days).
s3.buckets.uploads¶
Type: string (bucket name)
Default: uploads-gitlabbda-kup6s
Required: Yes
User uploads and attachments.
Contents:
Issue attachments (images, PDFs)
Merge request comments (screenshots)
Wiki uploads
Lifecycle: Never expires (user data).
s3.buckets.lfs¶
Type: string (bucket name)
Default: lfs-gitlabbda-kup6s
Required: Yes
Git LFS (Large File Storage) objects.
Contents:
Large files tracked by git (videos, datasets, binaries)
LFS pointer files in git, actual files in S3
Lifecycle: Never expires (referenced by git).
s3.buckets.pages¶
Type: string (bucket name)
Default: pages-gitlabbda-kup6s
Required: Yes
GitLab Pages static sites.
Contents:
HTML, CSS, JavaScript files
Images, fonts, static assets
Lifecycle: Expires when project deleted or pages disabled.
s3.buckets.registry¶
Type: string (bucket name)
Default: registry-gitlabbda-kup6s
Required: Yes
Harbor container registry storage.
Contents:
OCI image layers
Image manifests
Image tags
Lifecycle: Manual (garbage collection via Harbor UI).
s3.buckets.backups¶
Type: string (bucket name)
Default: backups-gitlabbda-kup6s
Required: Yes
GitLab application backups (via Toolbox).
Contents:
Database dumps
Repository archives
Uploads backup
LFS backup
Lifecycle: Retention policy (e.g., keep 7 daily, 4 weekly, 3 monthly).
s3.buckets.postgresbackups¶
Type: string (bucket name)
Default: postgresbackups-gitlabbda-kup6s
Required: Yes
PostgreSQL CNPG backups (via Barman Cloud Plugin).
Contents:
WAL archives (continuous)
Base backups (daily)
PITR metadata
Lifecycle: Retention policy 30 days (configurable in CNPG Cluster).
s3.buckets.cache¶
Type: string (bucket name)
Default: cache-gitlabbda-kup6s
Required: Yes
GitLab Runner build cache.
Contents:
Dependency caches (npm, pip, maven)
Build caches (faster subsequent builds)
Lifecycle: Auto-expire old cache (LRU, configurable).
For complete S3 bucket details, see S3 Buckets Reference.
smtp¶
Email delivery configuration (Mailjet)
GitLab sends emails via SMTP (notifications, password resets).
smtp.host¶
Type: string (hostname)
Default: in-v3.mailjet.com
Required: Yes
SMTP server hostname.
Example:
smtp:
  host: in-v3.mailjet.com  # Mailjet
  # or
  host: smtp.gmail.com     # Gmail
  # or
  host: smtp.sendgrid.net  # SendGrid
smtp.port¶
Type: number
Default: 587
Required: Yes
Common values: 25 (plain), 587 (STARTTLS), 465 (SSL/TLS)
SMTP port.
Security:
Port 587 = STARTTLS (recommended)
Port 465 = Implicit TLS
Port 25 = Plain (not recommended)
smtp.domain¶
Type: string (DNS domain)
Default: bluedynamics.eu
Required: Yes
Email domain for HELO/EHLO command.
Example:
smtp:
  domain: example.com  # Match your email domain
smtp.from¶
Type: string (email address)
Default: gitlab@bluedynamics.eu
Required: Yes
Sender email address (FROM header).
Example:
smtp:
  from: gitlab@example.com
  # or
  from: noreply@example.com
smtp.replyTo¶
Type: string (email address)
Default: noreply@bluedynamics.eu
Required: Yes
Reply-To email address.
Example:
smtp:
  replyTo: noreply@example.com
SMTP credentials: Stored in application-secrets namespace (not in config.yaml). See Secrets Reference.
storage¶
Persistent storage configuration
storage.longhorn.storageClass¶
Type: string
Default: longhorn
Required: Yes
Longhorn storage class name.
Available classes:
longhorn-redundant-app - 1 replica (for apps with built-in replication)
longhorn - 2 replicas (default)
longhorn-ha - 3 replicas (mission-critical)
Why longhorn as default:
Redis: Single instance, needs storage-level redundancy (2 replicas)
Example:
storage:
  longhorn:
    storageClass: longhorn  # 2 replicas
For storage class selection, see Storage Architecture.
storage.longhorn.repositories¶
Type: string (Kubernetes quantity)
Default: 20Gi
Required: Yes
Note: Currently UNUSED (Gitaly uses hcloud-volumes, not Longhorn)
Reserved for future use if switching Gitaly to Longhorn.
Current Gitaly storage: 20Gi Hetzner Cloud Volume (configured in GitLab Helm chart).
storage.longhorn.redis¶
Type: string (Kubernetes quantity)
Default: 10Gi
Required: Yes
Redis PVC size.
Contents:
AOF (append-only file)
Snapshots
Job queue data
Sizing guidance:
2-5 users: 10Gi (sufficient)
10-20 users: 20Gi
50+ users: 50Gi
Resizing: Can increase (expand PVC), cannot decrease.
storage.longhorn.postgresql¶
Type: string (Kubernetes quantity)
Default: 10Gi
Required: Yes
PostgreSQL PVC size (per instance).
Actual storage used:
2 CNPG instances × 10Gi = 20Gi total (logical)
Longhorn longhorn-redundant-app (1 replica) = 20Gi cluster storage
Contents:
GitLab database (gitlab)
Harbor database (harbor)
WAL segments
Indexes
Sizing guidance:
2-5 users, 10-50 projects: 10Gi per instance
10-20 users, 100-500 projects: 20Gi per instance
50+ users, 1000+ projects: 50Gi+ per instance
For resource sizing, see Resource Requirements Reference.
certManager¶
TLS certificate configuration
certManager.clusterIssuer¶
Type: string
Default: letsencrypt-cluster-issuer
Required: Yes
Cluster-wide cert-manager ClusterIssuer name.
Pre-requisite: ClusterIssuer must exist in cluster (managed by cluster infrastructure).
What it does:
Ingress annotations reference this issuer
Cert-manager automatically provisions Let’s Encrypt certificates
TLS secrets created in deployment namespace
Example:
certManager:
  clusterIssuer: letsencrypt-cluster-issuer  # Production LE
  # or
  clusterIssuer: letsencrypt-staging         # LE staging (testing)
For cert-manager details, see Main Cluster Docs: cert-manager.
replicas¶
Pod replica count configuration
All replica fields are number type, minimum 1.
replicas.webservice¶
Type: number
Default: 2
Minimum: 2 (HA requirement)
GitLab Webservice replicas.
Why 2:
High availability (zero-downtime deployments)
Load balancing (distribute HTTP requests)
Scaling:
2-5 users: 2 replicas
10-20 users: 3 replicas
50+ users: 4+ replicas
replicas.workhorse¶
Type: number
Default: 2
Minimum: 2
GitLab Workhorse replicas (should match webservice).
replicas.shell¶
Type: number
Default: 2
Minimum: 1
GitLab Shell replicas.
Why 2: High availability for SSH git operations.
Scaling: Rarely needs more than 2 (SSH traffic low).
replicas.sidekiq¶
Type: number
Default: 1
Minimum: 1
Sidekiq background job processors.
Scaling:
2-5 users: 1 replica (sufficient)
10-20 users: 2 replicas (if job queue depth > 100)
50+ users: 3+ replicas
replicas.gitaly¶
Type: number
Default: 1
Fixed: 1 (stateful, single PVC)
Gitaly git server.
Why always 1: Stateful pod bound to single PVC (cannot scale horizontally without sharding).
Scaling: Use Gitaly Cluster (Praefect) for horizontal scaling (future, 50+ users).
replicas.pages¶
Type: number
Default: 1
Minimum: 1
GitLab Pages server.
Scaling: Rarely needs more than 1 (static files from S3, low CPU).
replicas.redis¶
Type: number
Default: 1
Fixed: 1 (until Redis Sentinel implemented)
Redis instance.
Why 1: Single instance sufficient for 2-5 users.
Future: Redis Sentinel (3 instances) for HA when scaling.
replicas.runners¶
Type: number
Default: 3
Minimum: 1
GitLab Runner instances.
Why 3: Can run 3 concurrent CI jobs.
Scaling:
Light CI usage (1-5 jobs/day): 1 runner
Moderate (10-50 jobs/day): 3 runners (default)
Heavy (100+ jobs/day): 5+ runners
For scaling guidelines, see GitLab Components: Scaling Considerations.
Configuration Examples¶
Minimal Production Configuration¶
deployment:
  isFreshInstall: false  # After initial deployment
versions:
  gitlab: v18.5.1
  gitlabRunner: alpine-v18.5.0
  harbor: v2.14.0
namespace: gitlabbda
domains:
  gitlab: gitlab.example.com
  pages: pages.example.com
  harbor: registry.example.com
s3:
  endpoint: https://fsn1.your-objectstorage.com
  region: fsn1
  backupEndpoint: https://hel1.your-objectstorage.com  # DR region
  backupRegion: hel1
  buckets:
    artifacts: artifacts-gitlabbda-example
    uploads: uploads-gitlabbda-example
    lfs: lfs-gitlabbda-example
    pages: pages-gitlabbda-example
    registry: registry-gitlabbda-example
    backups: backups-gitlabbda-example
    postgresbackups: postgresbackups-gitlabbda-example
    cache: cache-gitlabbda-example
smtp:
  host: in-v3.mailjet.com
  port: 587
  domain: example.com
  from: gitlab@example.com
  replyTo: noreply@example.com
storage:
  longhorn:
    storageClass: longhorn
    repositories: 20Gi
    redis: 10Gi
    postgresql: 10Gi
certManager:
  clusterIssuer: letsencrypt-cluster-issuer
replicas:
  webservice: 2
  workhorse: 2
  shell: 2
  sidekiq: 1
  gitaly: 1
  pages: 1
  redis: 1
  runners: 3
Scaled Configuration (10-20 users)¶
# Increased resources for 10-20 concurrent users
storage:
  longhorn:
    storageClass: longhorn
    repositories: 20Gi  # Unchanged (Gitaly on Hetzner Volumes)
    redis: 20Gi         # 2× (more cache)
    postgresql: 20Gi    # 2× (larger database)
replicas:
  webservice: 3  # +1 for more HTTP capacity
  workhorse: 3   # Match webservice
  shell: 2       # Unchanged (SSH traffic still low)
  sidekiq: 2     # +1 for job processing
  gitaly: 1      # Unchanged (stateful)
  pages: 1       # Unchanged (static files)
  redis: 1       # Unchanged (single instance until Sentinel)
  runners: 5     # +2 for more concurrent CI jobs
Environment Variable Overrides¶
All configuration values can be overridden by environment variables:
# Override GitLab version
export GITLAB_VERSION=v18.6.0
# Override namespace
export NAMESPACE=gitlab-production
# Override domain
export GITLAB_DOMAIN=gitlab.production.example.com
Variable naming convention: {SECTION}_{FIELD} in UPPERCASE.
For complete environment variable reference, see Environment Variables Reference.
Validation¶
Configuration is validated at build time (TypeScript type checking):
// charts/gitlab-chart.ts
export interface GitLabConfig {
  deployment: {
    isFreshInstall: boolean; // Type checked
  };
  versions: {
    gitlab: string; // Must be string
    // ...
  };
  // ...
}
Invalid configuration fails build:
npm run build
# Error: Type 'number' is not assignable to type 'string'
# versions.gitlab: 18 ← Invalid (must be string)
For CDK8S type safety, see CDK8S Approach.
Summary¶
Configuration file: config.yaml
Key sections:
deployment - Deployment mode (fresh install vs upgrade)
versions - Component versions (GitLab, Runner, Harbor)
domains - Public DNS (gitlab, pages, harbor)
s3 - Object storage (8 buckets, endpoints, regions)
storage - Persistent storage (Longhorn PVC sizes)
replicas - Pod replica counts (webservice, sidekiq, runners, etc.)
Related documentation:
S3 Buckets Reference - Detailed bucket specifications
Environment Variables Reference - Override configuration via env vars
Resource Requirements Reference - Sizing guidance
Version Compatibility Reference - Component version matrix