Endpoints & Ports¶
Overview¶
This document lists all network endpoints and ports used by GitLab BDA components.
Access levels:
- External - Accessible from the internet via ingress
- Internal - Accessible only within the Kubernetes cluster
- Node - Accessible via node IP (NodePort)
External Endpoints (HTTPS)¶
All external endpoints use TLS (Let’s Encrypt certificates via cert-manager).
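To verify the served certificate (optional check; assumes openssl is available on the workstation):
echo | openssl s_client -connect gitlab.staging.bluedynamics.eu:443 \
  -servername gitlab.staging.bluedynamics.eu 2>/dev/null \
  | openssl x509 -noout -issuer -dates
# Expected: a Let's Encrypt issuer and a valid notBefore/notAfter window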
GitLab UI/API¶
URL: https://gitlab.staging.bluedynamics.eu
Service: gitlab-webservice
Port: 8080 (internal) → 443 (external)
Ingress: Traefik
TLS Secret: gitlab-tls
Endpoints:
- `/` - Web UI
- `/api/v4/` - REST API
- `/oauth/` - OAuth2 endpoints (for Harbor)
- `/-/health` - Health check endpoint
- `/-/readiness` - Readiness probe
- `/-/liveness` - Liveness probe
Example:
curl -I https://gitlab.staging.bluedynamics.eu/-/health
# Expected: HTTP/2 200
GitLab Pages¶
URL: https://*.pages.staging.bluedynamics.eu
Service: gitlab-pages
Port: 8090 (HTTP internal) → 443 (external)
Ingress: Traefik (wildcard ingress)
TLS Secret: gitlab-pages-tls (wildcard certificate)
Example:
curl -I https://myproject.pages.staging.bluedynamics.eu
# Expected: HTTP/2 200 (if page exists)
Harbor UI¶
URL: https://registry.staging.bluedynamics.eu
Service: harbor-portal (UI), harbor-core (API)
Port: 8080 (portal), 8080 (core) → 443 (external)
Ingress: Traefik
TLS Secret: harbor-tls
Endpoints:
- `/` - Web UI
- `/api/v2.0/` - Harbor API v2
- `/c/oidc/callback` - OAuth2 callback (GitLab SSO)
- `/service/token` - Docker registry token endpoint
Example:
curl -I https://registry.staging.bluedynamics.eu/api/v2.0/systeminfo
# Expected: HTTP/2 200 (requires auth)
Harbor Registry (Docker)¶
URL: https://registry.staging.bluedynamics.eu
Service: harbor-registry
Port: 5000 (internal) → 443 (external)
Ingress: Traefik (same as Harbor UI, path-based routing)
TLS Secret: harbor-tls
Docker Registry API v2 endpoints:
- `/v2/` - API base (version check)
- `/v2/<name>/manifests/<tag>` - Image manifests
- `/v2/<name>/blobs/<digest>` - Image layers
Example:
# Check registry API
curl -I https://registry.staging.bluedynamics.eu/v2/
# Expected: HTTP/2 401 (unauthorized, normal for Docker registry)
# Docker login
docker login registry.staging.bluedynamics.eu
# Username: <gitlab-username>
# Password: <gitlab-personal-access-token>
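Example push (sketch; the project path mygroup/myimage is hypothetical and the Harbor project must already exist):
# Tag and push a local image to Harbor
docker tag myimage:latest registry.staging.bluedynamics.eu/mygroup/myimage:latest
docker push registry.staging.bluedynamics.eu/mygroup/myimage:latest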
GitLab SSH (Git operations)¶
URL: ssh://gitlab.staging.bluedynamics.eu:22
Service: gitlab-shell
Port: 22 (SSH) → 22 (external)
Ingress: Traefik TCP route (IngressRouteTCP)
Protocol: SSH (not HTTPS)
Usage:
# Clone via SSH
git clone git@gitlab.staging.bluedynamics.eu:group/project.git
# Test SSH connection
ssh -T git@gitlab.staging.bluedynamics.eu
# Expected: Welcome to GitLab, @username!
Internal Endpoints (Cluster-only)¶
GitLab Webservice (Internal API)¶
Service: gitlab-webservice
Port: 8080 (HTTP)
Access: Other GitLab components only
Used by:
GitLab Shell (SSH auth)
GitLab Pages (project lookup)
GitLab Runner (CI/CD job API)
Example:
kubectl run -it curl-test --image=curlimages/curl --rm \
-- curl -I http://gitlab-webservice.gitlabbda.svc:8080/-/health
GitLab Gitaly (Git RPC)¶
Service: gitlab-gitaly
Port: 8075 (gRPC)
Access: GitLab Webservice, Sidekiq, Toolbox only
Protocol: gRPC (Git operations over RPC)
Used by:
Webservice (git clone, diff, blame)
Sidekiq (repository maintenance)
Toolbox (backup operations)
Example (test Gitaly health):
kubectl exec -it gitlab-gitaly-0 -n gitlabbda -- \
grpc_health_probe -addr=localhost:8075
PostgreSQL Pooler (PgBouncer)¶
Service: gitlab-postgres-pooler
Port: 5432 (PostgreSQL)
Access: GitLab, Harbor only
Connection string:
postgresql://app:<password>@gitlab-postgres-pooler.gitlabbda.svc:5432/gitlab
Used by:
GitLab Webservice (application queries)
GitLab Sidekiq (background jobs)
GitLab Toolbox (migrations, backups)
Harbor Core (Harbor metadata)
Example:
kubectl exec -it gitlab-postgres-1 -n gitlabbda -- \
psql -h gitlab-postgres-pooler -U app -d gitlab -c '\conninfo'
PostgreSQL Primary (Direct)¶
Service: gitlab-postgres-rw (read-write)
Port: 5432 (PostgreSQL)
Access: CNPG operator, pooler only (NOT applications)
Why not direct access?
- Connection pooling is required (prevents connection exhaustion)
- Applications MUST use the pooler (`gitlab-postgres-pooler`)
Example:
kubectl exec -it gitlab-postgres-1 -n gitlabbda -- \
psql -h gitlab-postgres-rw -U postgres -c 'SELECT version()'
Redis¶
Service: redis
Port: 6379 (Redis)
Access: GitLab, Harbor only
Connection string:
redis://redis.gitlabbda.svc:6379/<db>
Databases:
DB 0 - GitLab cache
DB 1 - GitLab job queue
DB 2 - Harbor metadata + registry cache
Example:
kubectl exec -it redis-0 -n gitlabbda -- redis-cli ping
# Expected: PONG
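To see which logical databases currently hold keys (standard INFO keyspace command):
kubectl exec -it redis-0 -n gitlabbda -- redis-cli info keyspace
# Expected: one line per non-empty DB, e.g. db0:keys=...,expires=...,avg_ttl=...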
Monitoring Endpoints (Metrics)¶
GitLab Webservice Metrics¶
Service: gitlab-webservice
Port: 8083 (HTTP)
Path: /metrics
Format: Prometheus
Metrics:
- `http_request_duration_seconds` - Request latency
- `http_requests_total` - Request count by status
- `ruby_gc_duration_seconds_total` - Ruby GC time
Example:
kubectl exec -it deploy/gitlab-webservice -n gitlabbda -- \
curl http://localhost:8083/metrics | grep http_requests_total
GitLab Sidekiq Metrics¶
Service: gitlab-sidekiq
Port: 3807 (HTTP)
Path: /metrics
Metrics:
- `sidekiq_jobs_processed_total` - Jobs processed
- `sidekiq_jobs_failed_total` - Jobs failed
- `sidekiq_queue_size` - Queue depth
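Example (sketch via a throwaway curl pod; assumes the gitlab-sidekiq service exposes metrics port 3807):
kubectl run -it curl-test --image=curlimages/curl --rm \
  -- curl -s http://gitlab-sidekiq.gitlabbda.svc:3807/metrics | grep sidekiq_jobs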
GitLab Gitaly Metrics¶
Service: gitlab-gitaly
Port: 9236 (HTTP)
Path: /metrics
Metrics:
- `gitaly_service_client_requests_total` - Git operations
- `gitaly_repository_size_bytes` - Repository storage
- `gitaly_spawn_timeout_count` - Git command timeouts
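Example (sketch via a throwaway curl pod; assumes the metrics port is exposed on the gitlab-gitaly service):
kubectl run -it curl-test --image=curlimages/curl --rm \
  -- curl -s http://gitlab-gitaly.gitlabbda.svc:9236/metrics | grep gitaly_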
PostgreSQL Metrics (CNPG)¶
Service: gitlab-postgres-1 (and -2)
Port: 9187 (HTTP)
Path: /metrics
Metrics:
- `pg_up` - PostgreSQL instance up/down
- `pg_stat_database_*` - Database statistics
- `pg_replication_lag_seconds` - Replication lag
Example:
kubectl exec -it gitlab-postgres-1 -n gitlabbda -- \
curl http://localhost:9187/metrics | grep pg_up
Port Reference (All Components)¶
GitLab Components¶
| Component | Service Name | Port | Protocol | Purpose |
|---|---|---|---|---|
| Webservice | `gitlab-webservice` | 8080 | HTTP | Web UI + API |
| | | 8083 | HTTP | Metrics |
| Workhorse | (sidecar) | 8181 | HTTP | Git HTTP operations |
| Sidekiq | `gitlab-sidekiq` | 3807 | HTTP | Metrics |
| Gitaly | `gitlab-gitaly` | 8075 | gRPC | Git RPC |
| | | 9236 | HTTP | Metrics |
| Shell | `gitlab-shell` | 22 | SSH | Git SSH operations |
| Pages | `gitlab-pages` | 8090 | HTTP | Static site serving |
| Toolbox | (no service) | - | - | Admin tasks |
| Runner | (no service) | - | - | CI/CD job executor |
External Dependencies¶
| Component | Service Name | Port | Protocol | Purpose |
|---|---|---|---|---|
| PostgreSQL | `gitlab-postgres-pooler` | 5432 | PostgreSQL | Database (pooled) |
| | `gitlab-postgres-rw` | 5432 | PostgreSQL | Database (direct) |
| | `gitlab-postgres-1` / `-2` | 9187 | HTTP | Metrics |
| Redis | `redis` | 6379 | Redis | Cache + job queue |
Harbor Components¶
| Component | Service Name | Port | Protocol | Purpose |
|---|---|---|---|---|
| Core | `harbor-core` | 8080 | HTTP | Harbor API |
| Registry | `harbor-registry` | 5000 | HTTP | Docker registry |
| JobService | `harbor-jobservice` | 8080 | HTTP | Background jobs |
| Portal | `harbor-portal` | 8080 | HTTP | Web UI |
Ingress Configuration¶
Traefik HTTP/HTTPS Ingresses¶
# GitLab UI/API
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab
spec:
  tls:
    - hosts: [gitlab.staging.bluedynamics.eu]
      secretName: gitlab-tls
  rules:
    - host: gitlab.staging.bluedynamics.eu
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitlab-webservice
                port:
                  number: 8080
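To verify the rendered Ingress and its TLS configuration (assumes it is deployed in the gitlabbda namespace):
kubectl get ingress gitlab -n gitlabbda
kubectl describe ingress gitlab -n gitlabbda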
Traefik TCP Route (SSH)¶
# GitLab SSH
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: gitlab-ssh
spec:
  entryPoints:
    - ssh  # Port 22
  routes:
    - match: HostSNI(`*`)
      services:
        - name: gitlab-shell
          port: 22
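To verify the TCP route and external reachability of port 22 (sketch; nc run from a workstation):
kubectl get ingressroutetcp -A
nc -zv gitlab.staging.bluedynamics.eu 22
# Expected: a successful TCP connection to port 22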
Network Policies (Future)¶
Current: No NetworkPolicies (all pods can communicate)
Future: Implement NetworkPolicies for defense-in-depth
Example: Redis Access Control¶
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-ingress
spec:
  podSelector:
    matchLabels:
      app: redis
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/part-of: gitlab
        - podSelector:
            matchLabels:
              app.kubernetes.io/part-of: harbor
      ports:
        - protocol: TCP
          port: 6379
Effect: Only GitLab and Harbor pods can connect to Redis.
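To sanity-check enforcement once the policy is applied (sketch; assumes a CNI with NetworkPolicy support, and that the test pod carries neither part-of label):
kubectl describe networkpolicy redis-ingress -n gitlabbda
# A pod without the required labels should NOT reach Redis
kubectl run -it np-test --image=curlimages/curl --rm \
  -- curl -sv --max-time 3 telnet://redis.gitlabbda.svc:6379
# Expected: timeout (curl exit code 28) once the policy is enforced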
Firewall Rules (Cluster-level)¶
Hetzner Cloud Firewall (applied to all cluster nodes):
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 22 | TCP | 0.0.0.0/0 | All nodes | SSH access |
| 80 | TCP | 0.0.0.0/0 | Load balancer | HTTP (redirect to HTTPS) |
| 443 | TCP | 0.0.0.0/0 | Load balancer | HTTPS |
| 6443 | TCP | Authorized IPs | Control plane | Kubernetes API |
| 10250 | TCP | Cluster nodes | All nodes | Kubelet API |
For firewall configuration, see kube-hetzner/kube.tf.
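The applied rules can be reviewed with the hcloud CLI (assumes the CLI is installed and HCLOUD_TOKEN is set; the firewall name depends on the kube-hetzner configuration):
hcloud firewall list
hcloud firewall describe <firewall-name>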
Service Discovery (DNS)¶
Kubernetes Service DNS¶
Format: <service-name>.<namespace>.svc.cluster.local
Examples:
gitlab-webservice.gitlabbda.svc.cluster.local → 10.43.x.x
gitlab-postgres-pooler.gitlabbda.svc.cluster.local → 10.43.x.x
redis.gitlabbda.svc.cluster.local → 10.43.x.x
Short form (within same namespace):
gitlab-webservice → Resolves to full DNS
redis → Resolves to full DNS
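To resolve a service name from inside the cluster (sketch via a throwaway busybox pod):
kubectl run -it dns-test --image=busybox --rm \
  -- nslookup gitlab-webservice.gitlabbda.svc.cluster.local
# Expected: a ClusterIP in the 10.43.x.x range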
External DNS¶
Format: <subdomain>.<domain>
Examples:
gitlab.staging.bluedynamics.eu → <load-balancer-ip>
registry.staging.bluedynamics.eu → <load-balancer-ip>
*.pages.staging.bluedynamics.eu → <load-balancer-ip>
DNS records (managed externally):
- A/AAAA records point to the Hetzner Load Balancer IP
- cert-manager obtains Let's Encrypt certificates via the DNS-01 or HTTP-01 challenge
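To confirm that the public records resolve to the load balancer (run from any workstation with dig):
dig +short gitlab.staging.bluedynamics.eu
dig +short registry.staging.bluedynamics.eu
dig +short test.pages.staging.bluedynamics.eu  # any hostname matches the wildcard record
# Expected: all three return the same load-balancer IP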
Health Checks¶
GitLab Health Endpoints¶
# Liveness (is GitLab running?)
curl http://gitlab-webservice:8080/-/liveness
# Expected: {"status":"ok"}
# Readiness (is GitLab ready to serve traffic?)
curl http://gitlab-webservice:8080/-/readiness
# Expected: {"status":"ok"} or {"status":"failed","message":"..."}
# Health (detailed component status)
curl http://gitlab-webservice:8080/-/health
# Expected: {"status":"ok"}
Harbor Health Endpoints¶
# Harbor Core health
curl http://harbor-core:8080/api/v2.0/health
# Expected: {"status":"healthy","components":[...]}
# Registry health
curl http://harbor-registry:5000/
# Expected: {} (empty JSON response)
Database Health¶
# PostgreSQL primary
kubectl exec -it gitlab-postgres-1 -n gitlabbda -- \
psql -U postgres -c 'SELECT 1'
# Expected: 1
# Redis
kubectl exec -it redis-0 -n gitlabbda -- redis-cli ping
# Expected: PONG
Port Conflicts (Troubleshooting)¶
Check Port Bindings¶
# Check which service is listening on port
kubectl get svc -n gitlabbda -o wide
# Check pod ports
kubectl get pods -n gitlabbda -o json \
| jq '.items[].spec.containers[].ports'
Common Port Conflicts¶
Issue: Pod crashes with “address already in use”
Diagnosis:
kubectl logs <pod-name> -n gitlabbda | grep "address already in use"
Solution: Check for duplicate port definitions in deployment specs.
Summary¶
External endpoints (HTTPS):
- `https://gitlab.staging.bluedynamics.eu` - GitLab UI/API
- `https://*.pages.staging.bluedynamics.eu` - GitLab Pages
- `https://registry.staging.bluedynamics.eu` - Harbor UI/Registry
- `ssh://gitlab.staging.bluedynamics.eu:22` - GitLab SSH
Internal services:
- `gitlab-webservice:8080` - GitLab API (cluster-only)
- `gitlab-postgres-pooler:5432` - PostgreSQL (applications)
- `redis:6379` - Redis (cache + queue)
- `gitlab-gitaly:8075` - Gitaly (Git RPC)
Metrics endpoints:
- `<service>:*/metrics` - Prometheus metrics (various ports)
For troubleshooting, see Troubleshooting Reference. For kubectl commands, see Kubectl Commands Reference.