Create a PostgreSQL Cluster¶
Create a new PostgreSQL cluster using CloudNativePG with automated backups to S3.
Prerequisites¶
- CloudNativePG operator installed (check: kubectl get pods -n cnpg-system)
- S3 credentials secret configured in your namespace (created in Step 2)
- Crossplane S3 bucket created for backups (created in Step 1)
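To confirm the shared pieces are in place before starting, a quick check (the ClusterSecretStore and ProviderConfig names below are the ones this guide uses later; adjust if yours differ):
# CloudNativePG operator pods should be Running
kubectl get pods -n cnpg-system
# Shared credentials store consumed in Step 2
kubectl get clustersecretstore hetzner-s3-cluster-store
# Crossplane provider config referenced in Step 1
kubectl get providerconfigs.aws.upbound.io hetzner-s3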
Step 1: Create S3 Bucket for Backups¶
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: myapp-postgres-backup-kup6s
  namespace: myapp
spec:
  deletionPolicy: Delete
  managementPolicies:
    - Observe
    - Create
    - Delete
  forProvider:
    region: fsn1
  providerConfigRef:
    name: hetzner-s3
Apply the bucket:
kubectl apply -f bucket.yaml
Verify bucket is ready:
kubectl get bucket myapp-postgres-backup-kup6s -n myapp
# Should show SYNCED=True, READY=True
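If SYNCED or READY stays False, the provider's events usually name the cause (bad credentials, bucket name collision, wrong region):
kubectl describe bucket myapp-postgres-backup-kup6s -n myapp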
Step 2: Create S3 Credentials Secret¶
Use an ExternalSecret to replicate the shared S3 credentials into your namespace:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-postgres-s3-creds
  namespace: myapp
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: hetzner-s3-cluster-store
    kind: ClusterSecretStore
  target:
    name: myapp-postgres-s3-creds
    template:
      engineVersion: v2
      data:
        ACCESS_KEY_ID: "{{ .access_key_id }}"
        SECRET_ACCESS_KEY: "{{ .secret_access_key }}"
  dataFrom:
    - extract:
        key: hetzner-s3-credentials
Apply and verify:
kubectl apply -f externalsecret.yaml
kubectl get secret myapp-postgres-s3-creds -n myapp
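To confirm the templated keys actually landed in the secret, decode one of them:
kubectl get secret myapp-postgres-s3-creds -n myapp \
  -o jsonpath='{.data.ACCESS_KEY_ID}' | base64 -d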
Step 3: Create PostgreSQL Cluster¶
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: myapp-postgres
  namespace: myapp
spec:
  instances: 2  # One primary + one streaming replica
  enableSuperuserAccess: true  # Generates the myapp-postgres-superuser secret used in Step 6
  postgresql:
    parameters:
      max_connections: "200"
      shared_buffers: "256MB"
  bootstrap:
    initdb:
      database: myapp
      owner: myapp  # CNPG generates the myapp-postgres-app secret for this user
  storage:
    size: 10Gi
    storageClass: longhorn-redundant-app  # 1 replica (app has replication)
  backup:
    barmanObjectStore:
      destinationPath: s3://myapp-postgres-backup-kup6s/
      endpointURL: https://fsn1.your-objectstorage.com
      s3Credentials:
        accessKeyId:
          name: myapp-postgres-s3-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: myapp-postgres-s3-creds
          key: SECRET_ACCESS_KEY
    retentionPolicy: "30d"
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
  affinity:
    podAntiAffinityType: preferred  # Spread instances across nodes
  monitoring:
    enablePodMonitor: true  # Prometheus metrics
Apply the cluster:
kubectl apply -f cluster.yaml
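Initial bootstrap takes a minute or two. You can watch the cluster converge, and if you have the cnpg kubectl plugin installed (not covered here), its status view is more detailed:
kubectl get cluster myapp-postgres -n myapp -w
# With the optional cnpg plugin:
kubectl cnpg status myapp-postgres -n myapp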
Step 4: Create Connection Pooler (Optional)¶
Deploy a PgBouncer pooler in front of the cluster to reuse server connections:
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: myapp-postgres-pooler
  namespace: myapp
spec:
  cluster:
    name: myapp-postgres
  instances: 2
  type: rw  # Read-write pooler
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"
      default_pool_size: "25"
Apply the pooler:
kubectl apply -f pooler.yaml
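The operator exposes the pooler through a Service that shares the Pooler's name (myapp-postgres-pooler); that host appears in the connection strings in Step 6. Note the trade-off in poolMode: session holds one server connection per client for the whole session, while transaction multiplexes clients far more aggressively but breaks session-level features such as prepared statements and advisory locks.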
Step 5: Verify Cluster Status¶
Check cluster is healthy:
# Check cluster status
kubectl get cluster myapp-postgres -n myapp
# Expected output:
# NAME             AGE   INSTANCES   READY   STATUS                     PRIMARY
# myapp-postgres   2m    2           2       Cluster in healthy state   myapp-postgres-1
# Check pods
kubectl get pods -n myapp -l cnpg.io/cluster=myapp-postgres
# Check first backup completed
kubectl get backup -n myapp
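Note that barmanObjectStore on its own enables WAL archiving and sets the backup destination; base backups only appear in kubectl get backup once a Backup or ScheduledBackup object exists. A minimal ScheduledBackup sketch (the name and schedule are illustrative):
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: myapp-postgres-daily
  namespace: myapp
spec:
  schedule: "0 0 2 * * *"  # Six-field cron (seconds first): daily at 02:00
  cluster:
    name: myapp-postgres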
Step 6: Connect to PostgreSQL¶
Get connection credentials:
# Get superuser password
kubectl get secret myapp-postgres-superuser -n myapp -o jsonpath='{.data.password}' | base64 -d
# Get app user password
kubectl get secret myapp-postgres-app -n myapp -o jsonpath='{.data.password}' | base64 -d
Connection strings:
# Direct connection to primary
postgresql://myapp:<password>@myapp-postgres-rw.myapp.svc.cluster.local:5432/myapp
# Connection via pooler (recommended; the Service shares the Pooler's name)
postgresql://myapp:<password>@myapp-postgres-pooler.myapp.svc.cluster.local:5432/myapp
# Read-only connection
postgresql://myapp:<password>@myapp-postgres-ro.myapp.svc.cluster.local:5432/myapp
Test connection:
kubectl run -it --rm psql --image=postgres:16 --restart=Never -- \
  psql postgresql://myapp:<password>@myapp-postgres-pooler.myapp.svc:5432/myapp
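For applications, you usually don't need to paste passwords into connection strings at all: CNPG's auto-generated myapp-postgres-app secret carries username, password, host, and a ready-made uri key (which points at the -rw service, not the pooler). A minimal Deployment env sketch:
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: myapp-postgres-app
        key: uri  # Full postgresql:// connection string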
Troubleshooting¶
Cluster Not Starting¶
Check cluster events:
kubectl describe cluster myapp-postgres -n myapp
Check pod logs:
kubectl logs myapp-postgres-1 -n myapp
Backup Failures¶
CNPG runs base backups through the instance manager, so failures surface in the Backup resource status and the target instance's logs:
kubectl describe backup -n myapp
kubectl logs myapp-postgres-1 -n myapp | grep -i barman
Verify S3 credentials:
kubectl get secret myapp-postgres-s3-creds -n myapp -o yaml
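If the credentials look correct but backups still fail, test S3 access directly; a sketch using the amazon/aws-cli image against the same bucket and endpoint:
kubectl run -it --rm s3check -n myapp --restart=Never --image=amazon/aws-cli \
  --env="AWS_ACCESS_KEY_ID=$(kubectl get secret myapp-postgres-s3-creds -n myapp -o jsonpath='{.data.ACCESS_KEY_ID}' | base64 -d)" \
  --env="AWS_SECRET_ACCESS_KEY=$(kubectl get secret myapp-postgres-s3-creds -n myapp -o jsonpath='{.data.SECRET_ACCESS_KEY}' | base64 -d)" \
  -- s3 ls s3://myapp-postgres-backup-kup6s/ --endpoint-url https://fsn1.your-objectstorage.com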
Connection Issues¶
Verify service exists:
kubectl get svc -n myapp -l cnpg.io/cluster=myapp-postgres
Check pooler status:
kubectl get pooler myapp-postgres-pooler -n myapp
kubectl logs -n myapp -l cnpg.io/pooler=myapp-postgres-pooler