CLI Commands Reference¶
This reference provides all common commands for working with the kup6s.com infrastructure, organized by tool and workflow.
Quick Reference¶
| Task | Command |
|---|---|
| Apply infrastructure changes | `cd kube-hetzner && bash scripts/apply-and-configure-longhorn.sh` |
| Build ArgoCD applications | `cd argoapps && npm run build` |
| Update monitoring stack | `cd dp-infra/monitoring && npm run build` |
| Build documentation | `cd documentation && make docs` |
| Access cluster | `export KUBECONFIG=kube-hetzner/kup6s_kubeconfig.yaml` |
Cluster Infrastructure (kube-hetzner/)¶
Environment Setup¶
First-time setup:
cd kube-hetzner
# Create .env file from template
cp .env.example .env
# Edit .env and add your credentials
vim .env # or nano, emacs, code, etc.
CRITICAL: The .env file contains all credentials (S3, API tokens, etc.). It must be sourced before running OpenTofu commands.
OpenTofu Commands¶
Apply Infrastructure Changes¶
⚠️ MANDATORY PROCEDURE for kube.tf Changes:
cd kube-hetzner
bash scripts/apply-and-configure-longhorn.sh
NEVER run tofu apply directly for kube.tf changes!
Why this is mandatory:
Longhorn requires specific node storage configuration after cluster changes
Without proper configuration, Longhorn ends up with the wrong storage reservation
The script automatically configures all Longhorn nodes with the correct fixed 15GB storage reservation
A manual tofu apply leaves Longhorn in a broken state that requires manual fixes
What the script does:
Sources the .env file (bash export format)
Runs tofu apply -auto-approve
Waits for Longhorn to stabilize after infrastructure changes
Configures all Longhorn nodes with fixed 15GB storage reservation
Shows storage configuration summary for verification
Other OpenTofu Operations¶
Using Bash (recommended):
cd kube-hetzner
# Source environment variables
set -a # Enable auto-export
source .env
set +a # Disable auto-export
# Initialize OpenTofu
tofu init
# Plan infrastructure changes (view only, safe)
tofu plan
# View current state
tofu show
# Destroy infrastructure (DANGER: deletes cluster!)
tofu destroy
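The export dance above can be wrapped in a small helper so every command gets the same environment. This is a convenience sketch, not part of the repo; `with_env`, the `/tmp/example.env` path, and the `FOO` variable are illustrative.

```shell
# Hypothetical helper (not in the repo): source a dotenv file with
# auto-export enabled, then run the given command with that environment.
with_env() {
  envfile="$1"; shift
  set -a
  # shellcheck disable=SC1090
  source "$envfile"
  set +a
  "$@"
}

# Demo with a throwaway env file standing in for kube-hetzner/.env:
printf 'FOO=bar\n' > /tmp/example.env
with_env /tmp/example.env sh -c 'echo "FOO is $FOO"'
```

With the real file this becomes, e.g., `with_env .env tofu plan`.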
Using Fish Shell (with dotenv plugin):
cd kube-hetzner
# Initialize OpenTofu
fish -c "dotenv .env; and tofu init"
# Plan infrastructure changes (view only)
fish -c "dotenv .env; and tofu plan"
# View current state
fish -c "dotenv .env; and tofu show"
# Destroy infrastructure (use with caution!)
fish -c "dotenv .env; and tofu destroy"
Cluster Access¶
Configure kubectl:
# Set KUBECONFIG environment variable
export KUBECONFIG=kube-hetzner/kup6s_kubeconfig.yaml
# Verify access
kubectl get nodes
# Alternatively, use --kubeconfig flag
kubectl --kubeconfig=kube-hetzner/kup6s_kubeconfig.yaml get nodes
Make it permanent (optional):
# Add to ~/.bashrc or ~/.zshrc
echo 'export KUBECONFIG=~/kup6s/kube-hetzner/kup6s_kubeconfig.yaml' >> ~/.bashrc
source ~/.bashrc
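A small guard can catch a stale or missing kubeconfig path before kubectl produces a confusing connection error. The `require_kubeconfig` function and the `/tmp` demo path are assumptions for illustration, not part of the repo.

```shell
# Hypothetical guard: fail fast if KUBECONFIG is unset or the file is gone.
require_kubeconfig() {
  if [ ! -f "${KUBECONFIG:-}" ]; then
    echo "KUBECONFIG not set or file missing: ${KUBECONFIG:-<unset>}" >&2
    return 1
  fi
  echo "using kubeconfig: $KUBECONFIG"
}

# Demo with a throwaway file standing in for kup6s_kubeconfig.yaml:
touch /tmp/demo_kubeconfig.yaml
KUBECONFIG=/tmp/demo_kubeconfig.yaml require_kubeconfig
```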
SSH Access to Nodes¶
# SSH directly to any node
ssh root@<node-ip>
# Get node IPs
kubectl get nodes -o wide
# Example
ssh root@65.21.123.45
SSH Key: The private key used for cluster provisioning is specified in .env (TF_VAR_private_key).
ArgoCD Applications (argoapps/)¶
Prerequisites¶
Node Version Manager (nvm):
# Install nvm (if not already installed)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
# Source nvm (or restart shell)
source ~/.nvm/nvm.sh
# Install Node.js v22
nvm install v22.21.0
nvm use v22.21.0
IMPORTANT: This project requires nvm for proper npx support.
Development Workflow¶
All commands from argoapps/ directory:
cd argoapps
# Ensure correct Node version
source ~/.nvm/nvm.sh
nvm use v22.21.0 # Or whatever version is installed
# Install dependencies (first time or after package.json changes)
npm install
# Compile TypeScript to JavaScript
npm run compile
# Watch mode (auto-compile on changes, useful during development)
npm run watch
# Synthesize K8S manifests from charts
npm run synth
# Run tests
npm test
# Full build (compile + test + synth)
npm run build
# Deploy generated manifests to cluster
kubectl apply -f dist/
# Update K8S API types (after K8S version upgrade)
npm run import
# Upgrade CDK8S dependencies
npm run upgrade
Adding External Private Repositories¶
1. Create Deploy Token in external repository (GitLab/GitHub):
Scope: read_repository (read-only)
Save the username and token
2. Add repository credentials to ArgoCD:
# Store credentials in argoapps/secrets/ (git-ignored)
# Example: argoapps/secrets/myapp.sh
kubectl create secret generic myapp-repo-creds \
-n argocd \
--from-literal=type=git \
--from-literal=url=https://git.example.com/org/repo.git \
--from-literal=username=deploy-token-username \
--from-literal=password=deploy-token \
--dry-run=client -o yaml | kubectl apply -f -
kubectl label secret myapp-repo-creds \
-n argocd \
argocd.argoproj.io/secret-type=repository \
--overwrite
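For review before applying, the same repository Secret can be written as plain YAML without touching the cluster. A sketch with placeholder credentials; the `argocd.argoproj.io/secret-type=repository` label and the `type`/`url`/`username`/`password` keys match what the kubectl commands above create.

```shell
# Write the ArgoCD repository Secret as YAML locally (placeholder creds).
username="deploy-token-username"
password="deploy-token"
cat <<EOF > /tmp/myapp-repo-creds.yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://git.example.com/org/repo.git
  username: ${username}
  password: ${password}
EOF
echo "wrote /tmp/myapp-repo-creds.yaml"
```

Apply it with `kubectl apply -f /tmp/myapp-repo-creds.yaml`, but keep the real credentials in the git-ignored `argoapps/secrets/` directory as described above.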
3. Create ArgoCD Application in argoapps/apps/:
// apps/infra/myapp.ts
import { Chart, ChartProps } from 'cdk8s';
import { Construct } from 'constructs';
import * as argo from '@opencdk8s/cdk8s-argocd-resources';
export class AppChart extends Chart {
constructor(scope: Construct, id: string, props: ChartProps = {}) {
super(scope, id, props);
new argo.ArgoCdApplication(this, 'app', {
metadata: {
namespace: 'argocd',
},
spec: {
project: 'default',
source: {
repoURL: 'https://git.example.com/org/repo.git',
path: 'manifests', // Path to K8S manifests in external repo
targetRevision: 'main',
},
destination: {
server: 'https://kubernetes.default.svc',
namespace: 'myapp',
},
syncPolicy: {
automated: {
prune: true,
selfHeal: true,
},
syncOptions: [
'CreateNamespace=true',
'ApplyOutOfSyncOnly=true',
],
},
},
});
}
}
4. Register in apps/registry.ts and build:
npm run build
kubectl apply -f dist/myapp.k8s.yaml
External CDK8S Repository Setup¶
If the external repo uses CDK8S, ensure it outputs manifests to a directory ArgoCD can read:
# cdk8s.yaml in external repo
language: typescript
app: npx ts-node main.ts
output: manifests/ # ArgoCD reads from this directory
Important: Generated manifests MUST be committed to git (ArgoCD reads from git, not local builds).
Infrastructure Deployments (dp-infra/)¶
All infrastructure applications (monitoring, GitLab BDA, cnpg, mailu, etc.) use the same CDK8S development workflow.
Standard Workflow¶
Example: Working with Monitoring Stack (dp-infra/monitoring/):
cd dp-infra/monitoring
# Install dependencies (first time)
npm install
# Full build (compile + test + synth)
npm run build
# Generated manifests appear in manifests/monitoring.k8s.yaml
# Commit manifests to git (required for ArgoCD sync)
git add manifests/monitoring.k8s.yaml config.yaml
git commit -m "Update monitoring configuration"
git push
# ArgoCD automatically syncs (if auto-sync enabled)
# Or manually sync:
argocd app sync monitoring
Same workflow applies to:
dp-infra/cnpg/ - CloudNativePG operator
dp-infra/gitlabbda/ - GitLab BDA platform
dp-infra/mailu/ - Mailu mail server
Any other dp-infra subdirectory
Configuration Changes¶
1. Edit config.yaml for centralized settings:
# Example: Scale Prometheus in dp-infra/monitoring/config.yaml
resources:
prometheus:
requests:
cpu: 200m # Doubled
memory: 3Gi # Doubled
2. Regenerate manifests:
npm run build
3. Review changes:
git diff manifests/monitoring.k8s.yaml
4. Commit and deploy:
git add config.yaml manifests/
git commit -m "Scale Prometheus resources"
git push
# Manual sync (if auto-sync disabled):
argocd app sync monitoring
Construct Development¶
For advanced changes, edit TypeScript constructs in charts/ directory.
Example: Monitoring stack constructs (dp-infra/monitoring/charts/):
prometheus-construct.ts - Prometheus with Thanos sidecar
thanos-query-construct.ts - Thanos Query federation
thanos-store-construct.ts - Thanos Store gateway
thanos-compactor-construct.ts - Thanos Compactor
loki-construct.ts - Loki SimpleScalable deployment
grafana-construct.ts - Grafana with datasources
alloy-construct.ts - Alloy log collector
alertmanager-construct.ts - Alertmanager with routing
s3-buckets-construct.ts - S3 bucket provisioning
s3-secrets-construct.ts - ExternalSecret for S3 credentials
namespace-construct.ts - Namespace with labels
Testing Changes¶
# Run type checking (faster than full build)
npm run compile
# Run tests (if available)
npm test
# Synthesize without building (skips compile and test)
npm run synth
# Full build with all checks
npm run build
Documentation (documentation/)¶
All commands from documentation/ directory:
cd documentation
# Build HTML documentation
make docs
# Live-reload development server (http://localhost:8000)
make docs-live
# Clean generated files
make clean
Viewing Documentation:
Build: make docs → outputs to documentation/_build/html/
Live: make docs-live → serves http://localhost:8000 (auto-refreshes on changes)
ArgoCD CLI¶
Installation¶
# Download latest version
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
# Install binary
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64
Login¶
# Get admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# Login via CLI
argocd login argocd.ops.kup6s.net --username admin --password <password>
# Or use port-forward (if ingress not available)
kubectl port-forward svc/argocd-server -n argocd 8080:443
argocd login localhost:8080 --username admin --insecure
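The password extraction above works because Kubernetes stores Secret data base64-encoded, so the jsonpath output must be decoded. Here is the same round-trip on a stand-in value, not a real password:

```shell
# Encode a stand-in value the way Kubernetes stores Secret data,
# then decode it the way the admin-password command above does.
encoded=$(printf 'not-a-real-password' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```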
Common Operations¶
# List all applications
argocd app list
# Get application details
argocd app get <app-name>
# Sync application (deploy changes from git)
argocd app sync <app-name>
# Sync with prune (remove resources not in git)
argocd app sync <app-name> --prune
# Refresh application (check for drift)
argocd app refresh <app-name>
# Get application logs
argocd app logs <app-name>
# Delete application (remove from cluster)
argocd app delete <app-name>
# Set auto-sync
argocd app set <app-name> --sync-policy automated
# Disable auto-sync
argocd app set <app-name> --sync-policy none
kubectl Common Operations¶
Cluster Health¶
# Check cluster nodes
kubectl get nodes
# Check node details
kubectl get nodes -o wide
# Describe node
kubectl describe node <node-name>
# Check pod status across all namespaces
kubectl get pods -A
# Check failing pods
kubectl get pods -A | grep -vE 'Running|Completed'
# Check recent events
kubectl get events -A --sort-by='.lastTimestamp' | tail -20
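The failing-pods filter is worth making stricter than a bare `grep -v Running`, since Completed pods are not failures. Demonstrated here on canned kubectl-style output (the pod names are made up), so no cluster is needed:

```shell
# Canned `kubectl get pods -A` output (illustrative pod names).
sample='NAMESPACE   NAME    READY   STATUS             RESTARTS
default     web-1   1/1     Running            0
default     job-1   0/1     Completed          0
default     bad-1   0/1     CrashLoopBackOff   7'
# Drop the header, Running, and Completed rows; keep genuine failures.
failing=$(printf '%s\n' "$sample" | grep -vE 'STATUS|Running|Completed')
echo "$failing"
```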
Application Health¶
# Check Longhorn storage health
kubectl get nodes.longhorn.io -n longhorn-system
# Check PostgreSQL clusters
kubectl get clusters.postgresql.cnpg.io -A
# Check ArgoCD applications
kubectl get applications -n argocd
# Check External Secrets Operator
kubectl get pods -n external-secrets
# Check ExternalSecrets and SecretStores
kubectl get externalsecrets,secretstores,clustersecretstores -A
# Check Loki health
kubectl get pods -n monitoring | grep loki
# Check Thanos components
kubectl get pods -n monitoring -l 'app.kubernetes.io/name in (thanos-query,thanos-store,thanos-compactor)'
Logs and Debugging¶
# View pod logs
kubectl logs <pod-name> -n <namespace>
# Follow logs (tail -f equivalent)
kubectl logs <pod-name> -n <namespace> -f
# Previous container logs (after crash)
kubectl logs <pod-name> -n <namespace> --previous
# Multiple containers in pod
kubectl logs <pod-name> -n <namespace> -c <container-name>
# Exec into pod
kubectl exec -it <pod-name> -n <namespace> -- /bin/bash
# Port forward
kubectl port-forward -n <namespace> <pod-name> 8080:80
Resource Management¶
# Get resource usage
kubectl top nodes
kubectl top pods -A
# Get resource requests/limits
kubectl describe nodes | grep -A 5 "Allocated resources"
# Scale deployment
kubectl scale deployment <deployment-name> -n <namespace> --replicas=3
# Restart deployment (rolling restart)
kubectl rollout restart deployment <deployment-name> -n <namespace>
# Check rollout status
kubectl rollout status deployment <deployment-name> -n <namespace>
Git Workflow¶
Committing Changes¶
OpenTofu changes (kube-hetzner/):
cd kube-hetzner
git add kube.tf extra-manifests/
git commit -m "feat: add monitoring stack to infrastructure"
git push
ArgoCD Application changes (argoapps/):
cd argoapps
npm run build
git add apps/ dist/
git commit -m "feat: add myapp ArgoCD application"
git push
Infrastructure deployment changes (dp-infra/):
cd dp-infra/monitoring
npm run build
git add manifests/ config.yaml charts/
git commit -m "feat: scale Prometheus resources"
git push
# ArgoCD auto-syncs (or manual sync)
argocd app sync monitoring
Documentation changes:
cd documentation
git add sources/
git commit -m "docs: add troubleshooting guide for Loki"
git push
Environment Variables¶
Required for OpenTofu¶
See .env.example in kube-hetzner/ for complete list. Key variables:
# Hetzner Cloud
TF_VAR_hcloud_token="<hetzner-api-token>"
# SSH Keys
TF_VAR_private_key="/path/to/private/key"
TF_VAR_public_key="ssh-ed25519 AAAA..."
# S3 Credentials (Hetzner Object Storage)
TF_VAR_hetzner_s3_access_key="<access-key>"
TF_VAR_hetzner_s3_secret_key="<secret-key>"
# S3 Endpoints
TF_VAR_etcdbackup_s3_endpoint="hel1.your-objectstorage.com" # No protocol!
TF_VAR_production_s3_endpoint="https://fsn1.your-objectstorage.com" # With protocol!
# Longhorn Backup (CIFS/SMB)
TF_VAR_longhorn_cifs_url="cifs://u123456-sub1.your-storagebox.de/u123456-sub1"
TF_VAR_longhorn_cifs_username="u123456-sub1"
TF_VAR_longhorn_cifs_password="<password>"
CRITICAL Format Notes:
etcd backup endpoint: WITHOUT the https:// protocol
Production S3 endpoint: WITH the https:// protocol
CIFS URL: MUST include the cifs:// protocol prefix
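These formats can be verified before running tofu. A pre-flight sketch (not part of the repo); the sample values mirror the .env.example formats shown above:

```shell
# Hypothetical pre-flight check for the endpoint formats described above.
check_endpoints() {
  case "$TF_VAR_etcdbackup_s3_endpoint" in
    http://*|https://*) echo "etcd endpoint must NOT include a protocol"; return 1 ;;
  esac
  case "$TF_VAR_production_s3_endpoint" in
    https://*) : ;;
    *) echo "production endpoint MUST start with https://"; return 1 ;;
  esac
  case "$TF_VAR_longhorn_cifs_url" in
    cifs://*) : ;;
    *) echo "CIFS URL MUST start with cifs://"; return 1 ;;
  esac
  echo "endpoint formats OK"
}

# Sample values (illustrative, matching the documented formats):
TF_VAR_etcdbackup_s3_endpoint="hel1.your-objectstorage.com"
TF_VAR_production_s3_endpoint="https://fsn1.your-objectstorage.com"
TF_VAR_longhorn_cifs_url="cifs://u123456-sub1.your-storagebox.de/u123456-sub1"
check_endpoints
```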
Required for kubectl¶
# Cluster access
export KUBECONFIG=kube-hetzner/kup6s_kubeconfig.yaml
Monitoring Endpoints¶
User-facing UIs:
Grafana: https://grafana.ops.kup6s.net
ArgoCD: https://argocd.ops.kup6s.net
Longhorn UI: https://longhorn.ops.kup6s.net
Internal services (use port-forward):
# Prometheus
kubectl port-forward -n monitoring svc/prometheus 9090:9090
# Access at http://localhost:9090
# Thanos Query
kubectl port-forward -n monitoring svc/thanos-query 9090:9090
# Access at http://localhost:9090
# Loki
kubectl port-forward -n monitoring svc/loki-read 3100:3100
# Access at http://localhost:3100
# Alertmanager
kubectl port-forward -n monitoring svc/alertmanager 9093:9093
# Access at http://localhost:9093
Domain Structure¶
*.ops.kup6s.net - Infrastructure tools (ArgoCD, Grafana, Longhorn UI)
*.sites.kup6s.com - Customer/project websites
*.nodes.kup6s.com - Node-level reverse DNS only
Further Reading¶
Apply Infrastructure Changes How-To - Detailed OpenTofu workflow
Architecture Overview - System architecture
Monitoring Deployment - Monitoring stack documentation
GitLab BDA Deployment - GitLab BDA documentation