Reference

Extra Manifests

Complete reference for all infrastructure manifests deployed during KUP6S cluster creation.

Overview

Extra manifests are Kubernetes resources deployed automatically during cluster provisioning via Kustomize. These are infrastructure-tier components that must exist before applications can be deployed.

Location: kube-hetzner/extra-manifests/

Deployment: Applied by kube-hetzner during tofu apply via kustomization

Management: Declarative, version-controlled via OpenTofu

Deployment Groups and Semantic Organization

Manifests are organized into semantic groups (10, 20, 30, etc.) with letter suffixes (A, B, C) for ordering within each group. This provides clear functional grouping and makes dependencies explicit.

Grouping Schema: GROUP-LETTER-description.yaml.tpl

  • GROUP: Numeric group (10=Foundation, 20=Security, 30=Networking, etc.)

  • LETTER: Uppercase letter for ordering within group (A, B, C, D…)

  • description: Descriptive component name

Deployment Groups:

  • Group 10: Foundation (namespaces)

  • Group 20: Security & Secrets (cert-manager, ESO, application secrets)

  • Group 30: Networking (Traefik ingress, dashboards)

  • Group 40: Storage (SMB, Longhorn)

  • Group 50: Provisioning (Crossplane, S3 buckets)

  • Group 70: Observability (Prometheus, Grafana, Loki)

  • Group 80: GitOps (ArgoCD)

Manifest Inventory

Group | Manifest | Category | Purpose
------|----------|----------|--------
10-A | namespaces.yaml.tpl | Foundation | Create namespaces for infrastructure components
20-A | cert-manager.yaml.tpl | Security | TLS certificate management (Let’s Encrypt)
20-B | external-secrets-operator.yaml.tpl | Security | External Secrets Operator for centralized secrets
20-C | application-secrets.yaml.tpl | Security | ClusterSecretStore for application secrets
30-A | traefik-basicauth.yaml.tpl | Networking | Basic auth middleware for infrastructure UIs
30-B | traefik-dashboard.yaml.tpl | Networking | Traefik dashboard ingress
40-A | csi-driver-smb.yaml.tpl | Storage | SMB/CIFS CSI driver (Hetzner Storage Box)
40-B | smb-storageclass.yaml.tpl | Storage | StorageClass for SMB volumes
40-C | longhorn-backup-secret.yaml.tpl | Storage | CIFS credentials for Longhorn backups
40-D | longhorn-recurring-backup.yaml.tpl | Storage | Recurring backup jobs for Longhorn
40-E | longhorn-ingress.yaml.tpl | Storage | Longhorn UI ingress
40-F | longhorn-storage-classes.yaml.tpl | Storage | Custom Longhorn storage classes
50-A | crossplane.yaml.tpl | Provisioning | Crossplane operator for infrastructure management
50-B | crossplane-provider-s3.yaml.tpl | Provisioning | AWS S3 provider (Hetzner Object Storage)
50-C | etcd-backup-bucket.yaml.tpl | Provisioning | S3 bucket for etcd backups (DR region)
50-D | loki-s3-bucket.yaml.tpl | Provisioning | S3 bucket for Loki logs (automatic deployment)
70-A | kube-prometheus-stack.yaml.tpl | Observability | Prometheus, Grafana, Loki monitoring stack
80-A | argocd.yaml.tpl | GitOps | ArgoCD for application deployments

Manifest Details

Group 10 - Foundation

10-A-namespaces.yaml.tpl

Purpose: Creates namespaces for infrastructure components.

Namespaces:

  • argocd - GitOps deployments

  • crossplane-system - Infrastructure provisioning

  • monitoring - Observability stack

Dependency: None (must be first)
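
For orientation, a minimal sketch of what this manifest plausibly contains (plain Namespace objects; the actual file may carry additional labels or template variables):

apiVersion: v1
kind: Namespace
metadata:
  name: argocd              # GitOps deployments
---
apiVersion: v1
kind: Namespace
metadata:
  name: crossplane-system   # infrastructure provisioning
---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring          # observability stack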

Group 20 - Security & Secrets

20-A-cert-manager.yaml.tpl

Purpose: TLS certificate management with Let’s Encrypt.

Features:

  • Automatic certificate issuance

  • Renewal automation

  • ClusterIssuer for Let’s Encrypt (production)

Dependencies: None

Credentials: None required (ACME HTTP-01 challenge)
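
A minimal sketch of a production Let’s Encrypt ClusterIssuer using the HTTP-01 solver; the issuer name, contact email, and ingress class below are illustrative, not taken from the actual manifest:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod              # assumed issuer name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: traefik            # assumed ingress class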

20-B-external-secrets-operator.yaml.tpl

Purpose: External Secrets Operator for centralized secret management.

Version: 0.20.4

Components:

  • Main operator (secret reconciliation)

  • Webhook (validation)

  • Cert-controller (TLS certificates)

Supported Backends:

  • Kubernetes (cross-cluster)

  • HashiCorp Vault

  • Cloud providers (AWS, Azure, GCP)

  • Password managers

Dependencies: 10-A-namespaces.yaml.tpl (external-secrets namespace)

See Also: Tutorial: Cross-Namespace Secrets

20-C-application-secrets.yaml.tpl

Purpose: ClusterSecretStore for application secrets management.

Features:

  • Centralized secret synchronization

  • Cross-namespace secret access

  • Integration with External Secrets Operator

Dependencies: 20-B-external-secrets-operator.yaml.tpl (ESO must be installed)
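
As a hedged sketch only (the real manifest may use a different backend and names), a ClusterSecretStore backed by the in-cluster Kubernetes provider could look roughly like this:

apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: application-secrets                 # assumed store name
spec:
  provider:
    kubernetes:
      remoteNamespace: application-secrets  # assumed source namespace holding the secrets
      server:
        caProvider:
          type: ConfigMap
          name: kube-root-ca.crt
          namespace: application-secrets
          key: ca.crt
      auth:
        serviceAccount:
          name: eso-store-reader            # assumed ServiceAccount with read access to the source secrets
          namespace: application-secrets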

Group 30 - Networking

30-A-traefik-basicauth.yaml.tpl

Purpose: Basic authentication middleware for infrastructure dashboards.

Protected Services:

  • Traefik dashboard

  • Longhorn UI

  • ArgoCD (optional)

Credentials: From TF_VAR_traefik_basicauth_user and TF_VAR_traefik_basicauth_password

Security: htpasswd-format credentials, stored in a Kubernetes Secret.
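
A hedged sketch of the kind of basic-auth Middleware and htpasswd Secret involved; the namespace and Secret name are assumptions, while the middleware name matches the traefik-tools-auth reference used by the ingress manifests below (older Traefik releases use the traefik.containo.us/v1alpha1 API group):

apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: traefik-tools-auth              # referenced by the dashboard ingresses
  namespace: kube-system                # assumed namespace
spec:
  basicAuth:
    secret: traefik-tools-auth-secret   # assumed Secret name
---
apiVersion: v1
kind: Secret
metadata:
  name: traefik-tools-auth-secret
  namespace: kube-system
stringData:
  users: "admin:$apr1$..."              # htpasswd hash generated from the TF_VAR_* credentials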

30-B-traefik-dashboard.yaml.tpl

Purpose: Ingress for Traefik dashboard.

URL: https://traefik.ops.kup6s.net

Authentication: Basic auth (traefik-tools-auth middleware)

Dependencies:

  • 20-A-cert-manager.yaml.tpl (TLS certificate)

  • 30-A-traefik-basicauth.yaml.tpl (authentication)
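
Illustrative only: one common way to expose the Traefik dashboard is an IngressRoute pointing at the built-in api@internal service; the exact resource kind, namespace, and TLS wiring in the real manifest may differ:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: kube-system                 # assumed namespace
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.ops.kup6s.net`)
      kind: Rule
      middlewares:
        - name: traefik-tools-auth       # basic auth from 30-A
      services:
        - name: api@internal
          kind: TraefikService
  tls:
    secretName: traefik-dashboard-tls    # assumed cert-manager-issued certificate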

Group 40 - Storage

40-A-csi-driver-smb.yaml.tpl

Purpose: Deploys SMB/CIFS CSI driver for Hetzner Storage Box integration.

Use Case: Mount Hetzner Storage Boxes as persistent volumes.

Dependencies: None

Configuration: HelmChart CRD deploying csi-driver-smb from official repository.
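
A hedged sketch of that HelmChart resource; the namespace and values are assumptions, and the repository URL is the commonly documented upstream chart location:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: csi-driver-smb
  namespace: kube-system                 # assumed deployment namespace
spec:
  repo: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
  chart: csi-driver-smb
  targetNamespace: kube-system
  valuesContent: |-
    # chart values, if any, would go here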

40-B-smb-storageclass.yaml.tpl

Purpose: Defines StorageClass for SMB volumes.

Parameters:

  • Credentials: Pulled from Kubernetes secret

  • Protocol: SMB 3.0

  • Mount options: vers=3.0, dir_mode=0777, file_mode=0777

Dependencies: 40-A-csi-driver-smb.yaml.tpl

Usage: PVCs reference the smb storageClassName.
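
A hedged sketch of such a StorageClass; the share path and credential Secret names are placeholders, while the provisioner name, secret parameters, and mount options follow the csi-driver-smb conventions:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: //uXXXXXX.your-storagebox.de/backup                   # placeholder Storage Box share
  csi.storage.k8s.io/provisioner-secret-name: smb-credentials   # assumed Secret name
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: smb-credentials
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Retain                                           # assumed
volumeBindingMode: Immediate
mountOptions:
  - vers=3.0
  - dir_mode=0777
  - file_mode=0777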

40-C-longhorn-backup-secret.yaml.tpl

Purpose: CIFS credentials for Longhorn backup to Hetzner Storage Box.

Credentials: From TF_VAR_longhorn_cifs_* variables

Important:

  • URL must include cifs:// protocol prefix

  • Backup target configuration is in kube.tf Helm values (not in this manifest)

Dependencies: Longhorn installed (deployed by kube-hetzner)

Note: Backup target is configured via Longhorn Helm values in kube.tf:

defaultSettings:
  backupTarget: "${var.longhorn_cifs_url}"
  backupTargetCredentialSecret: "cifs-secret"
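
A hedged sketch of the credential Secret itself, assuming the key names Longhorn expects for CIFS backup targets (CIFS_USERNAME / CIFS_PASSWORD) and hypothetical template variable names:

apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret                            # matches backupTargetCredentialSecret above
  namespace: longhorn-system
stringData:
  CIFS_USERNAME: "${longhorn_cifs_username}"   # hypothetical template variable
  CIFS_PASSWORD: "${longhorn_cifs_password}"   # hypothetical template variable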

40-D-longhorn-recurring-backup.yaml.tpl

Purpose: Defines recurring backup jobs for Longhorn volumes.

Schedule: Daily backups at configured time

Target: Hetzner Storage Box (CIFS)

Retention: Configurable per backup job

Dependencies: 40-C-longhorn-backup-secret.yaml.tpl (CIFS credentials)
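
A hedged sketch of a Longhorn RecurringJob of this kind; the job name, schedule, retention, and group are placeholders:

apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-backup            # assumed job name
  namespace: longhorn-system
spec:
  cron: "0 2 * * *"             # placeholder: 02:00 daily
  task: backup
  groups:
    - default                   # applies to volumes in the default group
  retain: 7                     # placeholder retention count
  concurrency: 1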

40-E-longhorn-ingress.yaml.tpl

Purpose: Ingress for Longhorn UI.

URL: https://longhorn.ops.kup6s.net

Authentication: Basic auth (traefik-tools-auth middleware)

Dependencies:

  • 20-A-cert-manager.yaml.tpl (TLS)

  • 30-A-traefik-basicauth.yaml.tpl (auth)

  • Longhorn deployed

40-F-longhorn-storage-classes.yaml.tpl

Purpose: Custom Longhorn storage classes with different replica counts.

Storage Classes:

  1. longhorn-redundant-app (1 replica)

    • For apps with built-in replication (PostgreSQL, Redis)

    • Data locality: best-effort

  2. longhorn (2 replicas) - DEFAULT

    • General purpose storage

    • Data locality: best-effort

  3. longhorn-ha (3 replicas)

    • Mission-critical data

    • Data locality: disabled (max HA)

Dependencies: Longhorn installed

See Also: Storage Architecture
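
For illustration, the high-availability class likely resembles the following; only numberOfReplicas and dataLocality follow from the description above, the remaining parameters are assumptions:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-ha
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"
  dataLocality: "disabled"      # maximize HA over locality
  staleReplicaTimeout: "30"     # assumed value (minutes)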

Group 50 - Provisioning (Crossplane)

50-A-crossplane.yaml.tpl

Purpose: Crossplane operator for infrastructure provisioning as Kubernetes resources.

Version: v2.0.2

Use Cases:

  • S3 bucket management

  • Future: Cloud resources as CRDs

Dependencies: 10-A-namespaces.yaml.tpl (crossplane-system namespace)

Resources:

  • CPU: 100m request, 500m limit

  • Memory: 256Mi request, 512Mi limit

50-B-crossplane-provider-s3.yaml.tpl

Purpose: AWS S3 provider for Crossplane (configured for Hetzner Object Storage).

Configuration:

  • Endpoint: Hetzner S3 (fsn1, hel1 regions)

  • Authentication: Shared Hetzner S3 credentials

  • Features: Bucket creation, deletion, lifecycle policies

Dependencies:

  • 50-A-crossplane.yaml.tpl (Crossplane operator)

  • Credentials: TF_VAR_hetzner_s3_access_key, TF_VAR_hetzner_s3_secret_key

Important: Includes services: [s3] to force custom endpoint usage.
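
A hedged sketch of the Provider and ProviderConfig pair for the upbound AWS S3 provider pointed at a Hetzner endpoint; the package version, Secret name, ProviderConfig name, and endpoint URL are illustrative:

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-s3
spec:
  package: xpkg.upbound.io/upbound/provider-aws-s3:v1.14.0    # assumed version tag
---
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default                                               # assumed ProviderConfig name
spec:
  credentials:
    source: Secret
    secretRef:
      name: hetzner-s3-credentials                            # assumed Secret built from TF_VAR_hetzner_s3_*
      namespace: crossplane-system
      key: creds
  endpoint:
    services: [s3]                                            # forces the custom endpoint for S3 calls
    url:
      type: Static
      static: https://fsn1.your-objectstorage.com             # assumed Hetzner Object Storage endpoint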

50-C-etcd-backup-bucket.yaml.tpl

Purpose: Creates S3 bucket for etcd backups in disaster recovery region (hel1).

Bucket Name: backup-etcd-kup6s

Region: hel1 (Helsinki) - separate from production cluster region for DR

Deletion Policy: Orphan (bucket preserved if Crossplane resource deleted)

Dependencies:

  • 50-B-crossplane-provider-s3.yaml.tpl (provider must be ready)

  • Automatic deployment: CRD wait conditions ensure provider is ready

Deployment: Automatically deployed during cluster provisioning
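
A hedged sketch of the Bucket managed resource; the ProviderConfig name is assumed, while the bucket name, region, and deletion policy come from the description above:

apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: backup-etcd-kup6s
spec:
  deletionPolicy: Orphan          # keep the bucket if the Crossplane resource is deleted
  forProvider:
    region: hel1                  # DR region, separate from the cluster region
  providerConfigRef:
    name: default                 # assumed ProviderConfig name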

50-D-loki-s3-bucket.yaml.tpl

Purpose: Creates S3 bucket for Loki log storage via Crossplane.

Bucket Name: logs-loki-kup6s

Region: fsn1 (Falkenstein) - same region as production cluster for lower latency

Features:

  • Versioning enabled

  • Lifecycle policy: Delete logs after 90 days

Dependencies:

  • 50-B-crossplane-provider-s3.yaml.tpl (provider must be ready)

  • Automatic deployment: CRD wait conditions ensure provider is ready

Deployment: Automatically deployed during cluster provisioning (timing issues resolved with CRD wait conditions)

Historical Note: Previously commented out and applied manually due to CRD timing issues. Now automatically deployed.

Group 70 - Observability

70-A-kube-prometheus-stack.yaml.tpl

Purpose: Complete monitoring stack with Prometheus, Grafana, and Loki.

Components:

  • Prometheus (metrics collection)

  • Grafana (visualization)

  • Loki (log aggregation)

  • Alertmanager (alert routing)

  • Alloy (log/metrics collection agent)

Configuration:

  • Loki storage: S3 (logs-loki-kup6s bucket)

  • Grafana: Ingress at grafana.ops.kup6s.net

  • Prometheus: ServiceMonitor auto-discovery

Dependencies:

  • 50-D-loki-s3-bucket.yaml.tpl (S3 storage for Loki)

  • Credentials: Hetzner S3 for Loki
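
Purely as an orientation sketch (not the actual valuesContent), the S3 object storage section of Loki’s Helm values typically takes a shape like this; the endpoint and credential wiring are assumptions:

loki:
  storage:
    type: s3
    bucketNames:
      chunks: logs-loki-kup6s
    s3:
      endpoint: https://fsn1.your-objectstorage.com   # assumed Hetzner endpoint
      region: fsn1
      accessKeyId: "${hetzner_s3_access_key}"         # hypothetical template variables
      secretAccessKey: "${hetzner_s3_secret_key}"
      s3ForcePathStyle: true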

Group 80 - GitOps

80-A-argocd.yaml.tpl

Purpose: ArgoCD for GitOps-based application deployments.

URL: https://argocd.ops.kup6s.net

Features:

  • Declarative application deployment

  • Git as source of truth

  • Automated sync and self-heal

  • Multi-source applications

Dependencies:

  • 10-A-namespaces.yaml.tpl (argocd namespace)

  • 20-A-cert-manager.yaml.tpl (TLS)

Grouped by Function

The semantic grouping provides natural functional categories:

Group 10 - Foundation

  • 10-A-namespaces.yaml.tpl

Group 20 - Security & Secrets

  • 20-A-cert-manager.yaml.tpl

  • 20-B-external-secrets-operator.yaml.tpl

  • 20-C-application-secrets.yaml.tpl

Group 30 - Networking

  • 30-A-traefik-basicauth.yaml.tpl

  • 30-B-traefik-dashboard.yaml.tpl

Group 40 - Storage

  • 40-A-csi-driver-smb.yaml.tpl

  • 40-B-smb-storageclass.yaml.tpl

  • 40-C-longhorn-backup-secret.yaml.tpl

  • 40-D-longhorn-recurring-backup.yaml.tpl

  • 40-E-longhorn-ingress.yaml.tpl

  • 40-F-longhorn-storage-classes.yaml.tpl

Group 50 - Provisioning

  • 50-A-crossplane.yaml.tpl

  • 50-B-crossplane-provider-s3.yaml.tpl

  • 50-C-etcd-backup-bucket.yaml.tpl

  • 50-D-loki-s3-bucket.yaml.tpl

Group 70 - Observability

  • 70-A-kube-prometheus-stack.yaml.tpl

Group 80 - GitOps

  • 80-A-argocd.yaml.tpl

Kustomization

Manifests are referenced in kube-hetzner/extra-manifests/kustomization.yaml.tpl:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # Group 10 - Foundation
  - 10-A-namespaces.yaml

  # Group 20 - Security & Secrets
  - 20-A-cert-manager.yaml
  # External Secrets Operator MUST be deployed before ClusterSecretStore resources
  - 20-B-external-secrets-operator.yaml
  - 20-C-application-secrets.yaml

  # Group 30 - Networking
  - 30-A-traefik-basicauth.yaml
  - 30-B-traefik-dashboard.yaml

  # Group 40 - Storage
  - 40-A-csi-driver-smb.yaml
  - 40-B-smb-storageclass.yaml
  # CIFS secret for Longhorn backup (backup target configured via Helm values in kube.tf)
  - 40-C-longhorn-backup-secret.yaml
  - 40-D-longhorn-recurring-backup.yaml
  - 40-E-longhorn-ingress.yaml
  - 40-F-longhorn-storage-classes.yaml

  # Group 50 - Provisioning (Crossplane)
  - 50-A-crossplane.yaml
  - 50-B-crossplane-provider-s3.yaml
  - 50-C-etcd-backup-bucket.yaml
  - 50-D-loki-s3-bucket.yaml

  # Group 70 - Observability
  - 70-A-kube-prometheus-stack.yaml

  # Group 80 - GitOps
  - 80-A-argocd.yaml

Adding New Manifests

Semantic Grouping Convention

Choose the appropriate group and next available letter:

Group Numbers:

  • 10: Foundation (namespaces, basic infrastructure)

  • 20: Security & Secrets (cert-manager, ESO, secret stores)

  • 30: Networking (Traefik, ingress controllers)

  • 40: Storage (CSI drivers, Longhorn, storage classes)

  • 50: Provisioning (Crossplane, infrastructure-as-code operators)

  • 60: Databases (PostgreSQL, backup tools)

  • 70: Observability (Prometheus, Grafana, Loki, monitoring)

  • 80: GitOps (ArgoCD, Flux)

  • 90+: Reserved for future categories

Letter Suffixes: A, B, C, D… (order within group)

Naming Pattern: {GROUP}-{LETTER}-{description}.yaml.tpl

Examples:

  • Storage CSI driver: 40-G-driver-name.yaml.tpl (40-A through 40-F are already taken)

  • Monitoring tool: 70-B-tool-name.yaml.tpl

  • New namespace: 10-B-new-namespace.yaml.tpl

Creating a New Manifest

  1. Choose group: Determine functional category (10, 20, 30, etc.)

  2. Choose letter: Find next available letter in that group

  3. Create file: kube-hetzner/extra-manifests/{GROUP}-{LETTER}-component.yaml.tpl

  4. Add to kustomization: Edit kustomization.yaml.tpl in appropriate group section

  5. Template variables: Use ${variable} for credentials

  6. Test: Run tofu plan to validate

  7. Apply: Run bash scripts/apply-and-configure-longhorn.sh (mandatory for infrastructure changes)

Important: Always use the apply script, not direct tofu apply, to ensure proper infrastructure configuration.
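
As an illustration of steps 3–5 (all names here are hypothetical), a new observability manifest consuming a template variable might look like this, with a matching 70-B-tool-name.yaml entry added under the Group 70 comment in kustomization.yaml.tpl:

# kube-hetzner/extra-manifests/70-B-tool-name.yaml.tpl (hypothetical)
apiVersion: v1
kind: Secret
metadata:
  name: tool-name-credentials
  namespace: monitoring
stringData:
  api-token: "${tool_api_token}"   # rendered from TF_VAR_tool_api_token at tofu apply time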

HelmChart CRD Pattern

Most manifests use K3S’s HelmChart CRD:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: my-component
  namespace: my-namespace
spec:
  repo: https://charts.example.com
  chart: my-component
  version: 1.0.0
  targetNamespace: my-namespace
  valuesContent: |-
    # Helm values here

Benefits:

  • Declarative Helm deployments

  • Managed by OpenTofu

  • Version controlled

Dependencies and Order

Critical Ordering

The semantic grouping naturally enforces dependencies:

  1. Group 10 before all others - Namespaces required by all namespaced resources

  2. Group 20 before 30 - Security (ESO, cert-manager) before networking (ingresses)

  3. Group 50 before 70 - Crossplane/S3 buckets before monitoring (Loki needs S3)

  4. Within Group 20: ESO operator (20-B) before ClusterSecretStore (20-C)

  5. Within Group 50: Crossplane (50-A) → Provider (50-B) → Buckets (50-C, 50-D)

Timing Safeguards

Automatic CRD Wait Conditions: The kube-hetzner module now includes wait conditions in kustomization_user.tf to ensure:

  • Longhorn CRDs are registered before RecurringJob resources

  • External Secrets Operator webhook is ready before ClusterSecretStore resources

  • Crossplane S3 Provider CRDs are registered before Bucket resources

Result: All manifests, including S3 buckets, deploy automatically without manual intervention.

Historical Note: Previously, 50-D-loki-s3-bucket.yaml was commented out and required manual application after cluster creation due to CRD timing issues. This is now resolved with wait conditions.

Safe to Reorder

Within the same group, manifests can generally be reordered:

  • Group 40 storage components (letters C-F) are independent

  • Groups 70 and 80 are independent of each other

Exception: Always respect letter ordering when one component depends on another within the same group.

Troubleshooting

Manifest Not Applied

# Check kustomization includes it
cat kube-hetzner/extra-manifests/kustomization.yaml | grep my-manifest

# Check OpenTofu applied it (use bash with dotenv)
bash -c "cd kube-hetzner && dotenv .env && tofu show | grep my-manifest"

# Manually apply if needed
kubectl apply -f kube-hetzner/extra-manifests/GG-L-my-manifest.yaml

CRD Timing Issues

Should not occur with current setup - wait conditions handle CRD timing automatically.

If you still see “CRD not found” errors despite wait conditions:

  1. Check wait condition logs in kube-hetzner terraform output

  2. Verify the operator pod is running:

    kubectl get pods -n crossplane-system  # For Crossplane
    kubectl get pods -n external-secrets   # For ESO
    kubectl get pods -n longhorn-system    # For Longhorn
    
  3. If CRDs still missing, operator may have failed to install

Credential Issues

Verify environment variables are loaded:

echo $TF_VAR_hcloud_token  # Should show token

If template variables are not replaced in the rendered manifests, the .env file was most likely not sourced before running tofu.

S3 Bucket Not Created

If Loki or etcd backups fail with “NoSuchBucket” errors:

  1. Check bucket exists:

    kubectl get buckets.s3.aws.upbound.io -A
    
  2. Check Crossplane provider:

    kubectl get provider -A
    kubectl get providerconfig -A
    
  3. If bucket missing, re-apply infrastructure:

    cd kube-hetzner
    bash scripts/apply-and-configure-longhorn.sh
    

Further Reading