Helm Deployment Strategies: K3s HelmChart vs cdk8s Helm

Problem Statement

When using cdk8s to generate Kubernetes manifests that include Helm charts, there are two fundamentally different approaches:

  1. cdk8s Helm (import { Helm } from 'cdk8s') - Render charts at build time

  2. K3s HelmChart CRD (ApiObject with kind: HelmChart) - Deploy charts at runtime

This document explains why we chose the K3s HelmChart CRD approach and why it must remain as ApiObject instead of being migrated to type-safe imports.

TL;DR

Decision: Use K3s HelmChart CRD (as ApiObject) for all Helm chart deployments.

Rationale: Clean manifests, readable git history, and proper Helm lifecycle management outweigh the loss of type safety for this specific resource type.

The Two Approaches

Approach 1: cdk8s Helm (Build-Time Rendering)

import { Helm } from 'cdk8s';

new Helm(this, 'prometheus', {
  chart: 'prometheus-community/kube-prometheus-stack',
  version: '67.6.1',
  values: {
    prometheus: {
      prometheusSpec: {
        replicas: 2,
      },
    },
  },
});

What happens:

  1. During cdk8s synth, cdk8s runs helm template

  2. Chart is expanded into raw Kubernetes manifests

  3. All rendered manifests are included in output YAML

  4. ArgoCD syncs thousands of individual resources

Example output size:

# Single chart expanded
$ wc -l manifests/monitoring.k8s.yaml
5247 manifests/monitoring.k8s.yaml

# With 5 charts (Prometheus, Loki, Alloy, CNPG operator, Barman plugin)
$ wc -l manifests/*.k8s.yaml
25000+ total

Git diff for version bump:

# 5000+ lines of changes for a simple version bump
- version: 67.6.1
+ version: 67.6.2

# Plus hundreds of incidental changes:
#   - ConfigMap data hashes changed
#   - Secret resource versions changed
#   - generated names changed (random suffixes)
#   - image digests updated

Approach 2: K3s HelmChart CRD (Runtime Deployment)

import { ApiObject } from 'cdk8s';

new ApiObject(this, 'prometheus', {
  apiVersion: 'helm.cattle.io/v1',
  kind: 'HelmChart',
  metadata: {
    name: 'kube-prometheus-stack',
    namespace: 'monitoring',
  },
  spec: {
    repo: 'https://prometheus-community.github.io/helm-charts',
    chart: 'kube-prometheus-stack',
    version: '67.6.1',
    targetNamespace: 'monitoring',
    valuesContent: `
      prometheus:
        prometheusSpec:
          replicas: 2
    `,
  },
});
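
A note on valuesContent: it is a raw YAML string, so indentation inside the template literal must stay consistent. One way to avoid hand-indentation bugs is to build the values as a plain object and serialize them once; a minimal sketch, assuming the 'yaml' npm package is added as a dependency:

import { stringify } from 'yaml'; // assumed dependency: the 'yaml' npm package

// Build values as a plain object and serialize once, instead of
// hand-indenting YAML inside a template literal.
const valuesContent = stringify({
  prometheus: {
    prometheusSpec: { replicas: 2 },
  },
});
// Pass the resulting string as spec.valuesContent in the HelmChart above.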

What happens:

  1. cdk8s synth outputs a HelmChart resource (50-100 lines)

  2. ArgoCD syncs the HelmChart resource to cluster

  3. K3s Helm controller downloads and installs the chart

  4. Helm release is managed by K3s at runtime

Example output size:

# Single chart as HelmChart resource
$ grep -A 90 "kind: HelmChart" manifests/monitoring.k8s.yaml | wc -l
87

# With 5 charts plus other resources
$ wc -l manifests/monitoring.k8s.yaml
847 manifests/monitoring.k8s.yaml

Git diff for version bump:

# Clean, readable 1-line change
- version: 67.6.1
+ version: 67.6.2

Comparison Matrix

| Aspect | cdk8s Helm | K3s HelmChart CRD |
| --- | --- | --- |
| Manifest Size | 5,000+ lines per chart | 50-100 lines per chart |
| Total Size (5 charts) | ~25,000 lines | ~850 lines |
| Git History | Unreadable (massive diffs) | Clean (1-line changes) |
| Code Review | Impractical (5,000-line diffs) | Easy (focused changes) |
| Type Safety | ✅ Yes (cdk8s API) | ❌ No (ApiObject) |
| Helm Features | ❌ Lost (no releases) | ✅ Full support |
| Helm CLI | ❌ helm list doesn't work | ✅ All commands work |
| Rollback | ❌ Manual (kubectl apply old manifest) | ✅ helm rollback |
| Chart Updates | Re-synth + commit 5,000 lines | Change version field |
| ArgoCD Sync | Slow (1,000s of resources) | Fast (1 resource) |
| Build Time | Slow (helm template per chart) | Fast (no rendering) |
| Dependencies | Requires Helm CLI installed | No local dependencies |

Decision Rationale

Why K3s HelmChart CRD?

1. Manifest Clarity (Primary Reason)

Generated manifests remain human-readable and maintainable. With 5 Helm charts, the difference is:

  • cdk8s Helm: 25,000 lines of generated YAML

  • HelmChart CRD: 850 lines total

2. Git History Quality

Git history shows what changed instead of what Helm regenerated:

# cdk8s Helm version bump
$ git log --oneline -1
a1b2c3d Update Prometheus version

$ git diff HEAD~1
... 5000+ lines of unreadable ConfigMap/Secret churn in one generated file ...

# HelmChart CRD version bump
$ git log --oneline -1
e4f5a6b Update Prometheus to v67.6.2

$ git diff HEAD~1
- version: 67.6.1
+ version: 67.6.2

3. Helm Lifecycle Management

K3s HelmChart preserves full Helm functionality:

# Works with HelmChart CRD
$ helm list -A
NAME                    NAMESPACE   STATUS      CHART
kube-prometheus-stack   monitoring  deployed    kube-prometheus-stack-67.6.1

$ helm rollback kube-prometheus-stack -n monitoring
Rollback was a success!

# Doesn't work with cdk8s Helm (no Helm releases)
$ helm list -A
# (empty: no Helm releases exist; ArgoCD applies the rendered manifests directly)

4. ArgoCD Sync Performance

  • cdk8s Helm: ArgoCD syncs 1000+ individual resources (ConfigMaps, Secrets, Deployments, Services, etc.)

  • HelmChart CRD: ArgoCD syncs 1 HelmChart resource, K3s handles the rest

5. Developer Experience

Chart version updates are simple, visible changes:

// Clear, intentional change
spec: {
  chart: 'kube-prometheus-stack',
- version: '67.6.1',
+ version: '67.6.2',
}

// vs thousands of lines of regenerated YAML

Why Keep as ApiObject?

Even though we migrated other CRDs to type-safe imports, HelmChart stays as ApiObject because:

1. CRD Not Published Conventionally

K3s HelmChart CRD is:

  • Embedded in K3s binary (not distributed separately)

  • No separately versioned CRD releases on GitHub (no helmchart-crd-v1.0.0.yaml artifact)

  • Defined in Go structs, not YAML manifests

  • Part of K3s internals, not a standalone project

We could extract it:

kubectl get crd helmcharts.helm.cattle.io -o yaml > helmchart-crd.yaml
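# A typed class could then be generated from the extracted file
# (hypothetical follow-up using the cdk8s CLI's CRD import):
cdk8s import ./helmchart-crd.yaml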

But this creates problems:

  • CRD tied to specific K3s version

  • Must re-extract after K3s upgrades

  • No upstream versioning or changelog

  • Maintenance burden for minimal benefit

2. Simple, Stable Spec

The HelmChart spec is straightforward and rarely changes:

interface HelmChartSpec {
  repo: string;             // Helm repo URL
  chart: string;            // Chart name
  version?: string;         // Chart version (latest if omitted)
  targetNamespace?: string; // Defaults to the HelmChart's own namespace
  valuesContent?: string;   // Inline YAML values
}

No complex enums, no strict validation, no type confusion. Type safety adds minimal value.
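
If some compile-time checking is still desired, a thin hand-written wrapper over ApiObject covers the entire spec in a few lines. A minimal sketch; the HelmChartProps interface and wrapper class below are our own convention, not generated from the CRD:

import { Construct } from 'constructs';
import { ApiObject } from 'cdk8s';

// Hand-written props mirroring the HelmChart spec fields we actually use.
export interface HelmChartProps {
  readonly name: string;
  readonly namespace: string;
  readonly repo: string;
  readonly chart: string;
  readonly version?: string;
  readonly targetNamespace?: string;
  readonly valuesContent?: string;
}

// Thin typed wrapper: compile-time checks without importing the CRD.
export class HelmChart extends ApiObject {
  constructor(scope: Construct, id: string, props: HelmChartProps) {
    super(scope, id, {
      apiVersion: 'helm.cattle.io/v1',
      kind: 'HelmChart',
      metadata: { name: props.name, namespace: props.namespace },
      spec: {
        repo: props.repo,
        chart: props.chart,
        version: props.version,
        targetNamespace: props.targetNamespace ?? props.namespace,
        valuesContent: props.valuesContent,
      },
    });
  }
}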

3. Low Change Frequency

Once deployed, HelmChart resources change infrequently:

  • Initial deployment: Set up chart reference

  • Occasional updates: Version bump or values change

  • Rare modifications: Repo change

Not worth importing the CRD for five rarely-changed resources.

4. K3s-Specific (Not Portable)

HelmChart only works in K3s clusters. Other distributions use:

  • Flux (HelmRelease CRD)

  • ArgoCD (Application with Helm source)

  • Helm Operator (various implementations)

Importing a K3s-specific CRD doesn’t improve portability.
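
The ApiObject pattern itself ports cleanly, though: on a Flux-based cluster the same construct shape targets HelmRelease instead. A minimal sketch, assuming Flux's v2 API and a pre-existing HelmRepository named prometheus-community in flux-system:

import { ApiObject } from 'cdk8s';

// Flux equivalent of the HelmChart resource; only the schema differs.
new ApiObject(this, 'prometheus', {
  apiVersion: 'helm.toolkit.fluxcd.io/v2', // v2beta2 on older Flux versions
  kind: 'HelmRelease',
  metadata: { name: 'kube-prometheus-stack', namespace: 'monitoring' },
  spec: {
    interval: '10m',
    chart: {
      spec: {
        chart: 'kube-prometheus-stack',
        version: '67.6.1',
        sourceRef: {
          kind: 'HelmRepository',
          name: 'prometheus-community', // assumed to exist
          namespace: 'flux-system',
        },
      },
    },
    values: {
      prometheus: { prometheusSpec: { replicas: 2 } },
    },
  },
});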

Real-World Example

Prometheus Stack Deployment

With cdk8s Helm:

# manifests/monitoring.k8s.yaml (5247 lines)
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-kube-prometheus-prometheus
  namespace: monitoring
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-kube-prometheus-prometheus-rulefiles-0
  namespace: monitoring
data:
  monitoring-prometheus-kube-prometheus-alertmanager.rules.yaml: |
    groups:
    - name: alertmanager.rules
      rules:
      - alert: AlertmanagerConfigInconsistent
        annotations:
          description: The configuration of the instances of the...
# ... 5000+ more lines ...

With K3s HelmChart CRD:

# manifests/monitoring.k8s.yaml (87 lines for this chart)
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  repo: https://prometheus-community.github.io/helm-charts
  chart: kube-prometheus-stack
  version: 67.6.1
  targetNamespace: monitoring
  valuesContent: |
    prometheus:
      prometheusSpec:
        replicas: 2
        retention: 3d
        storageSpec:
          volumeClaimTemplate:
            spec:
              storageClassName: longhorn
              resources:
                requests:
                  storage: 3Gi
    grafana:
      ingress:
        enabled: true
        hosts:
          - grafana.ops.kup6s.net
# ... 50 more lines of values ...
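
The sync-wave annotation in the manifest above is set on the construct side; a minimal sketch of the corresponding cdk8s metadata (values abridged):

import { ApiObject } from 'cdk8s';

new ApiObject(this, 'prometheus', {
  apiVersion: 'helm.cattle.io/v1',
  kind: 'HelmChart',
  metadata: {
    name: 'kube-prometheus-stack',
    namespace: 'monitoring',
    annotations: {
      // Sync after lower waves (namespaces, operators) are healthy.
      'argocd.argoproj.io/sync-wave': '2',
    },
  },
  spec: {
    repo: 'https://prometheus-community.github.io/helm-charts',
    chart: 'kube-prometheus-stack',
    version: '67.6.1',
    targetNamespace: 'monitoring',
    // valuesContent abridged; see the YAML above for the full values
  },
});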

Implementation in This Codebase

Current Usage (5 HelmChart resources)

  1. CNPG Operator (cnpg/charts/constructs/operator.ts)

    new ApiObject(this, 'cnpg-operator', {
      apiVersion: 'helm.cattle.io/v1',
      kind: 'HelmChart',
      spec: {
        repo: 'https://cloudnative-pg.github.io/charts',
        chart: 'cloudnative-pg',
        version: '0.23.4',
      },
    });
    
  2. Barman Cloud Plugin (cnpg/charts/constructs/barman-plugin.ts)

  3. Prometheus Stack (monitoring/charts/constructs/prometheus-stack.ts)

  4. Loki (monitoring/charts/constructs/loki.ts)

  5. Alloy (monitoring/charts/constructs/alloy.ts)

Migration Status

During the CRD type-safety migration project (Phases 1-3), HelmChart was intentionally kept as ApiObject:

  • Phase 1: ServiceMonitor, PodMonitor → Type-safe

  • Phase 2: ExternalSecret → Type-safe

  • Phase 3: CNPG Pooler → Type-safe

  • HelmChart: Kept as ApiObject (architectural decision)

Rationale is documented in the CRD Migration Plan.

Alternatives Considered

Alternative 1: Import HelmChart CRD from Cluster

Approach:

kubectl get crd helmcharts.helm.cattle.io -o yaml > \
  dp-infra/monitoring/crds/helmchart.yaml

Rejected because:

  • CRD tied to cluster’s K3s version

  • Must re-extract after every K3s upgrade

  • No upstream versioning

  • Maintenance burden for 5 resources

  • Type safety benefit minimal for simple spec

Alternative 2: Fork K3s and Publish CRD

Approach: Maintain our own versioned HelmChart CRD releases

Rejected because:

  • Overkill for 5 resources

  • Becomes stale vs upstream K3s

  • Additional repository to maintain

  • No community benefit (K3s-specific)

Alternative 3: Switch to cdk8s Helm

Approach: Use import { Helm } from 'cdk8s' for all charts

Rejected because:

  • 25,000+ line manifests (unmaintainable)

  • Unreadable git history

  • Loss of Helm functionality

  • Slower ArgoCD syncs

  • See comparison matrix above for full reasons

Guidelines for Developers

When to use K3s HelmChart:

Use HelmChart CRD when:

  • Deploying complex Helm charts (Prometheus, Loki, CNPG, etc.)

  • Chart has many resources (100+ manifests)

  • Chart changes frequently (version bumps common)

  • You need Helm rollback capability

  • You want clean git history

When to use cdk8s Helm:

Avoid cdk8s Helm unless:

  • Not using K3s (other Kubernetes distributions)

  • Chart is tiny (<10 resources)

  • Chart is completely static (never updated)

  • You need to modify chart resources programmatically

Recommendation: In K3s environments, always prefer HelmChart CRD.
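
For the programmatic-modification case above, cdk8s Helm exposes every rendered manifest for patching; a minimal sketch, assuming the Helm construct's apiObjects property:

import { Helm } from 'cdk8s';

const prometheus = new Helm(this, 'prometheus', {
  chart: 'prometheus-community/kube-prometheus-stack',
  version: '67.6.1',
});

// Each rendered manifest is an ApiObject that can be patched in code.
for (const obj of prometheus.apiObjects) {
  obj.metadata.addLabel('team', 'platform');
}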

How to deploy a new Helm chart:

  1. Create construct in appropriate directory:

    // dp-infra/monitoring/charts/constructs/my-chart.ts
    import { Construct } from 'constructs';
    import { ApiObject } from 'cdk8s';
    
    export class MyChartConstruct extends Construct {
      constructor(scope: Construct, id: string) {
        super(scope, id);
    
        new ApiObject(this, 'my-chart', {
          apiVersion: 'helm.cattle.io/v1',
          kind: 'HelmChart',
          metadata: {
            name: 'my-chart',
            namespace: 'my-namespace',
          },
          spec: {
            repo: 'https://charts.example.com',
            chart: 'my-chart',
            version: '1.0.0',
            targetNamespace: 'my-namespace',
            valuesContent: `
              key: value
            `,
          },
        });
      }
    }
    
  2. Build and verify:

    npm run build
    git diff  # Should show clean, small changes
    
  3. Commit and deploy:

    git add .
    git commit -m "Add my-chart Helm deployment"
    git push
    # ArgoCD syncs automatically
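
Optionally, a snapshot test can lock down the synthesized output; a minimal sketch using cdk8s's Testing helpers with Jest (file path is illustrative):

    // dp-infra/monitoring/charts/constructs/my-chart.test.ts (illustrative)
    import { Testing } from 'cdk8s';
    import { MyChartConstruct } from './my-chart';

    test('my-chart synthesizes a stable HelmChart resource', () => {
      const chart = Testing.chart();
      new MyChartConstruct(chart, 'my-chart');
      // Output stays small (one HelmChart resource), so snapshots remain reviewable.
      expect(Testing.synth(chart)).toMatchSnapshot();
    });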
    

References