How-To Guide
Deploy Multi-Architecture Applications¶
Target Audience: Developers deploying applications to the multi-architecture cluster
This guide explains how to deploy applications on the kup6s cluster, which supports both ARM64 (primary) and AMD64 (legacy) architectures.
Understanding the Cluster Architecture¶
The cluster has two types of worker nodes:
ARM64 Nodes (Primary - Recommended)¶
3 worker nodes: CAX31 (8 vCPU, 16GB) + 2x CAX21 (4 vCPU, 8GB)
1 database node: CAX21 (4 vCPU, 8GB) - dedicated for PostgreSQL
Untainted: Workloads schedule here by default
Cost: roughly half to a third the price of equivalent AMD64 nodes
Performance: Better multi-core performance
AMD64 Nodes (Legacy - Transition)¶
2 worker nodes: CPX31 (4 vCPU, 8GB) + CPX21 (3 vCPU, 4GB)
Tainted: `kubernetes.io/arch=amd64:NoSchedule`
Use only for: Legacy applications without ARM64 images
Cost: More expensive than ARM64
Migration goal: Move workloads to ARM64 over time
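Because of that taint, any pod that must land on AMD64 needs both a `nodeSelector` and a matching toleration in its pod spec (Pattern 2 below shows a complete Deployment):

```yaml
# Pod-spec fragment required for AMD64 placement
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
  tolerations:
    - key: kubernetes.io/arch
      operator: Equal
      value: amd64
      effect: NoSchedule
```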
Decision Tree: Which Architecture?¶
```
Do you have an ARM64 image available?
├─ YES → Deploy to ARM64 (default, no special config)
└─ NO
   ├─ Can you rebuild for ARM64? → YES → Rebuild and deploy to ARM64
   └─ NO (proprietary/legacy)
      ├─ Can you find ARM64 alternative? → YES → Use alternative on ARM64
      └─ NO → Deploy to AMD64 (requires nodeSelector + toleration)
```
Deployment Patterns¶
Pattern 1: ARM64 Application (Recommended)¶
When to use: Your image supports ARM64 or is multi-arch
No special configuration needed - workloads naturally schedule to ARM64 nodes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: modern-webapp
  namespace: websites
spec:
  replicas: 3
  selector:
    matchLabels:
      app: modern-webapp
  template:
    metadata:
      labels:
        app: modern-webapp
    spec:
      # No nodeSelector or tolerations needed!
      containers:
        - name: web
          image: myorg/webapp:latest  # ARM64 or multi-arch image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```
Verification:
```shell
# Check where pods are running
kubectl get pods -n websites -o wide
# Should show ARM64 node names: kup6s-agent-cax* (fsn1 or nbg1)
```
Pattern 2: AMD64 Application (Legacy Only)¶
When to use: Application ONLY has AMD64 images (legacy, proprietary)
Requires explicit targeting with nodeSelector and toleration:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-webapp
  namespace: websites
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-webapp
  template:
    metadata:
      labels:
        app: legacy-webapp
    spec:
      # Target AMD64 nodes explicitly
      nodeSelector:
        kubernetes.io/arch: amd64
      # Tolerate AMD64 node taint
      tolerations:
        - key: kubernetes.io/arch
          operator: Equal
          value: amd64
          effect: NoSchedule
      containers:
        - name: web
          image: myorg/legacy-webapp:amd64-only  # AMD64-only image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```
Verification:
```shell
# Check where pods are running
kubectl get pods -n websites -o wide
# Should show AMD64 node names: kup6s-agent-cpx* (e.g. kup6s-agent-cpx21-fsn1)
```
Pattern 3: Multi-Arch with ARM64 Preference¶
When to use: Image supports both, but prefer ARM64 for cost savings
Soft preference - schedules to ARM64 if available, falls back to AMD64:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flexible-webapp
  namespace: websites
spec:
  replicas: 5
  selector:
    matchLabels:
      app: flexible-webapp
  template:
    metadata:
      labels:
        app: flexible-webapp
    spec:
      # Prefer ARM64, allow AMD64 as fallback
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64  # Prefer ARM64 (cheaper)
            - weight: 50
              preference:
                matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64  # Fallback to AMD64 if ARM64 full
      # Tolerate the AMD64 arch taint (ARM64 nodes are untainted)
      tolerations:
        - key: kubernetes.io/arch
          operator: Exists
          effect: NoSchedule
      containers:
        - name: web
          image: myorg/flexible-webapp:latest  # Multi-arch manifest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```
Pattern 4: PostgreSQL on Dedicated Database Node¶
When to use: Production databases requiring isolation
Targets dedicated ARM64 database node:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: production-db
  namespace: databases
spec:
  instances: 3  # HA with replication
  # Schedule to dedicated database node and tolerate its taint
  # (CNPG nests tolerations under .spec.affinity)
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: workload
                operator: In
                values:
                  - database
    tolerations:
      - key: workload
        operator: Equal
        value: database
        effect: NoSchedule
  storage:
    storageClass: longhorn
    size: 20Gi
  postgresql:
    parameters:
      max_connections: "200"
      shared_buffers: "512MB"
      effective_cache_size: "1536MB"
  backup:
    barmanObjectStore:
      destinationPath: s3://kup6s-db-backups/production-db
      s3Credentials:
        accessKeyId:
          name: db-backup-s3
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: db-backup-s3
          key: SECRET_ACCESS_KEY
      wal:
        compression: gzip
```
Verification:
```shell
# Check database pod placement
kubectl get pods -n databases -o wide
# Should show the dedicated database node (labelled workload=database)
```
Building Multi-Architecture Images¶
Using Docker Buildx¶
Build for both architectures:
```shell
# Create buildx builder (first time only)
docker buildx create --name multiarch-builder --use

# Build and push multi-arch image
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myorg/myapp:latest \
  --push \
  .
```
Verify multi-arch manifest:
```shell
docker manifest inspect myorg/myapp:latest | grep architecture
# Should show both:
#   "architecture": "amd64"
#   "architecture": "arm64"
```
Dockerfile Best Practices¶
Architecture-agnostic Dockerfile:
```dockerfile
# Use multi-arch base images
FROM node:20-alpine

# Install dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Copy application
COPY . .

# Run application
EXPOSE 8080
CMD ["node", "server.js"]
```
Architecture-specific builds (if needed):
```dockerfile
# Build on the host's native architecture ($BUILDPLATFORM),
# then copy artifacts into the target-architecture image
FROM --platform=$BUILDPLATFORM node:20-alpine AS builder
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Build steps here...

FROM node:20-alpine
# Copy from builder...
```
Migration Strategy: AMD64 → ARM64¶
Step-by-Step Migration Process¶
For each AMD64-only application:
1. Verify ARM64 Image Availability¶
```shell
# Check if image has ARM64 support
docker manifest inspect myorg/myapp:latest
# Look for "architecture": "arm64"
```
If NO ARM64 image:
Rebuild using Docker Buildx (see above)
OR find ARM64-compatible alternative
OR keep on AMD64 temporarily
2. Create ARM64 Deployment¶
Deploy new version on ARM64 (keep AMD64 running):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-arm64  # Different name during testing
  namespace: websites
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      arch: arm64
  template:
    metadata:
      labels:
        app: myapp
        arch: arm64
    spec:
      containers:
        - name: web
          image: myorg/myapp:latest-arm64
          # ... same config as AMD64 version
```
3. Test on ARM64¶
Create separate Ingress for testing:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-arm64-test
  namespace: websites
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - myapp-arm-test.sites.kup6s.com
      secretName: myapp-arm-test-tls
  rules:
    - host: myapp-arm-test.sites.kup6s.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-arm64
                port:
                  number: 80
```
Test thoroughly:
```shell
# Load test
ab -n 10000 -c 100 https://myapp-arm-test.sites.kup6s.com/

# Functional tests
curl https://myapp-arm-test.sites.kup6s.com/health
```
4. Switch Production Traffic¶
Update main Ingress to point to ARM64 service:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-prod
  namespace: websites
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - myapp.sites.kup6s.com
      secretName: myapp-prod-tls
  rules:
    - host: myapp.sites.kup6s.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-arm64  # Changed from AMD64 service
                port:
                  number: 80
```
5. Monitor and Validate¶
Monitor for 1-2 weeks:
```shell
# Watch resource usage
kubectl top pods -n websites -l app=myapp

# Check logs for errors
kubectl logs -n websites -l app=myapp --tail=100
```

Query Loki via Grafana:

```
{namespace="websites", app="myapp"}
```
6. Decommission AMD64 Version¶
After stable operation:
```shell
# Delete AMD64 deployment
kubectl delete deployment myapp-amd64 -n websites

# Remove test Ingress
kubectl delete ingress myapp-arm64-test -n websites
```
Common Issues & Troubleshooting¶
Issue: Pods Pending on “Insufficient cpu/memory”¶
Cause: AMD64 nodes have less capacity (7 vCPU, 12GB vs ARM64 20 vCPU, 40GB)
Solution:
```shell
# Check node resource usage
kubectl top nodes
# If AMD64 nodes are full, migrate workloads to ARM64
# or reduce resource requests for AMD64 deployments
```
Issue: Pods Stuck in “Pending” with MatchNodeSelector¶
Cause: AMD64-only image deployed without nodeSelector/toleration
Solution: Add proper nodeSelector and toleration (see Pattern 2 above)
```shell
# Check why the pod is pending
kubectl describe pod <pod-name> -n <namespace>
# Look for: "0/9 nodes are available: 2 node(s) had untolerated taint"
```
Issue: Image Pull Error on ARM64¶
Cause: Image doesn’t have ARM64 variant
Solution:
```shell
# Verify image architecture support
docker manifest inspect <image-name>
# If no ARM64 variant, either:
#   1. Rebuild image for ARM64
#   2. Deploy to AMD64 nodes instead
```
Issue: Performance Difference Between Architectures¶
Cause: ARM64 and AMD64 have different performance characteristics
Solution:
ARM64: better multi-core throughput, lower single-thread performance
Adjust resource requests/limits per architecture
Profile and optimize for target architecture
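One way to apply the last two points is to keep per-architecture resource stanzas in your manifests. The figures below are illustrative placeholders, not measured values; profile your own workload first:

```yaml
# Hypothetical example: same container, tuned per architecture
# ARM64 variant - scale out across cores
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
# AMD64 variant - give single-threaded hot paths more CPU headroom
# resources:
#   requests:
#     cpu: 200m
#     memory: 256Mi
#   limits:
#     cpu: "1"
#     memory: 512Mi
```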
Resource Planning¶
Current Cluster Capacity¶
| Architecture | vCPU | RAM | Cost/Month | Use For |
|---|---|---|---|---|
| ARM64 Workers | 20 | 40GB | €25.47 | 75% of workloads |
| AMD64 Workers | 7 | 12GB | €25.98 | 25% of workloads |
| ARM64 Database | 4 | 8GB | €6.49 | PostgreSQL only |
| **Total** | **31** | **60GB** | **€57.94** | |
Recommended Allocation¶
New applications: Deploy to ARM64 by default
Legacy apps: Keep on AMD64 temporarily, plan migration
Databases: Use dedicated database node (ARM64)
Cost optimization: Migrate 1-2 apps from AMD64→ARM64 per month
Long-Term Goal (6 months)¶
Remove AMD64 nodes once all workloads migrated:
Cost: €57.94/month → €31.96/month (-45%)
Capacity: 31 vCPU → 24 vCPU ARM64 (still sufficient)
Complexity: Multi-arch → Single-arch (simpler operations)
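The savings figure follows directly from the capacity table above (prices as listed there):

```python
# Monthly costs from the capacity table (EUR)
arm64_workers = 25.47
amd64_workers = 25.98
arm64_database = 6.49

current = arm64_workers + amd64_workers + arm64_database
after = current - amd64_workers  # AMD64 workers removed

print(f"current: EUR {current:.2f}/month")    # 57.94
print(f"after:   EUR {after:.2f}/month")      # 31.96
print(f"savings: {1 - after / current:.0%}")  # 45%
```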
Reference: Node Labels and Taints¶
ARM64 Worker Nodes¶
```yaml
labels:
  kubernetes.io/arch: arm64
# No taints - default scheduling target
```
AMD64 Worker Nodes¶
```yaml
labels:
  kubernetes.io/arch: amd64
taints:
  - key: kubernetes.io/arch
    value: amd64
    effect: NoSchedule
```
Database Node (ARM64)¶
```yaml
labels:
  kubernetes.io/arch: arm64
  workload: database
taints:
  - key: workload
    value: database
    effect: NoSchedule
```
Getting Help¶
Architecture questions: See Cluster Capabilities Reference
Image building: Docker Buildx documentation
Migration planning: Contact cluster admin team
Performance tuning: Check Grafana dashboards