How to Migrate from Docker Swarm to Kubernetes

This guide documents the migration process from Docker Swarm to Kubernetes, based on the successful nextcloudaffenstall migration completed in January 2026.

Prerequisites

  • Docker Swarm instance running Nextcloud

  • Access to source host (SSH)

  • Kubernetes cluster deployed and accessible

  • ArgoCD configured

  • S3 object storage credentials

  • pgloader installed for database migration

Migration Overview

Phase   Component                  Downtime   Duration
-----   -------------------------  --------   ---------
1       Deploy K8s infrastructure  No         5-10 min
2       Enable maintenance mode    Yes        < 1 min
3       Migrate database           Yes        10-30 min
4       Migrate files to S3        No         1-3 hours
5       Configure K8s Nextcloud    Yes        10-20 min
6       DNS cutover                Yes        5-10 min

Total Downtime: ~45-60 minutes (phases 2, 3, 5, 6)

Phase 1: Deploy K8s Infrastructure

1.1 Prepare Configuration

Create config.yaml for the new instance:

namespace: nextcloudaffenstall
domain: affenstall.cloud

versions:
  nextcloud: "31.0.13"  # Match Swarm version
  postgres: "16"
  redis: "7.4"
  collabora: "25.04.8.2.1"
  whiteboard: "latest"

s3:
  endpoint: https://fsn1.your-objectstorage.com
  region: fsn1
  buckets:
    data: data-nextcloudaffenstall-kup6s
    backups: backups-nextcloudaffenstall-kup6s

storage:
  storageClass: longhorn
  postgresSize: 10Gi
  redisSize: 5Gi

resources:
  nextcloud:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 2Gi
  postgres:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
  redis:
    requests:
      cpu: 50m
      memory: 128Mi
    limits:
      cpu: 200m
      memory: 256Mi

replicas:
  nextcloud: 1  # RWO volume limitation
  postgres: 2
  redis: 1

collabora:
  enabled: true
  domain: collabora.affenstall.cloud
  resources:
    requests:
      cpu: 100m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 1Gi

whiteboard:
  enabled: true
  resources:
    requests:
      cpu: 50m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

1.2 Generate Manifests

cd dp-kup/internal/nextcloud/nextcloudaffenstall
npm install
npm run synth

Output: manifests/nextcloudaffenstall.k8s.yaml
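
Optionally, the rendered manifest can be sanity-checked with a client-side dry run before handing it to ArgoCD. This is a hedged check; it assumes kubectl access to the cluster and that the CRDs used in the manifest (CNPG, Crossplane, ESO) are already installed so kubectl recognizes the custom resources:

kubectl apply --dry-run=client -f manifests/nextcloudaffenstall.k8s.yaml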

1.3 Create ArgoCD Application

cd argoapps
# Generate ArgoCD app definition
npm run synth
kubectl apply -f dist/nextcloud-nextcloudaffenstall.k8s.yaml

1.4 Monitor Deployment

# Watch ArgoCD sync
kubectl get application -n argocd nextcloud-nextcloudaffenstall -w

# Check pod status
kubectl get pods -n nextcloudaffenstall -w

Expected deployment order (sync waves; see the annotation check after this list):

  1. Wave 0: Namespace created

  2. Wave 1: S3 buckets (Crossplane) + credentials (ESO) - ~60 seconds

  3. Wave 2: PostgreSQL (CNPG) + Redis - ~3-5 minutes

  4. Wave 3: PgBouncer pooler

  5. Wave 4: Nextcloud pods (expected to fail initially until the database is migrated)
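
The wave ordering comes from the argocd.argoproj.io/sync-wave annotation on each rendered resource. A quick, hedged way to see which wave a resource was assigned to is to grep the manifest generated in step 1.2:

grep -n 'argocd.argoproj.io/sync-wave' manifests/nextcloudaffenstall.k8s.yaml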

Phase 2: Prepare Swarm Instance

2.1 Enable Maintenance Mode

# SSH to Swarm host
ssh groot

# Enable maintenance mode
docker exec nextcloudaffenstall_app php occ maintenance:mode --on

User Experience: Users see the “Nextcloud is in maintenance mode” message.
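
To confirm the mode is active before exporting anything, occ reports the current state (same container name as above):

docker exec nextcloudaffenstall_app php occ maintenance:mode
# → Maintenance mode is currently enabled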

2.2 Export Database

# Check database type
docker exec nextcloudaffenstall_db mysql --version
# → mysql  Ver 15.1 Distrib 10.x.x-MariaDB

# Export database
# (docker exec without -it cannot show an interactive password prompt,
#  so supply the password of the nextcloud database user inline)
DB_PASSWORD='<swarm_db_password>'
docker exec nextcloudaffenstall_db \
  mysqldump -u nextcloud -p"$DB_PASSWORD" nextcloud > /tmp/nextcloud_backup.sql

2.3 Check Data Size

# User data directory
du -sh /data/nextcloudaffenstall/data
# → 13G

# Database size
ls -lh /tmp/nextcloud_backup.sql
# → 145M

Phase 3: Database Migration

3.1 Setup Temporary MariaDB Container

On your local machine or migration host:

# Create temporary MariaDB container
docker run -d \
  --name nextcloud-migrate-db \
  -e MYSQL_ROOT_PASSWORD=temppassword \
  -e MYSQL_DATABASE=nextcloud \
  -e MYSQL_USER=nextcloud \
  -e MYSQL_PASSWORD=temppassword \
  -p 3307:3306 \
  mariadb:10

# Wait for MariaDB to start
sleep 30

# Import dump
docker exec -i nextcloud-migrate-db mysql -u nextcloud -ptemppassword nextcloud < /tmp/nextcloud_backup.sql

3.2 Get PostgreSQL Credentials

# Get PostgreSQL password from K8s
kubectl get secret -n nextcloudaffenstall nextcloud-postgres-app \
  -o jsonpath='{.data.password}' | base64 -d
# → MD4gOcblsV0Qhuc2FaMvoCLb

3.3 Port-Forward to PostgreSQL

# In separate terminal
kubectl port-forward -n nextcloudaffenstall svc/nextcloud-postgres-pooler 5432:5432

3.4 Run pgloader

# Install pgloader (if not already installed)
# Ubuntu/Debian:
apt install pgloader

# Run migration
pgloader \
  mysql://nextcloud:temppassword@localhost:3307/nextcloud \
  postgresql://nextcloud:MD4gOcblsV0Qhuc2FaMvoCLb@localhost:5432/nextcloud

Output:

                    table name     errors       rows      bytes      total time
---------------------------------------  ---------  ---------  ---------  --------------
                  fetch meta data          0          0                     0.234s
                   Create Schemas          0          0                     0.001s
                 Create SQL Types          0          0                     0.005s
                    Create tables          0         74                     0.125s
...
---------------------------------------  ---------  ---------  ---------  --------------
        COPY Threads Completion          0          4                     8.349s
                 Create Indexes          0         89                    23.567s
         Index Build Completion          0         89                     7.893s
                Reset Sequences          0         29                     0.098s
                   Primary Keys          0         74                     0.156s
            Create Foreign Keys          0          0                     0.000s
                Create Triggers          0          0                     0.000s
               Install Comments          0          0                     0.000s
---------------------------------------  ---------  ---------  ---------  --------------
              Total import time          ✓      17596     33.5 MB         40.423s
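
For a repeatable run (for example a rehearsal followed by the real cutover), pgloader can also read its options from a load-command file instead of taking both URLs on the command line. A hedged sketch, where <PG_PASSWORD> is a placeholder for the password retrieved in step 3.2:

cat > /tmp/nextcloud.load <<'EOF'
LOAD DATABASE
     FROM mysql://nextcloud:temppassword@localhost:3307/nextcloud
     INTO postgresql://nextcloud:<PG_PASSWORD>@localhost:5432/nextcloud
WITH include drop, create tables, create indexes, reset sequences;
EOF

pgloader /tmp/nextcloud.load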

3.5 Verify Migration

# Check row counts in key tables
kubectl exec -n nextcloudaffenstall nextcloud-postgres-1 -- \
  psql -U postgres -d nextcloud -c "
    SELECT 'users' AS table_name, COUNT(*) FROM oc_users
    UNION ALL
    SELECT 'files', COUNT(*) FROM oc_filecache
    UNION ALL
    SELECT 'shares', COUNT(*) FROM oc_share
  ;"

#  table_name | count
# ------------+-------
#  users      |     7
#  files      | 17596
#  shares     |    42

Phase 4: Migrate Files to S3

4.1 Get S3 Credentials

# From K8s secret
kubectl get secret -n nextcloudaffenstall nextcloud-s3-credentials \
  -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
# → ZEGZ9O010SIDOGCINN0N

kubectl get secret -n nextcloudaffenstall nextcloud-s3-credentials \
  -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
# → QeUvzv5IsGuWqEgghsa4e8Eo4pcOrMkGwR6laOv5

4.2 Install AWS CLI (on Swarm host)

# Ubuntu/Debian
apt install awscli

# Or using snap
snap install aws-cli --classic

4.3 Configure AWS CLI

export AWS_ACCESS_KEY_ID=ZEGZ9O010SIDOGCINN0N
export AWS_SECRET_ACCESS_KEY=QeUvzv5IsGuWqEgghsa4e8Eo4pcOrMkGwR6laOv5
export AWS_DEFAULT_REGION=fsn1

4.4 Sync Files to S3

# Dry run first (to estimate time)
aws s3 sync /data/nextcloudaffenstall/data/ \
  s3://data-nextcloudaffenstall-kup6s/ \
  --endpoint-url https://fsn1.your-objectstorage.com \
  --dryrun

# Actual sync
aws s3 sync /data/nextcloudaffenstall/data/ \
  s3://data-nextcloudaffenstall-kup6s/ \
  --endpoint-url https://fsn1.your-objectstorage.com

Duration: ~1-3 hours for 13GB (depends on bandwidth)
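
If the transfer is slow, the CLI's S3 parallelism can be raised (a hedged tweak; it changes a default that applies to all subsequent s3 commands in this profile). Re-running the same aws s3 sync command after an interruption only transfers new or changed files, so it effectively resumes.

# Raise S3 transfer parallelism (the CLI default is 10 concurrent requests)
aws configure set default.s3.max_concurrent_requests 20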

4.5 Verify Upload

# Count local files
find /data/nextcloudaffenstall/data -type f | wc -l
# → 17596

# Count S3 objects
aws s3 ls s3://data-nextcloudaffenstall-kup6s/ \
  --recursive \
  --endpoint-url https://fsn1.your-objectstorage.com | wc -l
# → 17596

4.6 Create .ncdata File

# In S3 bucket root
echo "# Nextcloud data directory" > /tmp/.ncdata
aws s3 cp /tmp/.ncdata s3://data-nextcloudaffenstall-kup6s/.ncdata \
  --endpoint-url https://fsn1.your-objectstorage.com

Phase 5: Configure K8s Nextcloud

5.1 Get Original Config

# On Swarm host
docker exec nextcloudaffenstall_app cat /var/www/html/config/config.php > /tmp/config.php.backup

5.2 Extract Critical Settings

From config.php.backup, note these values:

$CONFIG = array (
  'instanceid' => 'ocaquvjfputv',  # CRITICAL: Must match
  'passwordsalt' => 'fzvITqllbtYPZ86DM4AtgSGnzVuKwL',  # CRITICAL
  'secret' => 'gUojlYXgnfkHepYmuon3DrLjULSSNRn9xttg8tVH0xz8TyWN',  # CRITICAL
  'version' => '31.0.13.1',  # Must match or be newer
);
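
A quick way to pull these lines out of the backup without opening the whole file (a hedged helper; it assumes the standard single-quoted config.php formatting):

grep -E "'(instanceid|passwordsalt|secret|version)'" /tmp/config.php.backup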

5.3 Update K8s Config

# Copy config to pod
kubectl exec -i -n nextcloudaffenstall deploy/nextcloud -- bash -c "cat > /var/www/html/config/config-backup.php" < /tmp/config.php.backup

# Set critical values via occ
kubectl exec -n nextcloudaffenstall deploy/nextcloud -- php occ config:system:set instanceid --value='ocaquvjfputv'

kubectl exec -n nextcloudaffenstall deploy/nextcloud -- php occ config:system:set passwordsalt --value='fzvITqllbtYPZ86DM4AtgSGnzVuKwL'

kubectl exec -n nextcloudaffenstall deploy/nextcloud -- php occ config:system:set secret --value='gUojlYXgnfkHepYmuon3DrLjULSSNRn9xttg8tVH0xz8TyWN'

kubectl exec -n nextcloudaffenstall deploy/nextcloud -- php occ config:system:set installed --value=true --type=boolean

5.4 Update Database Host

# Change from pooler to direct connection (if needed)
kubectl exec -n nextcloudaffenstall deploy/nextcloud -- php occ config:system:set dbhost --value='nextcloud-postgres-rw'

5.5 Disable Maintenance Mode

kubectl exec -n nextcloudaffenstall deploy/nextcloud -- php occ maintenance:mode --off

5.6 Scan Files

# Update database with S3 files
kubectl exec -n nextcloudaffenstall deploy/nextcloud -- php occ files:scan --all

Phase 6: DNS Cutover

6.1 Update Nextcloud Domain Config

Update config.yaml:

domain: affenstall.cloud  # Change from staging to production

Regenerate and apply:

cd dp-kup/internal/nextcloud/nextcloudaffenstall
npm run synth
git add manifests/
git commit -m "feat: switch to production domain affenstall.cloud"
git push

6.2 Update DNS Records

In your DNS provider:

affenstall.cloud.           A     167.233.14.203  (Traefik LoadBalancer IP)
collabora.affenstall.cloud. A     167.233.14.203

TTL: Set to 300 seconds (5 minutes) during the migration to allow a quick rollback.
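
To see how long resolvers may still cache the old record, the current TTL can be checked before the cutover (requires dig; the TTL is the second column of the answer):

dig +noall +answer affenstall.cloud A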

6.3 Wait for Propagation

# Check DNS resolution
host affenstall.cloud
# → affenstall.cloud has address 167.233.14.203

# Check TLS certificate
curl -I https://affenstall.cloud/
# → HTTP/2 302 (redirect to login)

6.4 Test Site

  1. Open https://affenstall.cloud/ in browser

  2. Clear browser cache (Ctrl+Shift+R)

  3. Log in with user credentials

  4. Verify file access

  5. Test Collabora document editing

  6. Test whiteboard
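
Parts of this checklist can also be spot-checked from the command line. A hedged sketch, where <user> and <app-password> are placeholders for a test account:

# Instance status and version as JSON
curl -s https://affenstall.cloud/status.php | jq

# WebDAV reachability; 207 (Multi-Status) means the DAV endpoint is answering
curl -s -u '<user>:<app-password>' -X PROPFIND -H 'Depth: 0' \
  -o /dev/null -w '%{http_code}\n' \
  "https://affenstall.cloud/remote.php/dav/files/<user>/"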

Phase 7: Cleanup

7.1 Stop Swarm Instance

# On Swarm host
docker stack rm nextcloudaffenstall

# OR keep for rollback (7-day observation):
docker exec nextcloudaffenstall_app php occ maintenance:mode --on
docker-compose -f nextcloudaffenstall.yml stop

Recommendation: Keep the Swarm data for 7 days in case a rollback is needed.

7.2 Remove Temporary Resources

# Remove MariaDB container
docker rm -f nextcloud-migrate-db

# Clean up dumps
rm /tmp/nextcloud_backup.sql

Troubleshooting

Issue: 404 Errors After DNS Cutover

Symptom: Site returns 404 despite DNS resolving correctly.

Cause: Traefik middleware annotations referencing non-existent middlewares.

Solution:

# Remove middleware annotation
kubectl annotate ingress -n nextcloudaffenstall nextcloud \
  "traefik.ingress.kubernetes.io/router.middlewares-"

Issue: 503 “No Available Server”

Symptom: Intermittent 503 errors.

Cause: Duplicate Ingress resources in different namespaces.

Solution:

# Find conflicting Ingresses
kubectl get ingress -A | grep affenstall.cloud

# Delete old Ingress
kubectl delete ingress nextcloud -n default

Issue: Internal Server Error After Login

Symptom: “Internal Server Error” when accessing files.

Cause: S3 credentials missing or empty in config.

Solution:

# Verify S3 credentials in config
kubectl exec -n nextcloudaffenstall deploy/nextcloud -- \
  php occ config:list system --private | jq '.system.objectstore.arguments'

# Should show:
# {
#   "key": "ZEGZ9O010SIDOGCINN0N",
#   "secret": "QeUvzv5IsGuWqEgghsa4e8Eo4pcOrMkGwR6laOv5"
# }

# If empty, fix Helm chart secretKeys configuration and redeploy

Issue: Files Not Appearing

Symptom: Users see “No files” despite a successful S3 upload.

Cause: The file scan was not run after the S3 migration.

Solution:

kubectl exec -n nextcloudaffenstall deploy/nextcloud -- \
  php occ files:scan --all

Rollback Procedure

If the migration fails and a rollback is needed:

1. Re-enable Swarm Instance

# On Swarm host
docker-compose -f nextcloudaffenstall.yml start
docker exec nextcloudaffenstall_app php occ maintenance:mode --off

2. Revert DNS

affenstall.cloud. A {old_swarm_host_ip}

3. Delete K8s Resources

kubectl delete -f argoapps/dist/nextcloud-nextcloudaffenstall.k8s.yaml

Note: S3 buckets preserved (deletion policy: Orphan).

Post-Migration Checklist

  • [ ] All users can log in

  • [ ] File upload/download works

  • [ ] Collabora document editing works

  • [ ] Whiteboard functions correctly

  • [ ] Mobile app sync works

  • [ ] WebDAV clients connect successfully

  • [ ] Email notifications sending

  • [ ] Background cron jobs running

  • [ ] Backups to S3 working (check CNPG)

  • [ ] Monitoring/metrics collecting

  • [ ] Old Swarm instance stopped (keep data 7 days)
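
A couple of the items above can be verified from the CLI as well (a hedged sketch; resource names follow the Phase 1 configuration):

# Background cron: timestamp (epoch seconds) of the last completed run
kubectl exec -n nextcloudaffenstall deploy/nextcloud -- \
  php occ config:app:get core lastcron

# CNPG backups to S3 (assumes a ScheduledBackup exists for the cluster)
kubectl get backups.postgresql.cnpg.io -n nextcloudaffenstall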

Lessons Learned

What Went Well

  • pgloader handled MariaDB→PostgreSQL migration seamlessly

  • S3 sync was straightforward with AWS CLI

  • ArgoCD automated deployment

  • Downtime under 1 hour

Challenges

  • Traefik middleware annotations caused 404s (removed)

  • Duplicate Ingress caused 503s (deleted conflicting resource)

  • S3 credentials not injected correctly (fixed Helm chart)

  • RWO volume limited to 1 replica (architectural constraint)

Improvements for Next Migration

  • Pre-test middleware configuration

  • Validate Ingress uniqueness before DNS cutover

  • Test S3 credential injection in staging

  • Document RWO vs RWX trade-offs upfront

  • Use staging domain for testing before production cutover