# Crossplane S3 ProviderConfig

Complete reference documentation for the Crossplane AWS S3 ProviderConfig configured for Hetzner Object Storage.
## Overview

The `hetzner-s3` ProviderConfig enables Crossplane's Upbound AWS S3 provider to work with Hetzner Object Storage (S3-compatible storage). This configuration is infrastructure-managed via OpenTofu and deployed through the kube-hetzner module.

**Infrastructure Location:**

- Template: `kube-hetzner/extra-manifests/50-B-crossplane-provider-s3.yaml.tpl`
- Applied by: OpenTofu during `tofu apply`
- Namespace: `crossplane-system`
## Complete Configuration

```yaml
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: hetzner-s3
  namespace: crossplane-system
spec:
  credentials:
    source: Secret
    secretRef:
      name: hetzner-s3-creds
      namespace: crossplane-system
      key: credentials
  # AWS-specific validations that won't work with Hetzner
  skip_credentials_validation: true
  skip_metadata_api_check: true
  skip_requesting_account_id: true
  skip_region_validation: true  # Allow Hetzner regions (fsn1, nbg1, hel1)
  # Enable path-style S3 access (required for S3-compatible storage)
  s3_use_path_style: true
  # Custom endpoint for Hetzner Object Storage (production region)
  endpoint:
    hostnameImmutable: true  # Required for S3-compatible storage
    partitionId: aws  # Force AWS partition to bypass validation
    services:
      - s3  # CRITICAL: apply the custom endpoint to the S3 service
    url:
      type: Static
      static: https://fsn1.your-objectstorage.com
```
## Configuration Fields

### Credentials

```yaml
credentials:
  source: Secret
  secretRef:
    name: hetzner-s3-creds
    namespace: crossplane-system
    key: credentials
```
**Description:** References the Kubernetes Secret containing S3 access credentials.

**Secret Format (INI):**

```ini
[default]
aws_access_key_id = <HETZNER_S3_ACCESS_KEY>
aws_secret_access_key = <HETZNER_S3_SECRET_KEY>
```

**Managed By:** OpenTofu (created from the `.env` variables `TF_VAR_hetzner_s3_access_key` and `TF_VAR_hetzner_s3_secret_key`)

**Important:**

- Must use INI format (not JSON)
- The same credentials are used for etcd backups and Loki
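For orientation, the Secret that OpenTofu renders from those variables looks roughly like this (a sketch; the actual manifest is generated by the template, and the `stringData` values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hetzner-s3-creds
  namespace: crossplane-system
type: Opaque
stringData:
  credentials: |
    [default]
    aws_access_key_id = <HETZNER_S3_ACCESS_KEY>
    aws_secret_access_key = <HETZNER_S3_SECRET_KEY>
```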
## Skip Flags

### skip_credentials_validation

```yaml
skip_credentials_validation: true
```

- **Required:** Yes
- **Default:** `false`
- **Purpose:** Bypasses AWS STS credential validation, which does not work with Hetzner S3
- **Why Needed:** Hetzner doesn't implement the AWS STS `GetCallerIdentity` API

### skip_metadata_api_check

```yaml
skip_metadata_api_check: true
```

- **Required:** Yes
- **Default:** `false`
- **Purpose:** Skips the AWS EC2 Metadata API verification
- **Why Needed:** Kubernetes pods don't have access to the AWS metadata service

### skip_requesting_account_id

```yaml
skip_requesting_account_id: true
```

- **Required:** Yes
- **Default:** `false`
- **Purpose:** Prevents the provider from requesting an AWS account ID
- **Why Needed:** The account ID concept doesn't exist in Hetzner S3

### skip_region_validation

```yaml
skip_region_validation: true
```

- **Required:** Yes
- **Default:** `false`
- **Purpose:** Allows Hetzner region codes (`fsn1`, `nbg1`, `hel1`) instead of AWS regions
- **Why Needed:** Hetzner uses custom region codes, not AWS region naming
## S3 Configuration

### s3_use_path_style

```yaml
s3_use_path_style: true
```

- **Required:** Yes
- **Default:** `false`
- **Purpose:** Enables path-style S3 access (`https://endpoint/bucket/key` instead of `https://bucket.endpoint/key`)
- **Why Needed:** Required for S3-compatible storage services
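To make the difference concrete, a small shell sketch of the two URL shapes (the bucket and key names are illustrative):

```shell
# Build path-style vs virtual-hosted-style URLs for the same object
# (endpoint is the document's production endpoint; bucket/key are examples).
endpoint="fsn1.your-objectstorage.com"
bucket="backup-etcd-kup6s"
key="snapshot.db"
path_style="https://${endpoint}/${bucket}/${key}"
virtual_style="https://${bucket}.${endpoint}/${key}"
echo "path-style:           ${path_style}"
echo "virtual-hosted-style: ${virtual_style}"
```

With `s3_use_path_style: true`, the provider always issues requests of the first form, so the endpoint hostname never changes per bucket.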
## Endpoint Configuration

### hostnameImmutable

```yaml
endpoint:
  hostnameImmutable: true
```

- **Required:** Yes
- **Default:** `false`
- **Purpose:** Prevents the AWS SDK from modifying the endpoint hostname based on the bucket name
- **Why Needed:** S3-compatible storage doesn't support virtual-hosted-style automatic hostname modification
- **Effect:** The endpoint stays exactly as specified
### partitionId

```yaml
endpoint:
  partitionId: aws
```

- **Required:** Yes
- **Default:** Auto-detected
- **Purpose:** Forces the AWS partition to bypass partition validation
- **Why Needed:** The provider validates the partition based on the region; custom regions would fail without this
- **Valid Values:** `aws`, `aws-cn`, `aws-us-gov`
### services

```yaml
endpoint:
  services:
    - s3
```

- **Required:** Yes (CRITICAL)
- **Default:** Empty (all services)
- **Purpose:** Specifies which AWS services should use the custom endpoint
- **Why Needed:** Without this, the provider ignores the custom endpoint for S3 and tries to connect to `s3.{region}.amazonaws.com`
- **Critical:** This is the most important field; without it, Crossplane will fail to connect
### url

```yaml
endpoint:
  url:
    type: Static
    static: https://fsn1.your-objectstorage.com
```

- **Required:** Yes
- **Type:** `Static` (fixed URL) or `Dynamic` (constructed from variables)
- **Format:** Must include the `https://` protocol prefix

**Regional Endpoints:**

- `https://fsn1.your-objectstorage.com` - Falkenstein (production)
- `https://nbg1.your-objectstorage.com` - Nürnberg
- `https://hel1.your-objectstorage.com` - Helsinki (DR)

**Important:**

- The endpoint URL is stored in the `.env` file variable `TF_VAR_production_s3_endpoint`
- It already includes the `https://` prefix - don't add it again in the template
## Supported Regions

Hetzner Object Storage regions that can be used in Bucket resources:

| Region Code | Location | Purpose | Endpoint |
|---|---|---|---|
| `fsn1` | Falkenstein, Germany | Production (default) | `https://fsn1.your-objectstorage.com` |
| `nbg1` | Nürnberg, Germany | Production alternative | `https://nbg1.your-objectstorage.com` |
| `hel1` | Helsinki, Finland | Disaster Recovery | `https://hel1.your-objectstorage.com` |

**Recommendation:** Use `fsn1` for production workloads (same region as the cluster for lowest latency).
## Bucket Naming Convention

**CRITICAL:** Hetzner S3 bucket names are globally unique across all Hetzner customers (like AWS S3). You cannot use generic names like "backup", "logs", or "artifacts" - they will fail with a `BucketAlreadyExists` error because other customers have already claimed them.

**Naming Schema:** `LOCALPART-NAMESPACE-kup6s`

**Components:**

- `LOCALPART`: Descriptive purpose (e.g., "gitlab-artifacts", "uploads", "backups", "logs-loki")
- `NAMESPACE`: Kubernetes namespace (optional for infrastructure buckets)
- `kup6s`: Project identifier suffix (mandatory)

**Examples:**

```yaml
# Application bucket
metadata:
  name: gitlab-artifacts-gitlabbda-kup6s

# Infrastructure bucket
metadata:
  name: backup-etcd-kup6s

# Another application bucket
metadata:
  name: uploads-myapp-production-kup6s
```

**Rule:** Always suffix bucket names with `-kup6s` to ensure global uniqueness and avoid collisions.

**Error Example:**

```text
Error: creating S3 Bucket (my-bucket): BucketAlreadyExists:
The requested bucket name is not available. The bucket namespace is shared by
all users of the system. Please select a different name and try again.
```
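The suffix rule can be checked before applying a manifest; a minimal shell sketch (the helper name and regex are assumptions derived from the schema above, not an existing project script):

```shell
# valid_bucket_name: succeed only if the name matches the
# LOCALPART[-NAMESPACE]-kup6s schema (lowercase letters, digits, hyphens).
valid_bucket_name() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9-]*-kup6s$'
}

valid_bucket_name "gitlab-artifacts-gitlabbda-kup6s" && echo "ok"
valid_bucket_name "backup" || echo "rejected: missing -kup6s suffix"
```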
## Known Limitations

### Tagging Not Supported

**Issue:** Hetzner S3 doesn't implement the `PutBucketTagging` API (returns HTTP 501 Not Implemented).

**Solution:** Use `managementPolicies` to skip Update operations:

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: data-myapp-kup6s  # Following naming convention: LOCALPART-NAMESPACE-kup6s
spec:
  deletionPolicy: Delete
  managementPolicies:
    - Observe  # Monitor bucket state
    - Create   # Create bucket
    - Delete   # Delete bucket
    # Skip Update/LateInitialize to avoid tagging operations
  forProvider:
    region: fsn1
  providerConfigRef:
    name: hetzner-s3
```
**Result with managementPolicies:**

- Status: `SYNCED=True`, `READY=True` (clean status)
- No tagging errors
- Bucket fully functional

**Result WITHOUT managementPolicies:**

- Status: `SYNCED=False`, `READY=True`
- Error: `operation error S3: PutBucketTagging, StatusCode: 501, NotImplemented`
- Bucket functional but status unclean
**Fix existing buckets:**

```bash
kubectl patch bucket.s3.aws.upbound.io <bucket-name> \
  -p '{"spec":{"managementPolicies":["Observe","Create","Delete"]}}' --type=merge
```
### AWS-Specific Features Not Supported

The following AWS S3 features are not available in Hetzner Object Storage:

- Bucket tagging (`PutBucketTagging`, `GetBucketTagging`)
- Server-side encryption configuration (SSE-KMS)
- Object Lock (WORM)
- S3 Lifecycle rules
- Bucket replication
- Bucket metrics
- Bucket analytics
- S3 Inventory
- S3 Select

**Supported Features:**

- Basic bucket operations (create, delete, list)
- Object operations (put, get, delete, list)
- Multipart uploads
- Bucket versioning
- CORS configuration
- Bucket policies (limited)
- Pre-signed URLs
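As an example of a supported feature, CORS can be declared alongside a bucket. A hedged sketch following the Upbound S3 provider's `BucketCorsConfiguration` resource; the bucket name and rule values are illustrative, not part of this infrastructure:

```yaml
apiVersion: s3.aws.upbound.io/v1beta1
kind: BucketCorsConfiguration
metadata:
  name: uploads-myapp-production-kup6s
spec:
  forProvider:
    region: fsn1
    bucketRef:
      name: uploads-myapp-production-kup6s  # References an existing Bucket resource
    corsRule:
      - allowedMethods: ["GET", "PUT"]
        allowedOrigins: ["https://app.example.com"]
        allowedHeaders: ["*"]
        maxAgeSeconds: 3600
  providerConfigRef:
    name: hetzner-s3
```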
## Troubleshooting

### Error: "does not belong to a known partition"

**Full Error:**

```text
cannot initialize the Terraform plugin SDK async external client:
cannot get terraform setup: cannot configure AWS partition:
managed resource region "fsn1" does not belong to a known partition
```

**Cause:** Missing or incorrect ProviderConfig fields

**Solution:** Ensure the ProviderConfig has:

- `skip_region_validation: true`
- `partitionId: aws`
### Error: "lookup s3.fsn1.amazonaws.com: no such host"

**Full Error:**

```text
request send failed, Put "https://s3.fsn1.amazonaws.com/bucket-name":
dial tcp: lookup s3.fsn1.amazonaws.com on 10.43.0.10:53: no such host
```

**Cause:** Custom endpoint not applied to the S3 service

**Solution:** Add `services: [s3]` to the endpoint configuration

**Critical:** This is the most common issue - the `services` field is required
### Error: "InvalidAccessKeyId: The AWS Access Key Id you provided does not exist"

**Cause:** Invalid or expired S3 credentials

**Solution:**

1. Verify credentials in the Hetzner Cloud Console (Object Storage → Access Keys)
2. Ensure credentials match those in the `.env` file
3. Regenerate credentials if needed
4. Run `tofu apply` to update the secret in the cluster
5. Restart the provider pod:

   ```bash
   kubectl delete pod -n crossplane-system -l pkg.crossplane.io/provider=provider-aws-s3
   ```
### Error: "static credentials are empty"

**Cause:** Secret in wrong format (JSON instead of INI)

**Solution:** The secret must use INI format:

```ini
[default]
aws_access_key_id = xxx
aws_secret_access_key = yyy
```

Not JSON format:

```json
{"aws_access_key_id":"xxx","aws_secret_access_key":"yyy"}
```
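A quick local check before feeding a credentials file to OpenTofu - a sketch; the helper name is illustrative:

```shell
# is_ini_credentials: succeed if the file begins with the INI [default] header,
# which distinguishes the required INI format from a JSON blob.
is_ini_credentials() {
  head -n 1 "$1" | grep -q '^\[default\]'
}

printf '[default]\naws_access_key_id = x\n' > /tmp/creds-check
is_ini_credentials /tmp/creds-check && echo "INI format detected"
```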
## Updating the Configuration

### Modify ProviderConfig

1. Edit the infrastructure template:

   ```bash
   nano kube-hetzner/extra-manifests/50-B-crossplane-provider-s3.yaml.tpl
   ```

2. Apply changes (use the mandatory apply script):

   ```bash
   cd kube-hetzner
   bash scripts/apply-and-configure-longhorn.sh
   ```

3. Restart the provider pod (optional, for immediate effect):

   ```bash
   kubectl delete pod -n crossplane-system -l pkg.crossplane.io/provider=provider-aws-s3
   ```
### Rotate S3 Credentials

1. Generate new credentials in the Hetzner Cloud Console:
   - Navigate to: Object Storage → Access Keys
   - Click: Create Access Key
   - Copy: Access Key ID and Secret Access Key

2. Update the `.env` file:

   ```bash
   TF_VAR_hetzner_s3_access_key="<NEW_ACCESS_KEY>"
   TF_VAR_hetzner_s3_secret_key="<NEW_SECRET_KEY>"
   ```

3. Apply changes:

   ```bash
   cd kube-hetzner
   source .env
   tofu apply
   ```

4. Restart the provider pod:

   ```bash
   kubectl delete pod -n crossplane-system -l pkg.crossplane.io/provider=provider-aws-s3
   ```

5. Verify: new buckets should create successfully
## Version Information

- **Provider:** `upbound/provider-aws-s3`
- **Version:** v2.1.1 (as of 2025-10-25)
- **API Version:** `aws.upbound.io/v1beta1`
- **Installed By:** Crossplane (infrastructure-managed)