How I set up automated cloud backups for my homelab Kubernetes cluster using MinIO and Scaleway, while avoiding US and German cloud providers.
The Problem
I run a Kubernetes homelab with PostgreSQL and ImmuDB databases. Daily backups run via CronJobs and store compressed dumps in MinIO (self-hosted S3-compatible storage). But what happens if my server dies? All my backups would be gone.
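Each backup job boils down to a database dump piped through gzip and copied into a MinIO bucket. Roughly, for PostgreSQL (the host and database names here are illustrative, not my actual setup):
# Inside the nightly CronJob container: dump, compress, upload to MinIO
pg_dump -h postgres.databases.svc.cluster.local -U postgres mydb \
  | gzip > /tmp/postgresql-backup-$(date +%Y%m%d-%H%M%S).sql.gz
mc cp /tmp/postgresql-backup-*.sql.gz minio/postgres-backups/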
I needed offsite cloud backup, but wanted to avoid:
- US tech companies (AWS, Google Cloud, Azure, Backblaze)
- German providers (Hetzner)
Choosing a Provider
After researching European S3-compatible providers, I narrowed it down to:
| Provider | Country | Storage/GB/mo | Egress | Notes |
|---|---|---|---|---|
| Scaleway | France | €0.012 | ~€0.01/GB | Best docs, most AWS-like |
| Infomaniak | Switzerland | €0.01 | 10TB free | Strongest privacy laws |
| OVHcloud 3-AZ | France | €0.014 | Free | Best for frequent restores |
I chose Scaleway because:
- Excellent documentation and developer experience
- Broad service offering (Kubernetes, databases, serverless) for future projects
- 75GB free for 90 days
- Based in France with EU data sovereignty
Architecture
┌─────────────────────────────────────────────────────────────┐
│ Kubernetes Cluster │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ PostgreSQL │ │ ImmuDB │ │ Loki │ │
│ │ Backup │ │ Backup │ │ Logs │ │
│ │ CronJob │ │ CronJob │ │ │ │
│ │ (2 AM) │ │ (3 AM) │ │ │ │
│ └──────┬───────┘ └──────┬───────┘ └──────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ MinIO │ │
│ │ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ postgres- │ │ immudb- │ │ │
│ │ │ backups │ │ backups │ │ │
│ │ └─────────────┘ └─────────────┘ │ │
│ └──────────────────┬──────────────────┘ │
│ │ │
│ ┌──────────────────┴──────────────────┐ │
│ │ Scaleway Sync CronJob │ │
│ │ (4 AM) │ │
│ └──────────────────┬──────────────────┘ │
│ │ │
└─────────────────────┼───────────────────────────────────────┘
│
▼ mc mirror
┌─────────────────────────────────────────────────────────────┐
│ Scaleway Object Storage │
│ (fr-par) │
│ ┌─────────────────────────────────────┐ │
│ │ minoko-backups │ │
│ │ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ postgres/ │ │ immudb/ │ │ │
│ │ └─────────────┘ └─────────────┘ │ │
│ └─────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Setup Guide
Step 1: Create Scaleway Account and Bucket
- Sign up at console.scaleway.com
- Go to Object Storage → Create bucket
  - Name: your-backups (must be globally unique)
  - Region: fr-par (Paris) or nl-ams (Amsterdam)
  - Visibility: Private
- Activate the free trial (75GB for 90 days)
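If you prefer the terminal, the bucket can also be created with the MinIO client once the scaleway alias from Step 4 is configured:
# Create the bucket in the Paris region via mc (alternative to the console)
mc mb --region fr-par scaleway/your-backups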
Step 2: Create Scoped API Credentials
Don’t use your personal API key - create a dedicated IAM application with minimal permissions:
IAM → Applications → Create Application
- Name: k8s-backup-sync

IAM → Policies → Create Policy
- Name: object-storage-backup
- Scope: Your project
- Rules: ObjectStorageFullAccess
Attach policy to application
- Go to application → Policies → Attach
Generate API key
- Application → API Keys → Generate
- Save the Access Key and Secret Key
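Before wiring the keys into Kubernetes, it’s worth a quick sanity check that they actually work against Scaleway. Using mc (installed in Step 4) with a throwaway alias:
# Verify the new credentials can list the bucket, then drop the test alias
mc alias set scw-test https://s3.fr-par.scw.cloud YOUR_ACCESS_KEY YOUR_SECRET_KEY
mc ls scw-test/your-backups
mc alias remove scw-test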
Step 3: Create Kubernetes Secret
kubectl create secret generic scaleway-s3-credentials \
--from-literal=access-key=YOUR_ACCESS_KEY \
--from-literal=secret-key=YOUR_SECRET_KEY \
-n minio
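To confirm the secret landed correctly, decode one of the keys (this prints the access key, so do it on a trusted terminal):
kubectl get secret scaleway-s3-credentials -n minio -o jsonpath='{.data.access-key}' | base64 -d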
Step 4: Enable Bucket Versioning
Versioning protects against accidental deletions and overwrites:
# Install mc (MinIO client)
brew install minio/stable/mc # macOS
# or download from https://min.io/download
# Configure aliases
mc alias set minio http://your-minio:9000 MINIO_USER MINIO_PASS
mc alias set scaleway https://s3.fr-par.scw.cloud ACCESS_KEY SECRET_KEY
# Enable versioning
mc version enable minio/postgres-backups
mc version enable minio/immudb-backups
mc version enable scaleway/your-backups
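You can confirm the setting took effect, and later inspect individual object versions once backups exist:
# Check versioning status and list object versions
mc version info scaleway/your-backups
mc ls --versions scaleway/your-backups/postgres/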
Step 5: Initial Sync
Sync existing backups to Scaleway:
mc mirror minio/postgres-backups scaleway/your-backups/postgres/
mc mirror minio/immudb-backups scaleway/your-backups/immudb/
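To check that everything made it across, mc diff lists objects that differ between two targets - empty output means both sides match:
mc diff minio/postgres-backups scaleway/your-backups/postgres
mc diff minio/immudb-backups scaleway/your-backups/immudb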
Step 6: Create Sync CronJob
Create scaleway-sync-cronjob.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
name: scaleway-backup-sync
namespace: minio
spec:
schedule: "0 4 * * *" # Daily at 4 AM (after local backups)
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 3
jobTemplate:
spec:
ttlSecondsAfterFinished: 86400
template:
spec:
restartPolicy: OnFailure
containers:
- name: sync
image: minio/mc:latest
command:
- /bin/sh
- -c
- |
set -e
echo "Configuring MinIO client..."
mc alias set minio http://minio.minio.svc.cluster.local:9000 "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
mc alias set scaleway https://s3.fr-par.scw.cloud "$SCALEWAY_ACCESS_KEY" "$SCALEWAY_SECRET_KEY"
echo "Syncing postgres backups to Scaleway..."
mc mirror --overwrite minio/postgres-backups scaleway/your-backups/postgres/
echo "Syncing immudb backups to Scaleway..."
mc mirror --overwrite minio/immudb-backups scaleway/your-backups/immudb/
echo "Cleaning up old backups on Scaleway (older than 14 days)..."
mc rm --older-than 14d --recursive --force scaleway/your-backups/postgres/ || true
mc rm --older-than 14d --recursive --force scaleway/your-backups/immudb/ || true
echo "Current backups on Scaleway:"
mc ls scaleway/your-backups/postgres/
mc ls scaleway/your-backups/immudb/
echo "Sync completed successfully!"
env:
- name: MC_CONFIG_DIR
value: /tmp/.mc
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-credentials
key: rootUser
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-credentials
key: rootPassword
- name: SCALEWAY_ACCESS_KEY
valueFrom:
secretKeyRef:
name: scaleway-s3-credentials
key: access-key
- name: SCALEWAY_SECRET_KEY
valueFrom:
secretKeyRef:
name: scaleway-s3-credentials
key: secret-key
resources:
requests:
memory: 128Mi
cpu: 50m
limits:
memory: 256Mi
cpu: 200m
Apply it:
kubectl apply -f scaleway-sync-cronjob.yaml
Step 7: Test the Sync
# Trigger a manual sync
kubectl create job sync-test --from=cronjob/scaleway-backup-sync -n minio
# Watch the logs
kubectl logs -f job/sync-test -n minio
# Verify backups on Scaleway
mc ls scaleway/your-backups/postgres/
mc ls scaleway/your-backups/immudb/
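Once the logs look good, clean up the test job:
kubectl delete job sync-test -n minio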
Why mc mirror Instead of Native Replication?
I initially tried MinIO’s native bucket replication, but it failed with Scaleway:
mc: <ERROR> unable to configure remote target. Remote service connection error
(Remote service endpoint not available. Health check timed out after 3 seconds).
MinIO’s native replication requires specific health check APIs that S3-compatible providers don’t always implement. mc mirror is more universal and works with any S3-compatible storage.
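For reference, the failed attempt looked roughly like this (syntax for recent mc releases; older ones split it into mc admin bucket remote add plus mc replicate add):
# Native bucket replication setup that timed out against Scaleway
mc replicate add minio/postgres-backups \
  --remote-bucket 'https://ACCESS_KEY:SECRET_KEY@s3.fr-par.scw.cloud/your-backups'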
For daily database backups, scheduled mirroring is actually a better fit than real-time replication anyway - it’s simpler, easier to debug, and cheaper in API calls.
Retention Strategy
| Location | Retention | Reason |
|---|---|---|
| Local MinIO | 7 days | Quick restores, limited disk space |
| Scaleway | 14 days | Disaster recovery, cheap storage |
The sync job runs mc rm --older-than 14d to automatically clean up old backups on Scaleway.
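If your backup CronJobs don’t already prune the local buckets, the 7-day local retention could be enforced with a bucket lifecycle rule instead of a script. A sketch, assuming a recent mc (older releases used mc ilm add --expiry-days):
# Expire local backup objects after 7 days
mc ilm rule add --expire-days 7 minio/postgres-backups
mc ilm rule add --expire-days 7 minio/immudb-backups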
Restoring from Scaleway
If disaster strikes:
# List available backups
mc ls scaleway/your-backups/postgres/
# Download a specific backup
mc cp scaleway/your-backups/postgres/postgresql-backup-20251227-020023.sql.gz /tmp/
# Or restore entire bucket to MinIO
mc mirror scaleway/your-backups/postgres/ minio/postgres-backups/
# Restore PostgreSQL
gunzip -c /tmp/postgresql-backup-20251227-020023.sql.gz | psql -h localhost -U postgres
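Because versioning was enabled in Step 4, an overwritten or deleted backup can usually still be recovered from an older object version. A sketch, assuming a recent mc:
# List all versions of the backups, then fetch a specific one by ID
mc ls --versions scaleway/your-backups/postgres/
mc cp --version-id VERSION_ID scaleway/your-backups/postgres/postgresql-backup-20251227-020023.sql.gz /tmp/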
Cost
With ~100KB of daily backups and 14-day retention:
| Item | Cost |
|---|---|
| Storage (~1.5MB) | ~€0.00002/month |
| API calls | Free tier |
| Egress (restores) | ~€0.01/GB when needed |
Effectively free for small homelab backups.
Monitoring
Check CronJob status:
# View recent jobs
kubectl get jobs -n minio | grep scaleway
# Check CronJob schedule
kubectl get cronjob scaleway-backup-sync -n minio
# View logs from the most recent run
kubectl logs job/$(kubectl get jobs -n minio --sort-by=.metadata.creationTimestamp -o name | grep scaleway | tail -1 | cut -d/ -f2) -n minio
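To turn this into an actual alert (e.g. for an external healthcheck), a small script can fail when the CronJob hasn’t succeeded within the last day. A sketch assuming GNU date and a Kubernetes version that populates .status.lastSuccessfulTime (1.21+):
# Exit non-zero if the last successful sync is older than ~25 hours
last=$(kubectl get cronjob scaleway-backup-sync -n minio -o jsonpath='{.status.lastSuccessfulTime}')
age=$(( $(date -u +%s) - $(date -u -d "$last" +%s) ))
[ "$age" -lt 90000 ] || { echo "backup sync stale (last success: $last)"; exit 1; }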
Conclusion
For under €1/month (realistically free for small backups), I now have:
- Automated daily offsite backups
- 14-day point-in-time recovery
- Data stored in France under EU jurisdiction
- Scoped credentials with minimal permissions
- Simple mc mirror-based sync that just works
The whole setup took about 30 minutes and gives me peace of mind that my homelab data survives even total hardware failure.