S3-compatible object storage on a homelab: ACLs, bucket policies, and why you need immutable backups

Encryption without object lock is a half-measure. You can encrypt every object in a MinIO bucket and still lose the lot if a misconfigured script, a compromised service account, or a runaway mc rm --recursive gets there first. Getting S3-compatible storage right on a homelab means thinking about three distinct layers: how data is encrypted at rest, how the API enforces write and delete rules, and whether the backup survives contact with the worst thing that can realistically happen.

Encryption at rest: SSE-S3 versus SSE-C and what each one actually protects

SSE-S3 and SSE-C solve different problems, and conflating them leads to gaps.

SSE-S3 encrypts each object with a data encryption key (DEK) that MinIO generates per object. That DEK is then sealed using a key encryption key (KEK) held in a Key Management System. On a single-node homelab MinIO deployment, that KMS is typically KES pointing at a local Vault instance, or the built-in minio-kes integration. If you skip the KMS entirely and rely on MinIO’s internal key store, the root keys live on the same node as the data. That arrangement protects against someone pulling the drives and walking away; it does not protect against full OS compromise.
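To make the KES wiring concrete, a minimal environment fragment for the MinIO service might look like the following; the endpoint, certificate paths, and key name are placeholders for your own setup, not values the text above prescribes:

```bash
# Sketch: point MinIO at a local KES instance. All values are assumptions —
# substitute your own hosts and certificate paths.
export MINIO_KMS_KES_ENDPOINT="https://kes.homelab.local:7373"
export MINIO_KMS_KES_CERT_FILE="/etc/minio/certs/kes-client.crt"
export MINIO_KMS_KES_KEY_FILE="/etc/minio/certs/kes-client.key"
export MINIO_KMS_KES_CAPATH="/etc/minio/certs/kes-ca.crt"
export MINIO_KMS_KES_KEY_NAME="minio-default-key"
```

With these set, MinIO seals every per-object DEK against the named KES key instead of its internal key store.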

SSE-C moves key responsibility to the client. The calling application sends the encryption key in the request header over TLS, MinIO uses it to encrypt the object, then discards it immediately. MinIO never stores the key. The consequence: if you lose the key, the object is unreadable permanently. On a homelab network where TLS termination may be misconfigured, self-signed, or handled by a reverse proxy that logs headers, SSE-C is high risk unless TLS is verified end-to-end with no inspection in the path. Check that your MinIO endpoint uses a cert your clients actually validate, not one you’ve added to a trust store and forgotten.
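To make the client-side key handling concrete, here is a sketch of generating an SSE-C key and the three headers a client must send with every request; it assumes openssl and GNU base64 are available, and nothing here contacts a server:

```bash
# Generate a random 256-bit SSE-C key and derive the request headers.
# The server never stores key_b64; lose it and the object is gone for good.
key_b64=$(openssl rand -base64 32)
key_md5=$(printf '%s' "$key_b64" | base64 -d | openssl md5 -binary | base64)
echo "x-amz-server-side-encryption-customer-algorithm: AES256"
echo "x-amz-server-side-encryption-customer-key: ${key_b64}"
echo "x-amz-server-side-encryption-customer-key-MD5: ${key_md5}"
```

Every GET on the object must resend the same key and MD5 headers, which is exactly why a header-logging reverse proxy in the path is fatal.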

The DEK lifecycle in SSE-S3 works as follows: MinIO generates a random DEK per object, encrypts it with the KEK from KES, and stores the sealed DEK alongside the object as part of its metadata. On read, MinIO requests the KEK from KES, unseals the DEK, and decrypts the object in memory. The object data on disk is always ciphertext. Run mc admin kms key status <alias> to confirm the active KMS key is reachable and unsealed; a sealed or unreachable KMS means no new objects can be written and existing objects cannot be read.

If running a KMS is genuinely out of scope for the homelab, ZFS native encryption at the dataset level is a practical substitute. Enable it at dataset creation with zfs create -o encryption=aes-256-gcm -o keylocation=prompt -o keyformat=passphrase pool/minio-data. The encryption boundary is the host, not the application, so MinIO sees plaintext, but drive-level data is protected. The trade-off is that key rotation is a zfs change-key operation that rewraps only the wrapping key, the underlying data encryption key is never rotated, and there is no per-object granularity.
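A sketch of that rotation, assuming the dataset created above exists and you have root; zfs change-key rewraps the wrapping key in place and does not require unmounting the dataset:

```bash
# Inspect the current encryption state, then rotate the passphrase
# (wrapping key). pool/minio-data is the placeholder dataset from the text.
zfs get -H encryption,keystatus pool/minio-data
zfs change-key -o keyformat=passphrase pool/minio-data
```

Because only the wrapping key changes, this is fast regardless of dataset size, but it also means a leaked data key is not revoked by rotation.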

For KMS-backed SSE-S3 deployments, rotate keys on a schedule you can actually keep. MinIO does not re-seal the DEKs of existing objects when the KEK changes; new objects are sealed with the new KEK going forward. Run mc admin kms key create <alias> <keyname> to add a new key, then set it as the default in the MinIO environment config (MINIO_KMS_KES_KEY_NAME). Existing objects retain their original KEK reference until explicitly re-encrypted.

Bucket policies, object lock, and lifecycle rules: whether the backup survives

Encryption protects data in storage. Object lock and bucket policies protect data from deletion.

Denying unencrypted writes at the API level

A bucket policy can reject any PutObject request that does not carry a server-side encryption header. Add the following condition to the bucket policy JSON:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedPutObject",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-backup-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
```

Apply it to the bucket with mc anonymous set-json policy.json <alias>/your-backup-bucket (older mc releases ship this as mc policy set-json). Any PutObject call that omits the SSE header will receive a 403. This catches misconfigured backup agents that write plaintext silently.
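Smart quotes and dropped wildcards are easy to introduce when copying policy JSON between tools, so a local parse check before applying it is cheap insurance. This sketch writes the policy above to a file and validates it; python3 is assumed to be available:

```bash
# Write the deny-unencrypted policy and confirm it parses before handing
# it to mc. Bucket name is the placeholder from the text.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedPutObject",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-backup-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
EOF
python3 -m json.tool policy.json > /dev/null && echo "policy.json parses cleanly"
```

A parse check does not prove the policy does what you want, but it catches the quoting damage that otherwise surfaces as an opaque API error.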

Object lock: governance versus compliance

Object lock must be enabled when the bucket is created; you cannot retrofit it onto an existing bucket, and enabling lock turns versioning on automatically. Create the bucket with the flag active:

```bash
mc mb --with-lock <alias>/immutable-backups
```

Two retention modes exist. Governance mode allows any account with s3:BypassGovernanceRetention to delete a locked object before the retention period expires. In a homelab context, that is almost certainly the root MinIO account, which makes governance mode only slightly better than no lock at all if the root credentials are compromised. Compliance mode removes the bypass entirely; no account, including root, can delete or overwrite an object before the retention date. Set a default retention on the bucket:

```bash
mc retention set --default COMPLIANCE "30d" <alias>/immutable-backups
```

Thirty days is a reasonable floor for homelab backup chains. Extend it if the backup agent writes full snapshots rather than incrementals, since a corrupted base object cannot be reconstructed from incrementals once the retention window closes.
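It is worth confirming what the bucket actually enforces rather than trusting the creation flags; a sketch, assuming the alias and bucket from above and a live endpoint (flag spellings vary slightly across mc releases):

```bash
# Confirm lock mode and default retention on the bucket, then inspect a
# single object's retention metadata after the first backup lands.
# "some-backup-object" is a hypothetical object name.
mc retention info --default <alias>/immutable-backups
mc stat <alias>/immutable-backups/some-backup-object
```

If mc stat shows no retention mode on a freshly written object, the default retention never took and the lock is not protecting anything.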

Versioning, lifecycle rules, and delete markers

Object lock in compliance mode does not prevent delete markers from accumulating. When a versioned DELETE is issued against a locked object, S3-compatible storage adds a delete marker as the current version; the locked base object remains, but the bucket fills with markers over time. A lifecycle rule can expire these without touching the locked data:

```json
{
  "Rules": [
    {
      "ID": "expire-delete-markers",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": {
        "ExpiredObjectDeleteMarker": true
      }
    }
  ]
}
```

Apply it with mc ilm import <alias>/immutable-backups < lifecycle.json. This rule removes a delete marker only when it is the last remaining version of an object, so locked versions are never touched: while retention holds, the markers stay put, and once the retained versions age out and are removed, the orphaned markers expire with them. The result is a clean current-version listing without orphaned markers consuming quota.

Service account scoping

The backup agent should never run under the MinIO root account. Create a dedicated service account with mc admin user add <alias> backup-agent <password>, then write a policy that permits s3:PutObject and s3:GetObject on the target bucket only, and explicitly denies s3:DeleteObject and s3:DeleteBucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::immutable-backups/*"
    },
    {
      "Effect": "Deny",
      "Action": ["s3:DeleteObject", "s3:DeleteBucket", "s3:AbortMultipartUpload"],
      "Resource": [
        "arn:aws:s3:::immutable-backups",
        "arn:aws:s3:::immutable-backups/*"
      ]
    }
  ]
}
```

Register the policy with mc admin policy create <alias> backup-agent-policy policy.json (older mc releases call this mc admin policy add), then attach it with mc admin policy attach <alias> backup-agent-policy --user backup-agent. The backup agent can write and read, but cannot delete anything, and cannot abort an in-progress multipart upload to truncate a backup mid-write. The root account credentials stay offline or in a password manager, not in any config file on the backup host.
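Put together, the bootstrap for the scoped account looks roughly like this; <alias> and the policy filename are placeholders, the generated secret is illustrative, and older mc releases spell policy create as policy add:

```bash
# Create the scoped user, register the policy JSON from above, attach it
# to the user, and confirm the mapping took effect.
mc admin user add <alias> backup-agent "$(openssl rand -hex 20)"
mc admin policy create <alias> backup-agent-policy backup-agent-policy.json
mc admin policy attach <alias> backup-agent-policy --user backup-agent
mc admin user info <alias> backup-agent
```

Generating the secret inline keeps it out of shell history files only if history expansion is configured to skip it; piping it straight into the password manager is the safer habit.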

The full picture is then: ZFS or KMS-backed encryption at the storage layer, a bucket policy that rejects unencrypted writes, compliance-mode object lock on the bucket, a lifecycle rule to keep markers tidy, and a write-only service account for the agent. Each layer covers a failure mode the others do not. Encryption does not stop deletion. Object lock does not stop data being written in plaintext. Scoped credentials do not protect against a compromised KMS. Run them together and the backup chain holds up under conditions that would otherwise mean starting from scratch.