Your Kubernetes Secrets
Are Not Safe.
Here's the Forensic Proof — and the Fix.
Why your Vault + ESO setup leaks plaintext credentials into etcd, and how to close the gap with Encryption at Rest and the Vault CSI Driver.
If you run HashiCorp Vault with External Secrets Operator on Kubernetes, your application secrets are stored as plaintext bytes in etcd. Any operator with kubectl get secrets, any backup of your control plane, or any disk that ever held an etcd snapshot can recover them in seconds.
This post proves it with a hexdump from a live cluster, then closes the gap with two layered controls:
- Encryption at Rest to protect against disk-level theft (Adversary A1)
- Vault CSI Driver to eliminate the `Secret` object entirely (Adversary A2)
Reproducible lab. Real bytes. No hand-waving.
The Question That Started This
A friend who runs his own homelab asked me a simple question: if someone got access to your cluster, could they read your credentials?
I said no. I use Vault. My credentials are in an audited, access-controlled secret store.
Then I thought about it for one more second.
The honest answer was: yes, they can.
Because the moment External Secrets Operator (ESO) writes a secret into Kubernetes — which is the entire point of ESO — that secret is no longer protected by Vault. It's protected by whatever encryption and RBAC posture the cluster has, which in most clusters means no encryption at rest and permissive RBAC.
I was pulling secrets out of a vault and dumping them into an unlocked drawer.
This article documents the gap with forensic evidence from MireCloud's etcd, and the architecture that closes it.
Earlier parts of this series introduced the building blocks; if `ClusterSecretStore`, `vault-backend`, or `mirecloud-ca-issuer` look unfamiliar, start there.
Part 1 — The Threat Model
Before discussing controls, name the threats.
| Adversary | Access Vector | Defeated by |
|---|---|---|
| A1 — External attacker | Stolen disk, exfiltrated backup, compromised node filesystem | Encryption at Rest |
| A2 — Insider with read RBAC | kubectl get secrets in one or more namespaces | Vault CSI Driver |
| A3 — Compromised workload | Pod exec, sidecar, mounted ServiceAccount token | Vault dynamic secrets (Part 8) |
| A4 — Cluster admin | Full API access | Out of scope (trusted root) |
This article covers A1 and A2. A3 — what happens when the workload itself is compromised — gets its own article (Part 8), because it's a rich enough topic to deserve a dedicated treatment with Vault dynamic secrets, lease management, and runtime detection.
A4 is the trusted root. No technical control defeats a compromised cluster admin; that's an organizational problem, not a Kubernetes one.
Part 2 — The Forensic Proof: Reading Plaintext from etcd
Set up the lab namespace:
kubectl create namespace test-secret
Create a secret the same way every MireCloud workload does — Keycloak admin password, GitLab database credential, PgAdmin admin password all follow this exact pattern:
kubectl -n test-secret create secret generic my-db-password \
--from-literal=password='MireCloud2026!'
Read it back through the API. The data is base64-encoded, which decodes in one line:
kubectl -n test-secret get secret my-db-password -o yaml
# data.password: TWlyZUNsb3VkMjAyNiE=
echo 'TWlyZUNsb3VkMjAyNiE=' | base64 -d
# MireCloud2026!
That's already a problem for A2 — anyone with get secrets sees plaintext. But the real horror is what happens when we skip the API server entirely and read directly from etcd's storage backend.
The hexdump
On the control plane node, using the etcd client certificates that ship with every kubeadm-provisioned cluster:
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/test-secret/my-db-password | hexdump -C
Live output from MireCloud's node-4:
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 74 65 73 74 2d 73 65 63 72 65 74 2f 6d 79 |s/test-secret/my|
00000020 2d 64 62 2d 70 61 73 73 77 6f 72 64 0a 6b 38 73 |-db-password.k8s|
00000030 00 0a 0c 0a 02 76 31 12 06 53 65 63 72 65 74 12 |.....v1..Secret.|
00000040 e4 01 0a bd 01 0a 0e 6d 79 2d 64 62 2d 70 61 73 |.......my-db-pas|
00000050 73 77 6f 72 64 12 00 1a 0b 74 65 73 74 2d 73 65 |sword....test-se|
00000060 63 72 65 74 22 00 2a 24 62 31 33 61 63 63 32 39 |cret".*$b13acc29|
00000070 2d 37 33 63 32 2d 34 39 61 33 2d 62 34 36 36 2d |-73c2-49a3-b466-|
00000080 31 37 62 36 38 39 61 33 30 38 39 65 32 00 38 00 |17b689a3089e2.8.|
00000090 42 08 08 d0 d4 8f cf 06 10 00 8a 01 65 0a 0e 6b |B...........e..k|
000000a0 75 62 65 63 74 6c 2d 63 72 65 61 74 65 12 06 55 |ubectl-create..U|
000000b0 70 64 61 74 65 1a 02 76 31 22 08 08 d0 d4 8f cf |pdate..v1"......|
000000c0 06 10 00 32 08 46 69 65 6c 64 73 56 31 3a 31 0a |...2.FieldsV1:1.|
000000d0 2f 7b 22 66 3a 64 61 74 61 22 3a 7b 22 2e 22 3a |/{"f:data":{".":|
000000e0 7b 7d 2c 22 66 3a 70 61 73 73 77 6f 72 64 22 3a |{},"f:password":|
000000f0 7b 7d 7d 2c 22 66 3a 74 79 70 65 22 3a 7b 7d 7d |{}},"f:type":{}}|
00000100 42 00 12 1a 0a 08 70 61 73 73 77 6f 72 64 12 0e |B.....password..|
00000110 4d 69 72 65 43 6c 6f 75 64 32 30 32 36 21 1a 06 |MireCloud2026!..|
00000120 4f 70 61 71 75 65 1a 00 22 00 0a |Opaque.."..|
What's visible
| Offset | Content | Sensitivity |
|---|---|---|
| 0x00–0x2B | /registry/secrets/test-secret/my-db-password | etcd key path |
| 0x46–0x54 | my-db-password | Secret name |
| 0x106–0x10E | password | Field name |
| 0x110–0x11D | MireCloud2026! | Password — plaintext ASCII |
| 0x122–0x127 | Opaque | Secret type |
This is the industry default. Every Kubernetes cluster without explicit encryption-at-rest configuration stores every Secret this way. That includes every secret ESO writes for every workload in every namespace. In MireCloud prior to this hardening: Keycloak admin password, Grafana OIDC client secret, GitLab OmniAuth config, PgAdmin admin password, ExternalDNS TSIG key — all of them recoverable from a single `etcdctl get`.
Vault is not involved in this read. Vault's audit log will not record it. The attacker did not need to compromise Vault, did not need a valid Kubernetes token, did not even need to talk to the API server.
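At scale, an attacker would not read hexdumps by eye — they would script the extraction. A `strings(1)`-style sketch in Python (a hypothetical helper, shown here on illustrative bytes resembling the protobuf value above) recovers every printable run from a raw etcd value:

```python
import re

def printable_runs(raw: bytes, min_len: int = 4) -> list[str]:
    """Extract runs of printable ASCII from raw bytes, like strings(1) does."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode() for m in re.finditer(pattern, raw)]

# Illustrative fragment shaped like the protobuf-encoded Secret value above.
raw = (b"k8s\x00\n\x0c\n\x02v1\x12\x06Secret"
       b"\x12\x1a\n\x08password\x12\x0eMireCloud2026!\x1a\x06Opaque")
print(printable_runs(raw))  # ['Secret', 'password', 'MireCloud2026!', 'Opaque']
```

Field names, values, and types fall out in one pass — no protobuf decoding required.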
Part 3 — Closing Window #1: Encryption at Rest
The control
Kubernetes supports transparent encryption of Secret objects through an EncryptionConfiguration consumed by kube-apiserver. The apiserver encrypts on write, decrypts on read. API clients see no difference. The on-disk representation in etcd becomes opaque ciphertext.
Provider selection (this matters)
The Kubernetes ecosystem supports several encryption providers. Three matter in 2026:
- `aescbc` — AES-CBC with PKCS#7 padding. Well-supported historically, but not recommended due to CBC's vulnerability to padding-oracle attacks; prefer AES-GCM for new deployments.
- `aesgcm` — AES-GCM, authenticated encryption. The right choice for local-key encryption. Requires disciplined key rotation.
- `kmsv2` — envelope encryption backed by an external KMS. The production-grade option. The MireCloud target is Vault Transit as the KMS provider — Vault, which already serves secrets to the cluster, becomes the root of trust for the cluster's own Secret encryption.
The output below uses aescbc because it produces the most legible hexdump for educational purposes. The conclusions hold identically for aesgcm.
Configuration
Generate the key:
head -c 32 /dev/urandom | base64
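A malformed key only surfaces as an apiserver crash loop at restart, so it is worth validating before wiring it in. A hypothetical pre-flight check (the 32-byte requirement for `aescbc` is from the Kubernetes encryption-at-rest docs):

```python
import base64, os

def check_enc_key(b64_key: str) -> int:
    """Pre-flight check for an EncryptionConfiguration key:
    for aescbc, kube-apiserver expects it to decode to exactly 32 bytes (AES-256)."""
    raw = base64.b64decode(b64_key, validate=True)
    if len(raw) != 32:
        raise ValueError(f"key decodes to {len(raw)} bytes, need 32")
    return len(raw)

# Equivalent of `head -c 32 /dev/urandom | base64`:
key = base64.b64encode(os.urandom(32)).decode()
print(check_enc_key(key))  # 32
```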
Write the encryption config to the control plane node (/etc/kubernetes/enc/enc-config.yaml):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: <BASE64_ENCODED_32_BYTE_KEY>
- identity: {}
The identity: {} provider at the end is a migration requirement, not a security hole. It allows the apiserver to read secrets that were written before encryption was enabled. Remove it once every existing secret has been re-encrypted.
Wire the flag into /etc/kubernetes/manifests/kube-apiserver.yaml:
- --encryption-provider-config=/etc/kubernetes/enc/enc-config.yaml
Add the corresponding hostPath mount. The kubelet detects the manifest change and restarts the apiserver automatically (~30 seconds).
Re-write existing secrets
Encryption applies on write. Existing secrets remain plaintext until touched. For the lab:
kubectl -n test-secret delete secret my-db-password
kubectl -n test-secret create secret generic my-db-password \
--from-literal=password='MireCloud2026!'
For production, force a rewrite of every Secret in the cluster:
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
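The same rewrite command is also step two of key rotation. The apiserver encrypts new writes with the first key in the list and tries all listed keys for decryption, so rotation passes through an intermediate state like this sketch (hypothetical `key2`):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key2            # new key: encrypts all new writes
              secret: <NEW_BASE64_32_BYTE_KEY>
            - name: key1            # old key: still decrypts existing data
              secret: <BASE64_ENCODED_32_BYTE_KEY>
      - identity: {}
```

Once the cluster-wide rewrite has re-encrypted everything under key2, key1 can be dropped from the list.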
Verification: same command, radically different bytes
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 74 65 73 74 2d 73 65 63 72 65 74 2f 6d 79 |s/test-secret/my|
00000020 2d 64 62 2d 70 61 73 73 77 6f 72 64 0a 6b 38 73 |-db-password.k8s|
00000030 3a 65 6e 63 3a 61 65 73 63 62 63 3a 76 31 3a 6b |:enc:aescbc:v1:k|
00000040 65 79 31 3a 6e df 01 f4 02 bf da ef de 8d c9 cb |ey1:n...........|
00000050 ad f7 e0 36 80 28 17 42 48 62 2c ba 5f 09 43 88 |...6.(.BHb,._.C.|
00000060 c5 ff c7 59 33 c4 97 bb bb f8 bf 74 04 8f ca 80 |...Y3......t....|
00000070 fc 74 fd 45 a9 0b b1 e4 2b f7 00 3c de fc 79 23 |.t.E....+..<..y#|
00000080 fb 51 91 21 6e d5 b7 22 1e c9 48 65 01 af f1 4d |.Q.!n.."..He...M|
...
Three structural changes:
- The etcd key path is unchanged. Keys are not encrypted — they are the storage index. Secret names remain visible to anyone with etcd access. This is a property of the control, not a flaw.
- The provider marker `k8s:enc:aescbc:v1:key1:` appears. The apiserver uses this to identify which key decrypts the payload, enabling key rotation.
- The body is opaque ciphertext. No `password`, no `Opaque`, no `MireCloud2026!`, no `kubectl-create`. The entire serialized Secret is one encrypted blob.
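Those prefixes make auditing mechanical: a plaintext value starts with the bytes `k8s\x00`, an encrypted one with `k8s:enc:<provider>:`. A hypothetical helper for classifying raw etcd values pulled during an audit:

```python
def storage_state(raw: bytes) -> str:
    """Classify a raw etcd Secret value by its prefix:
    plaintext protobuf starts with b'k8s\\x00', encrypted with b'k8s:enc:<provider>:'."""
    if raw.startswith(b"k8s:enc:"):
        # e.g. k8s:enc:aescbc:v1:key1: — provider and key name readable, payload not
        return "encrypted:" + raw.split(b":")[2].decode()
    if raw.startswith(b"k8s\x00"):
        return "plaintext"
    return "unknown"

print(storage_state(b"k8s:enc:aescbc:v1:key1:\x6e\xdf\x01\xf4"))  # encrypted:aescbc
print(storage_state(b"k8s\x00\n\x0c\n\x02v1\x12\x06Secret"))      # plaintext
```

Run over every key under `/registry/secrets/`, anything still reporting `plaintext` missed the rewrite.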
What encryption at rest does NOT do
The apiserver still decrypts transparently on read. Anyone with get secrets RBAC still sees plaintext. Adversary A2 is unaffected.
This is the gap Part 4 closes.
Part 4 — Closing Window #2: The Vault CSI Driver
To defeat A2, we eliminate the Secret object entirely. The Secrets Store CSI Driver inverts the lifecycle: instead of an operator pushing secrets into etcd, the kubelet pulls the secret directly from Vault during pod creation and mounts it as a tmpfs inside the pod's mount namespace.
The secret never touches etcd. It exists only in volatile memory, only inside the pod that needs it.
The Helm installation trap
Installing the core driver is straightforward. Installing the HashiCorp Vault provider is where most engineers waste an afternoon — HashiCorp does not ship a standalone chart for it. You use the main vault chart with everything but the CSI sidecar disabled.
# 1. The Kubernetes-SIGS CSI driver
helm repo add secrets-store-csi-driver \
https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store \
secrets-store-csi-driver/secrets-store-csi-driver \
--namespace kube-system \
--set syncSecret.enabled=false \
--set enableSecretRotation=true
# 2. The HashiCorp Vault provider — same chart as the Vault server,
# with everything except the CSI sidecar disabled
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault-csi hashicorp/vault \
--namespace kube-system \
--set server.enabled=false \
--set injector.enabled=false \
--set csi.enabled=true
For MireCloud, both deployments are wrapped as ArgoCD Applications under infrastructure/secrets-store-csi/ and infrastructure/vault-csi-provider/ — same wrapper pattern as every other component since Part 1.
`syncSecret.enabled=false` is critical. With `true`, the CSI driver also creates a Kubernetes `Secret` mirror of the mounted file — defeating the entire point of bypassing etcd. Leave it off.
Define the SecretProviderClass
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: vault-test-secret
namespace: test-secret
spec:
provider: vault
parameters:
vaultAddress: "http://vault.vault.svc.cluster.local:8200"
roleName: "test-secret-role"
objects: |
- objectName: "app-password"
secretPath: "secret/data/test/config"
secretKey: "password"
This is the identical Vault service URL used by ESO in infrastructure/external-secrets-config/secret-store.yaml — consistency across the secret pipeline.
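Multiple keys from the same Vault path map to separate files under the mount. A hypothetical extension of the `objects` block for a username/password pair (illustrative names, not part of the MireCloud repo):

```yaml
  objects: |
    - objectName: "db-username"       # becomes /mnt/secrets/db-username
      secretPath: "secret/data/test/config"
      secretKey: "username"
    - objectName: "db-password"       # becomes /mnt/secrets/db-password
      secretPath: "secret/data/test/config"
      secretKey: "password"
```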
Deploy the workload
apiVersion: v1
kind: Pod
metadata:
name: security-tester
namespace: test-secret
spec:
serviceAccountName: default
containers:
- name: alpine
image: alpine:latest
command: ["sh", "-c", "sleep infinity"]
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets"
readOnly: true
volumes:
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "vault-test-secret"
Part 5 — SRE Realities: Troubleshooting "Permission Denied"
In a real bare-metal deployment, this rarely works on the first apply. On MireCloud's first run, the pod hung in ContainerCreating. kubectl describe pod revealed:
Warning FailedMount ... MountVolume.SetUp failed for volume "secrets-store-inline":
failed to login: Error making API request.
URL: POST http://vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/login
Code: 400. Errors: * invalid role name "test-secret-role"
This is the Zero Trust architecture working exactly as designed. Cluster DNS routed the CSI driver to Vault. Vault rejected the mount because the specific Kubernetes role had not been authorized.
The fix is to create the policy and role inside Vault. MireCloud already has Kubernetes auth enabled for the vault-backend role used by ESO since Part 1 — adding a CSI role follows the same pattern:
kubectl -n vault exec -ti vault-0 -- sh
vault policy write test-secret-policy - <<'EOF'
path "secret/data/test/config" {
capabilities = ["read"]
}
EOF
vault write auth/kubernetes/role/test-secret-role \
bound_service_account_names=default \
bound_service_account_namespaces=test-secret \
policies=test-secret-policy \
ttl=1h
vault kv put secret/test/config password='MireCloud2026!'
The kubelet's exponential backoff loop unblocks within seconds. The pod transitions to Running.
Why this is a feature, not a bug. Vault refused a mount until a policy explicitly authorized it. That is the entire promise of identity-bound secret access. The error was not "the system is broken" — it was "the system is doing exactly what you asked it to."
Part 6 — Forensic Verification: The Application View
Querying the Kubernetes API yields nothing. Adversary A2 is blinded:
kubectl -n test-secret get secret
# No resources found in test-secret namespace.
sudo ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
get /registry/secrets/test-secret/app-password
# (no output — the key does not exist)
But drop into the pod itself, and the secret is there:
kubectl -n test-secret exec -it security-tester -- /bin/sh
/ # ls -la /mnt/secrets/
drwxrwxrwt 3 root root 120 Apr 19 14:22 .
drwxr-xr-x 1 root root 4096 Apr 19 14:22 ..
drwxr-xr-x 2 root root 80 Apr 19 14:22 ..2026_04_19_14_22_18.2602020304
lrwxrwxrwx 1 root root 32 Apr 19 14:22 ..data -> ..2026_04_19_14_22_18.2602020304
lrwxrwxrwx 1 root root 19 Apr 19 14:22 app-password -> ..data/app-password
/ # cat /mnt/secrets/app-password
MireCloud2026!
/ # df -h /mnt/secrets
Filesystem Size Used Available Use% Mounted on
tmpfs 7.7G 4.0K 7.7G 0% /mnt/secrets
Two details worth pausing on:
1. The ..data/ symlink pattern
app-password is not a flat file — it is a symlink pointing at ..data/app-password, which itself points at a timestamped directory. This is the same atomic-update pattern the kubelet uses for ConfigMaps and projected volumes. When Vault rotates the secret, the CSI driver writes the new value to a new timestamped directory, then atomically swaps the ..data symlink. The application reading the file never sees a partial write, never sees a missing file, never has to handle a race condition.
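The swap is easy to reproduce in isolation. A minimal Python sketch of the same pattern — new version directory, then an atomic rename over the `..data` symlink (the kubelet uses a timestamp for the version name; a counter keeps this demo deterministic):

```python
import itertools, os, tempfile

_counter = itertools.count()

def atomic_publish(mount: str, name: str, value: bytes) -> None:
    """Publish a new secret version the way projected volumes do:
    write into a fresh version dir, then atomically repoint ..data."""
    version = f"..version_{next(_counter)}"
    os.makedirs(os.path.join(mount, version))
    with open(os.path.join(mount, version, name), "wb") as f:
        f.write(value)
    tmp = os.path.join(mount, "..data_tmp")
    os.symlink(version, tmp)                       # stage the new pointer
    os.rename(tmp, os.path.join(mount, "..data"))  # atomic swap on POSIX
    if not os.path.islink(os.path.join(mount, name)):
        os.symlink(os.path.join("..data", name), os.path.join(mount, name))

mount = tempfile.mkdtemp()
atomic_publish(mount, "app-password", b"old-secret")
atomic_publish(mount, "app-password", b"new-secret")   # a rotation
with open(os.path.join(mount, "app-password"), "rb") as f:
    print(f.read())  # b'new-secret'
```

A reader that opens `app-password` resolves the symlink chain at `open()` time, so it always sees either the old version complete or the new version complete — never a half-written file.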
2. tmpfs — RAM, not disk
The filesystem is tmpfs. The secret lives in RAM, scoped to the pod's mount namespace, for the lifetime of the pod. When the pod terminates, the memory is released. There is no disk artifact to forget about. No backup that could leak it. No etcd snapshot to extract it from.
Part 7 — Decision Framework: ESO vs. CSI
Vault CSI is not a silver bullet. It introduces a hard runtime dependency: if Vault is unreachable, pods cannot start. ESO, by contrast, caches secrets in etcd, allowing pods to start during a Vault outage. This is a real availability tradeoff — particularly for MireCloud's current single-replica Vault deployment, which is acceptable for a homelab but not for production.
The framework I use:
| Consumer | Recommended Control | Rationale |
|---|---|---|
| Application pod reading a credential at startup | Vault CSI | Closes A1 + A2. No traces in etcd. Restart handles rotation. |
| Image pull secrets | ESO + Encryption at Rest | Must exist before pod creation; no pod identity yet. |
| Controllers consuming via API (cert-manager, GitLab OmniAuth) | ESO + Encryption + RBAC | No CSI support; residual A2 compensated by RBAC. |
| TLS keys for Cilium Gateway listeners | ESO + Encryption at Rest | Consumed by the gateway via the API, not by a pod filesystem. |
| Break-glass admin credentials | Vault directly, out-of-band | Never sync to the cluster. |
The principle: Secret objects are the exception, not the default. Every Secret requires explicit justification.
Closing
Hardening a cluster is not about finding one tool that solves everything. It is about layering controls until the cost of an attack exceeds the value of the prize.
By combining Encryption at Rest to neutralize physical infrastructure threats (A1) and the Secrets Store CSI Driver to eliminate internal API exposure (A2), the MireCloud architecture closes the gap between Vault and the workload.
The forensic evidence in Part 2 is the argument. Any engineer who has not personally extracted a plaintext secret from their own cluster's etcd should do so before accepting their current design as secure.
If you only take one thing from this article, take this:
run the etcdctl get | hexdump -C against your own cluster. Today. Before reading another article.
🔮 What's Next: Part 8 — Defeating the Compromised Workload
This article handled A1 and A2 — the two adversaries outside the workload. Part 8 tackles A3: what happens when the application pod itself is compromised.
CSI mounts protect the secret on the way to the pod, but they don't protect it once the attacker has shell access inside the pod. A static password is a static password — readable by anyone who can cat the file.
Part 8 will cover:
- Vault dynamic secrets — the database secrets engine generating per-pod, TTL-bound PostgreSQL credentials revoked on lease expiry
- Lease management and audit — proving in Vault's UI that the credential the attacker stole has already expired
- Tetragon TracingPolicy — runtime detection of `cat /mnt/secrets/*` and `kubectl exec` patterns
- NetworkPolicy lockdown — preventing exfiltration even when the pod is compromised
Follow me on Medium and LinkedIn to be notified when it publishes.
The complete repository, including the infrastructure/secrets-store-csi/ and infrastructure/vault-csi-provider/ ArgoCD Applications introduced in this post, is available at github.com/mirecloud/home_lab.
If this article saved you from an audit finding — or worse, an actual breach — the kindest thing you can do is share it with the engineer on your team who hasn't run that etcdctl command yet.