Where We Left Off
Part 7 was about proving, with raw bytes from etcdctl, that the industry-default Kubernetes setup stores every secret as plaintext on disk — and then closing that gap with two controls:
- Encryption at Rest — defeated the adversary who steals a disk or exfiltrates a backup (A1)
- Vault CSI Driver — eliminated the Secret object from etcd entirely, blinding any insider with `kubectl get secrets` (A2)
At the end of that article, we named the threat we were deferring:
> A3 is more dangerous than A1 or A2 because it operates inside the boundary we just hardened. The attacker doesn't need access to etcd. They don't need RBAC. They need one thing: a shell inside the pod.
Shells happen — through unpatched CVEs, deserialization bugs, SSRF vulnerabilities, or supply chain compromises in a single transitive dependency. This article is about what happens next, and how to stop it.
The State of the Cluster
After Part 7, the secret pipeline looks like this:
```
Vault ──► CSI Driver ──► /mnt/secrets/ (tmpfs) ──► Application Pod
              │
              └── No Kubernetes Secret created.
                  No etcd entry. No backup exposure.

A1 ✓   A2 ✓
```
The application reads credentials directly from the CSI-mounted files on every request:
```python
def get_db_credentials():
    user = open("/mnt/secrets/db-username").read().strip()
    password = open("/mnt/secrets/db-password").read().strip()
    return user, password
```
The code is available at: https://github.com/mirecloud/home_lab/tree/a9dcfe68edf93221a77d94034adf917aee33f238/Dynamic-secret-vault
No environment variables. No Kubernetes Secrets. The Vault audit log records every issuance. From the outside, this looks airtight.
It isn't.
Part 1 — Dynamic Secrets with Live Rotation
The first layer we added on top of the CSI mount is Vault's database secrets engine — combined with the CSI Driver's built-in secret rotation.
Together, they produce something remarkable: a credential that changes automatically every 5 minutes, without restarting the pod, without any code change in the application, without any human intervention.
How it works end-to-end
When the pod starts, the CSI Driver requests a fresh credential from Vault:
POST /v1/database/creds/n8n-dynamic-role
Vault connects to PostgreSQL as the admin account and executes in real time:
```sql
CREATE ROLE "v-kubernet-dynamic--XrqklZNhLjkrgBfY4gq9-1778257150"
  WITH LOGIN PASSWORD 'S-1LxxxxxxxxxxxxxxxxxxxxX'
  VALID UNTIL '2026-05-08 16:24:19+00';
GRANT SELECT ON ALL TABLES IN SCHEMA public
  TO "v-kubernet-dynamic--XrqklZNhLjkrgBfY4gq9-1778257150";
```
A real PostgreSQL user, created in real time, with a hard expiration baked into the database itself. The credential is written to /mnt/secrets/ as a tmpfs mount — RAM only, scoped to the pod's mount namespace.
Before the TTL expires, the CSI Driver silently fetches a new credential from Vault, writes it atomically to the tmpfs, and drops the old PostgreSQL role. The application picks up the new value transparently on the next request — no restart, no downtime, no awareness of what just happened.
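The application-side contract that makes this work is simple: read the files on every request and never cache. A minimal sketch of why that survives rotation (a temp directory stands in for the tmpfs mount, and `write_secret` mimics the driver's atomic replace; both helper names are illustrative, not part of the CSI Driver):

```python
import os
import tempfile

def write_secret(directory, name, value):
    # Atomic replace, mimicking how the CSI Driver swaps in a rotated
    # credential: write a temp file, then rename it over the old one,
    # so readers never observe a half-written secret.
    tmp = os.path.join(directory, name + ".tmp")
    with open(tmp, "w") as f:
        f.write(value)
    os.replace(tmp, os.path.join(directory, name))

def get_db_credentials(mount="/mnt/secrets"):
    # Read the files on every call, never cache: a cached credential
    # goes stale (and gets dropped from PostgreSQL) after rotation.
    with open(os.path.join(mount, "db-username")) as f:
        user = f.read().strip()
    with open(os.path.join(mount, "db-password")) as f:
        password = f.read().strip()
    return user, password

mount = tempfile.mkdtemp()
write_secret(mount, "db-username", "v-old-user")
write_secret(mount, "db-password", "old-pass")
print(get_db_credentials(mount))   # credentials before "rotation"
write_secret(mount, "db-username", "v-new-user")
write_secret(mount, "db-password", "new-pass")
print(get_db_credentials(mount))   # same function, new credentials, no restart
```

The same pattern is why the real application never notices a rotation: there is no in-process state to invalidate.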
This is not theoretical. The two captures below were taken from the same running pod, minutes apart, with nothing but a browser refresh between them.
Forensic proof from the live cluster
Browser refresh at 00:33:

[screenshot]

Browser refresh at 00:36 — same pod, no restart:

[screenshot]
Three minutes. The same pod. At least one credential rotation in between — and the previous credential is already dead and dropped from PostgreSQL. The application never noticed.
```shell
kubectl exec -n postgres postgres-0 -- \
  psql -U postgres -c "\du" | grep v-kubernet
# v-kubernet-dynamic--knk9iBKfTsC5NYDToIzC-1778273950 | valid until 2026-05-08 21:04:12
# (every previous identity has already been dropped)
```
The Vault configuration
```shell
# 1. Activate the database secrets engine
vault secrets enable database

# 2. Connect Vault to PostgreSQL
vault write database/config/my-postgres \
    plugin_name=postgresql-database-plugin \
    allowed_roles="n8n-dynamic-role" \
    connection_url="postgresql://{{username}}:{{password}}@postgres.postgres.svc.cluster.local:5432/postgres?sslmode=disable" \
    username="postgres" \
    password="<POSTGRES_ADMIN_PASSWORD>"

# 3. Rotate the admin password immediately
# After this, no human knows the PostgreSQL admin credential — only Vault does.
vault write -force database/config/my-postgres/rotate-root

# 4. Define the dynamic role
vault write database/roles/n8n-dynamic-role \
    db_name=my-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
        GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    revocation_statements="REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\";
        DROP ROLE IF EXISTS \"{{name}}\";" \
    default_ttl="5m" \
    max_ttl="15m"

# 5. Policy and Kubernetes auth binding
vault policy write n8n-dynamic-policy - <<EOF
path "database/creds/n8n-dynamic-role" {
  capabilities = ["read"]
}
EOF

vault write auth/kubernetes/role/n8n-dynamic-role \
    bound_service_account_names=default \
    bound_service_account_namespaces=test-dynamic \
    policies=n8n-dynamic-policy \
    ttl=1h
```
The CSI Driver rotation is enabled at install time:
```shell
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
    --namespace kube-system \
    --set syncSecret.enabled=false \
    --set enableSecretRotation=true   # ← this drives the automatic rotation
```
rotate-root matters: This single command changes the trust model permanently. The admin credential typed into the terminal is immediately replaced by one Vault generated internally. From this point on, the only entity that can authenticate to PostgreSQL as admin is Vault. There is no static credential to leak, rotate manually, or forget about.
What this means for an attacker
An attacker who exfiltrates a credential is holding something with a hard expiration — and the clock started when the credential was issued, not when they read the file. In the worst case, they have just under 5 minutes. On average, about two and a half.
That is a meaningful constraint. But it is not the end of the story.
Part 2 — The Gap That Rotation Cannot Close
Here is the problem that a 5-minute TTL does not solve.
If an attacker has a shell inside the pod — through any of the attack vectors named above — they can read the current credential before it rotates:
```shell
kubectl -n test-dynamic exec -it deployment/secure-test-app -- sh
$ cat /mnt/secrets/db-username
v-kubernet-dynamic--XrqklZNhLjkrgBfY4gq9-1778257150
$ cat /mnt/secrets/db-password
S-1LxxxxxxxxxxxxxxxxxxxxX
```
The CSI mount is readOnly: true. The filesystem is readOnlyRootFilesystem: true. No Secret object exists in etcd. Vault's audit log shows nothing unusual — the credential was legitimately issued to this pod.
And the attacker just read it. In plain text. In under a second.
Let's be precise about the damage window. With a 5-minute TTL, the attacker has at most 300 seconds with a working credential. That is enough time to:
- Dump every row in the `public` schema
- Exfiltrate data over an outbound HTTPS connection that blends with normal traffic
- Enumerate other services reachable from PostgreSQL's network segment
- Plant a trigger or stored procedure that persists beyond the credential's lifetime
This is not a flaw in Vault. It is not a flaw in the CSI Driver. It is a correct description of where their responsibility ends.
The Boundary Problem
The Vault CSI Driver enforces at the pod level. It asks: "Is this pod authorized to receive this secret?" Once the answer is yes, the secret crosses into the pod's mount namespace — and the CSI Driver's job is done.
Inside the pod, there is no enforcement. Every process running as the container's user can read every file in the mount:
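A minimal illustration of that boundary, runnable anywhere (a temp directory stands in for the CSI mount, and a child interpreter stands in for an attacker's process; the kernel checks the UID on `open()`, not which binary is asking):

```python
import os
import subprocess
import sys
import tempfile

# Stand-in for /mnt/secrets: a file readable by the container's user.
secret_dir = tempfile.mkdtemp()
path = os.path.join(secret_dir, "db-password")
with open(path, "w") as f:
    f.write("hunter2")

# A completely different process (here a child Python interpreter, but it
# could just as well be an attacker's shell) reads it without resistance,
# because file permissions only distinguish users, not processes.
out = subprocess.run(
    [sys.executable, "-c", f"print(open({path!r}).read())"],
    capture_output=True, text=True,
)
print(out.stdout.strip())  # prints: hunter2
```

Nothing in the default stack can tell the legitimate application apart from that second process.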
Closing this gap requires enforcement at a layer that can distinguish which process is making a system call — not just which pod is running. A layer that operates below the container runtime, below any tool the attacker could modify or bypass.
That layer is the Linux kernel.
The tool that instruments it — without modifying the application, without adding a sidecar, and without any bypass available from user space — is Cilium Tetragon.
- Tetragon TracingPolicy — a kprobe on `sys_openat` that sends `SIGKILL` to any unauthorized process attempting to read `/mnt/secrets/`. The kernel terminates it before the read completes.
- Falco — forensic audit capturing process name, pod identity, exact command, and container hash for every attempt — including the ones Tetragon already killed.
- The combined result — `cat /mnt/secrets/db-password` exits 137 while the app continues serving fresh rotated credentials.
Reproduce this yourself.
All manifests — SecretProviderClass, Deployment, Vault configuration, and the Flask application — are available at github.com/mirecloud/home_lab.
Deploy with a single `kubectl apply -f secure-app.yaml`. Configure Vault in six commands.