
HashiCorp Vault Dynamic Secrets - MireCloud Homelab Part 8

MireCloud Homelab Series — Part 8
May 8, 2026 · Emmanuel Steven Catin · 12 min read
Kubernetes HashiCorp Vault Cilium Tetragon DevSecOps Zero Trust
TL;DR — Vault dynamic secrets give every pod a unique, time-limited PostgreSQL credential that rotates automatically every 5 minutes — without restarting the pod, without touching the application. A stolen credential stops working within minutes. And yet, an attacker with a shell inside the pod can read the current one in under a second. This article proves the gap on a live cluster, then closes it at the kernel level.

Where We Left Off

Part 7 was about proving, with raw bytes from etcdctl, that the industry-default Kubernetes setup stores every secret as plaintext on disk — and then closing that gap with two controls:

  • Encryption at Rest — defeated the adversary who steals a disk or exfiltrates a backup (A1)
  • Vault CSI Driver — eliminated the Secret object from etcd entirely, blinding any insider with kubectl get secrets (A2)

At the end of that article, we named the threat we were deferring:

A3 — Compromised workload. Pod exec, sidecar abuse, mounted ServiceAccount token.

A3 is more dangerous than A1 or A2 because it operates inside the boundary we just hardened. The attacker doesn't need access to etcd. They don't need RBAC. They need one thing: a shell inside the pod.

Shells happen — through unpatched CVEs, deserialization bugs, SSRF vulnerabilities, or supply chain compromises in a single transitive dependency. This article is about what happens next, and how to stop it.


The State of the Cluster

After Part 7, the secret pipeline looks like this:

architecture
Vault ──► CSI Driver ──► /mnt/secrets/ (tmpfs) ──► Application Pod
               │
               └── No Kubernetes Secret created.
                   No etcd entry. No backup exposure.
                   A1 ✓  A2 ✓

The application reads credentials directly from the CSI-mounted files on every request:

python
def get_db_credentials():
    # Read the live credential from the CSI tmpfs mount on every call,
    # so a rotation is picked up transparently on the next request.
    with open("/mnt/secrets/db-username") as f:
        user = f.read().strip()
    with open("/mnt/secrets/db-password") as f:
        password = f.read().strip()
    return user, password

The code is available at: https://github.com/mirecloud/home_lab/tree/a9dcfe68edf93221a77d94034adf917aee33f238/Dynamic-secret-vault

No environment variables. No Kubernetes Secrets. The Vault audit log records every issuance. From the outside, this looks airtight. 

It isn't.


Part 1 — Dynamic Secrets with Live Rotation

The first layer we added on top of the CSI mount is Vault's database secrets engine — combined with the CSI Driver's built-in secret rotation.

Together, they produce something remarkable: a credential that changes automatically every 5 minutes, without restarting the pod, without any code change in the application, without any human intervention.

How it works end-to-end

When the pod starts, the CSI Driver requests a fresh credential from Vault:

http
POST /v1/database/creds/n8n-dynamic-role

Vault connects to PostgreSQL as the admin account and executes in real time:

sql
CREATE ROLE "v-kubernet-dynamic--XrqklZNhLjkrgBfY4gq9-1778257150"
  WITH LOGIN
  PASSWORD 'S-1LxxxxxxxxxxxxxxxxxxxxX'
  VALID UNTIL '2026-05-08 16:24:19+00';

GRANT SELECT ON ALL TABLES IN SCHEMA public
  TO "v-kubernet-dynamic--XrqklZNhLjkrgBfY4gq9-1778257150";

A real PostgreSQL user, created in real time, with a hard expiration baked into the database itself. The credential is written to /mnt/secrets/ as a tmpfs mount — RAM only, scoped to the pod's mount namespace.
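
To confirm the RAM-only claim, a quick check from inside the pod shows the mount type. This is a sketch: the namespace and deployment name are the ones used later in this article, and the image is assumed to ship a shell.

bash
# The secrets volume should appear as tmpfs: RAM-backed, never written to disk.
kubectl -n test-dynamic exec deployment/secure-test-app -- \
  sh -c 'mount | grep /mnt/secrets'

# tmpfs on /mnt/secrets type tmpfs (ro,relatime)   <- illustrative output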

Before the TTL expires, the CSI Driver silently fetches a new credential from Vault, writes it atomically to the tmpfs, and drops the old PostgreSQL role. The application picks up the new value transparently on the next request — no restart, no downtime, no awareness of what just happened.

Pod lifetime (continuous, no restart)
├── t=0:00 CSI mounts → credential A (v-kubernet-...-XrqklZ, S-1L***)
├── t=4:50 CSI rotates → credential B (v-kubernet-...-knk9iB, zj8D***) ← atomic swap
├── t=9:50 CSI rotates → credential C (v-kubernet-...-mP2qT7, rK4F***)
│ ...
└── t=4h40 browser refresh → credential N ← same pod, dozens of rotations later
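
You can watch the swap happen without touching the application. A minimal sketch (same namespace and deployment name as the attack demo later in this article) that prints the mounted username once a minute:

bash
# Print the currently mounted dynamic username every 60 seconds.
# Across a rotation the v-kubernet-... identity changes while the pod never restarts.
kubectl -n test-dynamic exec deployment/secure-test-app -- \
  sh -c 'while true; do date; cat /mnt/secrets/db-username; echo; sleep 60; done'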

This is not theoretical. The two captures below were taken from the same running pod, hours apart, with nothing but a browser refresh between them.

Forensic proof from the live cluster

Browser refresh at 00:33:



Browser refresh at 00:36, same pod, no restart:


6 minutes. The same pod. Many credential rotations in between. Every single one of those intermediate credentials is already dead and dropped from PostgreSQL. The application never noticed.

bash
kubectl exec -n postgres postgres-0 -- \
  psql -U postgres -c "\du" | grep v-kubernet

# v-kubernet-dynamic--knk9iBKfTsC5NYDToIzC-1778273950 | valid until 2026-05-08 21:04:12
# (every previous identity has already been dropped)
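
Another angle on the same proof, from the Vault side: every credential Vault issues is tracked as a lease, and listing the lease prefix (with a suitably privileged token) should show only the identity currently mounted in the pod. A sketch:

bash
# List the active leases for the dynamic role; anything already revoked is gone.
vault list sys/leases/lookup/database/creds/n8n-dynamic-role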

The Vault configuration

bash
# 1. Activate the database secrets engine
vault secrets enable database

# 2. Connect Vault to PostgreSQL
vault write database/config/my-postgres \
  plugin_name=postgresql-database-plugin \
  allowed_roles="n8n-dynamic-role" \
  connection_url="postgresql://{{username}}:{{password}}@postgres.postgres.svc.cluster.local:5432/postgres?sslmode=disable" \
  username="postgres" \
  password="<POSTGRES_ADMIN_PASSWORD>"

# 3. Rotate the admin password immediately
#    After this, no human knows the PostgreSQL admin credential — only Vault does.
vault write -force database/config/my-postgres/rotate-root

# 4. Define the dynamic role
vault write database/roles/n8n-dynamic-role \
  db_name=my-postgres \
  creation_statements="
    CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";
  " \
  revocation_statements="
    REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\";
    DROP ROLE IF EXISTS \"{{name}}\";
  " \
  default_ttl="5m" \
  max_ttl="15m"

# 5. Policy and Kubernetes auth binding
vault policy write n8n-dynamic-policy - <<EOF
path "database/creds/n8n-dynamic-role" {
  capabilities = ["read"]
}
EOF

vault write auth/kubernetes/role/n8n-dynamic-role \
  bound_service_account_names=default \
  bound_service_account_namespaces=test-dynamic \
  policies=n8n-dynamic-policy \
  ttl=1h
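
Before wiring the CSI Driver to this role, you can sanity-check it from the CLI; every read of the creds endpoint mints a brand-new, short-lived PostgreSQL user. The output shape below is illustrative; the username prefix depends on the auth method used.

bash
# Each read creates a fresh PostgreSQL role with its own lease and TTL.
vault read database/creds/n8n-dynamic-role

# Key                Value
# ---                -----
# lease_id           database/creds/n8n-dynamic-role/...
# lease_duration     5m
# lease_renewable    true
# password           <generated>
# username           v-...-n8n-dynamic-...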

The CSI Driver rotation is enabled at install time:

bash
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set syncSecret.enabled=false \
  --set enableSecretRotation=true    # ← this drives the automatic rotation

Why rotate-root matters: This single command changes the trust model permanently. The admin credential typed into the terminal is immediately replaced by one that Vault generates internally. From this point on, the only entity that can authenticate to PostgreSQL as admin is Vault. There is no static credential to leak, rotate manually, or forget about.
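
A quick way to convince yourself, as a sketch: try the pre-rotation admin password against the database and watch it fail, because Vault has already replaced it. Here <POSTGRES_ADMIN_PASSWORD> is the placeholder value from step 2 above, and the exact error text varies by PostgreSQL version.

bash
# The admin password supplied in step 2 is now invalid; only Vault knows the rotated value.
kubectl -n postgres exec postgres-0 -- env PGPASSWORD='<POSTGRES_ADMIN_PASSWORD>' \
  psql -h 127.0.0.1 -U postgres -c 'SELECT 1'

# psql: error: connection to server at "127.0.0.1", port 5432 failed:
#   FATAL:  password authentication failed for user "postgres"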

What this means for an attacker

An attacker who exfiltrates a credential is holding something with a hard expiration, and the clock started when Vault issued the credential at the last rotation, not when they read the file. In the worst case, they have 4 minutes and 59 seconds. On average, around 2 minutes.

That is a meaningful constraint. But it is not the end of the story.


Part 2 — The Gap That Rotation Cannot Close

Here is the problem that a 5-minute TTL does not solve.

If an attacker has a shell inside the pod — through any of the attack vectors named above — they can read the current credential before it rotates:

bash — attacker perspective
kubectl -n test-dynamic exec -it deployment/secure-test-app -- sh

$ cat /mnt/secrets/db-username
v-kubernet-dynamic--XrqklZNhLjkrgBfY4gq9-1778257150

$ cat /mnt/secrets/db-password
S-1LxxxxxxxxxxxxxxxxxxxxX

The CSI mount is readOnly: true. The filesystem is readOnlyRootFilesystem: true. No Secret object exists in etcd. Vault's audit log shows nothing unusual — the credential was legitimately issued to this pod.

And the attacker just read it. In plain text. In under a second.

Let's be precise about the damage window. With a 5-minute TTL, the attacker has at most 300 seconds with a working credential. That is enough time to:

  • Dump every row in the public schema
  • Exfiltrate data over an outbound HTTPS connection that blends with normal traffic
  • Enumerate other services reachable from PostgreSQL's network segment
  • Plant a trigger or stored procedure that persists beyond the credential's lifetime

The rotation didn't prevent the read. It only limits how long the stolen credential remains useful.

This is not a flaw in Vault. It is not a flaw in the CSI Driver. It is a correct description of where their responsibility ends.


The Boundary Problem

The Vault CSI Driver enforces at the pod level. It asks: "Is this pod authorized to receive this secret?" Once the answer is yes, the secret crosses into the pod's mount namespace — and the CSI Driver's job is done.

Inside the pod, there is no enforcement. Every process running as the container's user can read every file in the mount:

/mnt/secrets/db-password
├── python3 main.py → reads on every HTTP request ✓ legitimate
└── cat → reads during an intrusion ✗ malicious
(the filesystem cannot tell the difference)

Closing this gap requires enforcement at a layer that can distinguish which process is making a system call — not just which pod is running. A layer that operates below the container runtime, below any tool the attacker could modify or bypass.

That layer is the Linux kernel.

The tool that instruments it — without modifying the application, without adding a sidecar, and without any bypass available from user space — is Cilium Tetragon.


🔮 Coming in Part 2 of this article
  • Tetragon TracingPolicy — a kprobe on sys_openat that sends SIGKILL to any unauthorized process attempting to read /mnt/secrets/. The kernel terminates it before the read completes.
  • Falco — forensic audit capturing process name, pod identity, exact command, and container hash for every attempt — including the ones Tetragon already killed.
  • The combined result: cat /mnt/secrets/db-password exits 137 while the app continues serving freshly rotated credentials.

Reproduce this yourself.
All manifests — SecretProviderClass, Deployment, Vault configuration, and the Flask application — are available at github.com/mirecloud/home_lab.
Deploy with a single kubectl apply -f secure-app.yaml. Configure Vault in six commands.
