Series Recap: So at this point in the MireCloud build, I've got OIDC working. Every kubectl call is authenticated through Keycloak, users get their groups injected into the JWT, and RBAC bindings match on oidc:k8s-viewers exactly as expected. It feels good.
Then a thought hit me: I have no idea what's actually happening in this cluster.
I can control who's allowed to do things. But if someone lists Secrets, deletes a Pod, or pokes around somewhere they shouldn't—I have zero trace of it. To learn production patterns, I needed to close that gap. This post is about enabling Kubernetes Audit Logging and shipping those events into Loki.
A Quick "Why Loki?" Answer
I already run Loki for application logs. Promtail is already a DaemonSet. Grafana is my single pane of glass. Adding a dedicated SIEM or an ELK stack just for audit logs would mean more overhead and a second query language.
Loki fits naturally here. It’s low overhead, uses the same tooling, and LogQL is expressive enough for everything I need. It’s the right call for a high-performance homelab.
Architecture Overview
Simple and clean. No Kafka buffer, no extra sidecar. Promtail reads the audit log file straight from the host filesystem—the same way it reads container logs.
Step 1 — Writing the Audit Policy
Kubernetes gives you four verbosity levels for audit events: None, Metadata, Request, and RequestResponse. The temptation is to log everything at RequestResponse. Don't.
I tried that, and the noise from kubelet health checks backed up Loki's ingest almost immediately. Here is the surgical policy I landed on for /etc/kubernetes/audit-policy.yaml:
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived" # Drop the pre-auth stage to halve log volume
rules:
  # Rules are matched top-down: the first matching rule wins.
  # 1. Silence noisy system components (kubelet, kube-proxy).
  #    Kubelets authenticate as system:node:<name>, so match their group.
  - level: None
    userGroups: ["system:nodes"]
  - level: None
    users: ["system:kube-proxy"]
  # 2. Full RequestResponse for Secrets and ConfigMaps
  # These hold credentials and OIDC secrets—they must be fully auditable.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # 3. Metadata for everything else (Pods, RBAC, Deployments)
  - level: Metadata
A Security Note on Secrets
Logging RequestResponse means secret values (like DB passwords) end up in audit.log in plaintext.
Loki access must be restricted: Anyone who can query the audit stream in Grafana can read these secrets.
Permissions: Keep chmod 700 on /var/log/kubernetes/audit/ so only root can read it.
Step 2 — Create the Log Directory
sudo mkdir -p /var/log/kubernetes/audit
sudo chmod 700 /var/log/kubernetes/audit
Pro-tip: If this directory is missing when the API server starts, it will refuse to start. The error message from kubelet is vague, so don't let this missing folder eat your evening.
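And if the API server does go down after a bad manifest change, kubectl goes down with it. You have to go through the container runtime instead. A sketch, assuming containerd with crictl installed on the control-plane node:

```shell
# kubectl is unavailable while the API server is down, so read the
# container's logs straight from the runtime instead.
sudo crictl ps -a --name kube-apiserver   # find the (possibly exited) container
sudo crictl logs "$(sudo crictl ps -a --name kube-apiserver -q | head -n 1)"
```

The `-a` flag matters: a crash-looping API server container will show as Exited, and plain `crictl ps` hides it.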
Step 3 — Patching the API Server Manifest
The kube-apiserver is a static Pod at /etc/kubernetes/manifests/kube-apiserver.yaml. Backup first: sudo cp ... kube-apiserver.yaml.bak.
A — Startup Flags & The "Colon" Gotcha
The colon (:) is a reserved character in YAML. If a flag ends with one—like --oidc-groups-prefix=oidc:—the parser thinks it's a key-value separator and drops the flag. Wrap them in double quotes.
- command:
  - kube-apiserver
  # ── OIDC / KEYCLOAK ───────────────────────────────────────
  - --oidc-issuer-url=https://keycloak.mirecloud.com/auth/realms/mirecloud
  - --oidc-client-id=kubernetes
  - --oidc-username-claim=email
  - "--oidc-username-prefix=oidc:"
  - --oidc-groups-claim=groups
  - "--oidc-groups-prefix=oidc:"
  # ── AUDIT LOGGING ─────────────────────────────────────────
  - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
  - --audit-log-path=/var/log/kubernetes/audit/audit.log
  - --audit-log-maxage=30     # days to retain old log files
  - --audit-log-maxbackup=10  # rotated files to keep
  - --audit-log-maxsize=100   # megabytes before rotation
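You can see the colon gotcha for yourself without restarting anything. A minimal sketch, assuming python3 with PyYAML is available on the node, showing how a YAML parser treats the unquoted versus quoted flag:

```shell
# Unquoted, the trailing colon turns the list item into a one-key mapping
# and the flag is silently mangled; quoted, it survives as a plain string.
# (Assumes python3 + PyYAML are installed.)
python3 - <<'EOF'
import yaml

unquoted = yaml.safe_load("- --oidc-groups-prefix=oidc:")
quoted = yaml.safe_load('- "--oidc-groups-prefix=oidc:"')

print(type(unquoted[0]).__name__)  # the item became a mapping: flag swallowed
print(type(quoted[0]).__name__)    # still a string: flag intact
EOF
```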
B — Volume Mounts & Host Volumes
We need to mount the host paths so the API Server container can write the logs:
# Inside volumeMounts
- mountPath: /etc/kubernetes/audit-policy.yaml
  name: audit-policy
  readOnly: true
- mountPath: /var/log/kubernetes/audit
  name: audit-logs
# Inside volumes
- name: audit-policy
  hostPath:
    path: /etc/kubernetes/audit-policy.yaml
    type: FileOrCreate
- name: audit-logs
  hostPath:
    path: /var/log/kubernetes/audit
    type: DirectoryOrCreate
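Once the kubelet picks up the new manifest, every line in audit.log is one JSON event. A quick way to spot-check the shape with jq (assumed installed); the sample line below is hand-written but follows the real schema:

```shell
# Simulated audit line (field names match the audit.k8s.io/v1 Event schema).
# Against the live file you would run:
#   sudo tail -f /var/log/kubernetes/audit/audit.log | jq .
echo '{"kind":"Event","stage":"ResponseComplete","verb":"get","user":{"username":"oidc:dev-junior@mirecloud.com"},"objectRef":{"resource":"secrets","name":"db-creds"},"responseStatus":{"code":200}}' |
  jq -r '"\(.user.username) \(.verb) \(.objectRef.resource)/\(.objectRef.name) -> \(.responseStatus.code)"'
# -> oidc:dev-junior@mirecloud.com get secrets/db-creds -> 200
```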
Step 4 — Wire Promtail to Loki
I added a scrape job to Promtail. Crucial detail: Don't label the username. In a real cluster, this creates high cardinality and kills Loki's performance. Use structured_metadata instead.
- job_name: kubernetes-audit
  static_configs:
    - targets: [localhost]
      labels:
        job: kubernetes-audit
        cluster: mirecloud
        __path__: /var/log/kubernetes/audit/audit.log
  pipeline_stages:
    - json:
        expressions:
          username: user.username
          verb: verb
          resource: objectRef.resource
    - labels:
        verb:
        resource:
    - structured_metadata:
        username:
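The pipeline can be tested offline before restarting the DaemonSet. Promtail's dry-run mode prints what each stage extracts without shipping anything to Loki; the binary location and config filename here are assumptions for a local test:

```shell
# Feed a few real audit lines through the pipeline and inspect each stage's
# output. Nothing is sent to Loki in dry-run mode.
sudo tail -n 5 /var/log/kubernetes/audit/audit.log |
  promtail --stdin --dry-run --inspect --config.file=promtail-config.yaml
```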
Step 5 — Does It Actually Work? (The Proof)
Yes. By running {job="kubernetes-audit"} | json | user_username !~ "system:.*", I can filter out the system noise and see real human activity.
What this validates:
user.username: dev-junior@mirecloud.com proves OIDC email claims are working.
groups: oidc:k8s-viewers proves Keycloak group membership is flowing through.
stage: ResponseComplete proves my noise reduction policy is active.
Coverage: my Promtail JSON pipeline is extracting fields reliably.
Useful LogQL Queries
Secret Access Trace:
{job="kubernetes-audit", resource="secrets"} | json | line_format "{{.user_username}} touched {{.objectRef_name}}"

Unauthorized Access (401/403):
{job="kubernetes-audit"} | json | responseStatus_code =~ "40[13]"

Pod Exec Tracking:
{job="kubernetes-audit"} | json | requestURI =~ ".*/exec.*"
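Because verb is an indexed label, aggregations over it stay cheap. One more query worth keeping pinned, a sketch of per-user delete counts over the last day (user_username comes from the query-time json parse):

Delete Activity by User:

```logql
sum by (user_username) (
  count_over_time({job="kubernetes-audit", verb="delete"} | json [24h])
)
```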
MireCloud is a personal engineering project by Emmanuel-Steven. It represents a journey from bare-metal hardware to a fully hardened, enterprise-grade Kubernetes ecosystem. By documenting every struggle—from PEM encoding in PowerShell to Loki 429 ingestion limits—this series aims to bridge the gap between "it works on my machine" and "it's ready for production."
Get the YAML
All the configurations discussed in this post—including the Audit Policy, Promtail pipelines, and the Grafana values.yaml—are version-controlled and available in my GitHub repository.
GitHub Repo: