How I Replaced Kubernetes Static Credentials with Zero Trust OIDC — A Real Production Story
From admin.conf sprawl to Keycloak SSO, group-based RBAC, and centralized audit logging on bare metal. Every step, every trap, every fix.
1. The Problem with Static Credentials
If you have ever bootstrapped a Kubernetes cluster, you know the file. It lives at /etc/kubernetes/admin.conf and it contains a client certificate that grants its holder full cluster-admin access with no expiry, no identity, and no revocation mechanism. You copy it to ~/.kube/config, you maybe email it to a colleague, and suddenly three people are sharing the same all-powerful credential. This is what security engineers call a “loaded gun left on the kitchen table.”
The most insidious part is not the access itself — it is the invisibility. Open the Kubernetes audit log and you will see requests attributed to kubernetes-admin. Every person who ever touched that config file looks identical. Did a developer accidentally delete a production namespace? Was it an automated pipeline? Was it an attacker who grabbed the file from a misconfigured bucket six months ago? The audit log cannot tell you.
Real consequences I have seen in practice:
- Offboarding a contractor requires manually rotating cluster certificates, forcing control-plane restarts — an outage to revoke one person.
- Static kubeconfig files spread into repos, Docker images, CI secret stores, and laptops — each copy becomes its own attack surface.
- Compliance frameworks require named, revocable access. A shared cert fails those controls.
- If key material leaks, you won’t know until something breaks.
The solution is OpenID Connect (OIDC). Instead of distributing static certificates, every user authenticates to a central Identity Provider (Keycloak) and receives short-lived JSON Web Tokens (JWT). The API server validates tokens cryptographically and RBAC binds permissions to Keycloak groups. Revoking access becomes: remove a user from a Keycloak group — no certificate rotation, no downtime.
What follows is the exact implementation I built for the Mirecloud bare-metal cluster running Kubernetes v1.34.2, with Keycloak deployed via Helm and Cilium Gateway API as the ingress layer.
2. Architecture Overview
| Component | Role | Where it lives |
|---|---|---|
| Client Workstation | Runs kubectl + kubelogin to obtain tokens via browser login | Developer laptop (Windows / macOS / Linux) |
| Keycloak IDP | Identity provider; manages users/groups; issues signed JWTs | Inside cluster (Helm), exposed at keycloak.mirecloud.com |
| Control Plane (kube-apiserver) | Validates JWT via JWKS; enforces RBAC after authentication | Bare-metal control-plane node (node-4) |
| Observability Stack | Promtail ships audit logs to Loki; Grafana queries for investigations | Inside the Kubernetes cluster |
3. Authentication Flow
Here’s the OIDC Authorization Code flow as it happens when a developer runs kubectl get nodes after token expiry:
kubectl            kubelogin            Keycloak           kube-apiserver
   |                   |                    |                    |
   |--- get nodes ---->|                    |                    |
   |                   |--- open browser -->|                    |
   |                   |  (localhost:8000)  |                    |
   |                   |--- user login ---->|                    |
   |                   |<-- auth code ------|                    |
   |                   |--- code exchange ->|                    |
   |                   |<-- ID token -------|                    |
   |<-- token ---------|                    |                    |
   |--- bearer token ------------------------------------------->|
   |                   |                    |      verify JWT    |
   |                   |                    |    RBAC decision   |
   |<-- result --------------------------------------------------|
- kubectl invokes exec plugin: kubeconfig uses an exec entry pointing to kubelogin.
- kubelogin starts localhost callback: local HTTP server on :8000 + PKCE.
- User authenticates to Keycloak: standard browser login (password/TOTP/WebAuthn, etc.).
- Authorization code redirect: Keycloak redirects back to http://localhost:8000/.
- Token exchange: kubelogin swaps code for an ID Token (JWT) and access token.
- Bearer token to API server: kubectl attaches token in Authorization header.
- kube-apiserver validates JWT: signature (JWKS), issuer, expiry, audience, claims.
- RBAC decision: groups claim maps to ClusterRoleBindings.
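Steps 7 and 8 boil down to a handful of claim checks plus the prefixing that feeds RBAC. Here is a minimal Python sketch of that logic — not the real apiserver code, and signature/JWKS verification is omitted:

```python
import time

ISSUER = "https://keycloak.mirecloud.com/auth/realms/mirecloud"
CLIENT_ID = "kubernetes"  # must appear in the token's aud claim

def authenticate(claims: dict) -> tuple[str, list[str]]:
    """Mimic the apiserver's OIDC claim checks (signature verification omitted).
    Returns the prefixed username and groups that RBAC will evaluate."""
    if claims.get("iss") != ISSUER:
        raise PermissionError("issuer mismatch")
    aud = claims.get("aud")
    if CLIENT_ID not in (aud if isinstance(aud, list) else [aud]):
        raise PermissionError("audience mismatch")
    if claims.get("exp", 0) <= time.time():
        raise PermissionError("token expired")
    if not claims.get("email_verified", False):
        raise PermissionError("email not verified")  # the classic silent 401
    # --oidc-username-prefix and --oidc-groups-prefix applied here:
    username = "oidc:" + claims["email"]
    groups = ["oidc:" + g for g in claims.get("groups", [])]
    return username, groups

user, groups = authenticate({
    "iss": ISSUER, "aud": "kubernetes", "email": "info@mirecloud.com",
    "email_verified": True, "groups": ["k8s-admins"], "exp": time.time() + 300,
})
print(user, groups)  # oidc:info@mirecloud.com ['oidc:k8s-admins']
```

Note that a failure at any check surfaces to the client as a bare 401; only the apiserver log carries the specific reason.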
4. JWT Token Anatomy
A JWT is header.payload.signature. The payload claims are what Kubernetes evaluates for identity and RBAC.
| Part | Content | Used by kube-apiserver? |
|---|---|---|
| Header | Algorithm + kid (key ID) | Yes |
| Payload | Claims (issuer, audience, email, groups, exp, etc.) | Yes |
| Signature | RSA signature over header + payload | Yes |
An example payload from our realm:
{
"iss": "https://keycloak.mirecloud.com/auth/realms/mirecloud",
"sub": "user-uuid",
"email": "info@mirecloud.com",
"email_verified": true,
"groups": ["k8s-admins"],
"aud": "kubernetes",
"exp": 1740000000
}
Pay attention to the email_verified claim — it is the #1 silent failure. If email_verified is false, the API server rejects the token with a bare 401 Unauthorized; the reason appears only in the apiserver logs.
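Before blaming the API server, decode the token locally and look at the claims yourself. A small Python helper — inspection only, the signature is not verified, and the sample token below is fabricated for illustration:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature.
    Useful only for local inspection, never for trusting claims."""
    payload_b64 = token.split(".")[1]
    # JWTs use base64url without padding; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated token (header.payload.signature) for demonstration.
claims = {"email": "info@mirecloud.com", "email_verified": True, "groups": ["k8s-admins"]}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJSUzI1NiJ9.{payload}.sig"

decoded = decode_jwt_payload(token)
print(decoded["email_verified"], decoded["groups"])  # True ['k8s-admins']
```

If email_verified comes back false or the groups claim is missing, the problem is in Keycloak, not in the apiserver flags.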
5. Phase 1 — Keycloak Configuration
Step 1.1 — Create Realm: Create a realm named mirecloud. Issuer URL becomes:
https://keycloak.mirecloud.com/auth/realms/mirecloud
Step 1.2 — Create OIDC Client: Clients → Create client. Use:
| Field | Value | Why |
|---|---|---|
| Client type | OpenID Connect | Required |
| Client ID | kubernetes | Must match apiserver + kubelogin config |
| Client authentication | OFF | Public client (PKCE), no secret |
| Standard flow | ON | Authorization Code flow |
| Direct access grants | ON | Only for quick token testing |
| Valid redirect URIs | http://localhost:8000 http://localhost:18000 | kubelogin callbacks |
Step 1.3 — Create Users: set a valid email; set Email verified = ON; set password; set name fields for audit readability.
Step 1.4 — Create Groups: k8s-admins, k8s-developers, k8s-viewers. Add users to groups.
Step 1.5 — Group Membership Mapper: Clients → kubernetes → Client scopes → dedicated scope → Add mapper → Group Membership.
| Mapper Field | Value |
|---|---|
| Name | groups |
| Token Claim Name | groups |
| Full group path | OFF |
| Add to ID token | ON |
| Add to access token | ON |
6. Phase 2 — Kubernetes API Server Configuration
Edit the static Pod manifest: /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm). Backup first.
| Flag | Value | Purpose |
|---|---|---|
| --oidc-issuer-url | https://keycloak.mirecloud.com/auth/realms/mirecloud | Must match JWT iss exactly |
| --oidc-client-id | kubernetes | JWT audience must include this |
| --oidc-username-claim | email | Claim used as the Kubernetes username |
| --oidc-username-prefix | oidc: | Avoid collisions |
| --oidc-groups-claim | groups | RBAC group extraction |
| --oidc-groups-prefix | oidc: | RBAC safety prefix |
| --oidc-ca-file | /etc/kubernetes/pki/keycloak-ca.crt | Trust Keycloak TLS (private CA) |
spec:
containers:
- name: kube-apiserver
command:
- kube-apiserver
- --oidc-issuer-url=https://keycloak.mirecloud.com/auth/realms/mirecloud
- --oidc-client-id=kubernetes
- --oidc-username-claim=email
- "--oidc-username-prefix=oidc:"
- --oidc-groups-claim=groups
- "--oidc-groups-prefix=oidc:"
- --oidc-ca-file=/etc/kubernetes/pki/keycloak-ca.crt
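The CA file must actually be visible inside the apiserver container. On a kubeadm cluster, /etc/kubernetes/pki is usually already mounted via hostPath; if you keep the CA elsewhere, you need an extra mount. A sketch (the paths here are assumptions — adjust to your layout):

```yaml
    volumeMounts:
    - name: keycloak-ca
      mountPath: /etc/kubernetes/pki/keycloak-ca.crt
      readOnly: true
  volumes:
  - name: keycloak-ca
    hostPath:
      path: /etc/kubernetes/pki/keycloak-ca.crt
      type: File
```

A missing mount is one of the two root causes of the "silent 401" in the troubleshooting table below.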
7. Phase 3 — RBAC Authorization
After authentication, Kubernetes runs normal RBAC. With --oidc-groups-prefix=oidc:, a Keycloak group k8s-admins becomes oidc:k8s-admins to RBAC.
kubectl create clusterrolebinding oidc-admins \
  --clusterrole=cluster-admin \
  --group=oidc:k8s-admins

kubectl create clusterrolebinding oidc-developers \
  --clusterrole=edit \
  --group=oidc:k8s-developers

kubectl create clusterrolebinding oidc-viewers \
  --clusterrole=view \
  --group=oidc:k8s-viewers
| Keycloak Group | RBAC Subject | ClusterRole | Allowed |
|---|---|---|---|
| k8s-viewers | oidc:k8s-viewers | view | read-only on most resources |
| k8s-developers | oidc:k8s-developers | edit | manage workloads; no cluster-wide RBAC control |
| k8s-admins | oidc:k8s-admins | cluster-admin | full access |
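If you prefer declarative GitOps over imperative kubectl create, the same binding can be expressed as a manifest (sketch; one of the three bindings shown):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-admins
subjects:
- kind: Group
  name: oidc:k8s-admins      # prefixed group from the JWT groups claim
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```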
8. Phase 4 — Developer Workstation Setup
Developers need kubectl and the kubelogin plugin (v1.35.2 in this setup).
Windows (winget):
winget install int128.kubelogin
macOS (Homebrew):
brew install int128/kubelogin/kubelogin
Linux (manual):
curl -LO https://github.com/int128/kubelogin/releases/download/v1.35.2/kubelogin_linux_amd64.zip
unzip kubelogin_linux_amd64.zip
sudo mv kubelogin /usr/local/bin/kubectl-oidc_login
sudo chmod +x /usr/local/bin/kubectl-oidc_login
Configure kubeconfig with exec credentials:
kubectl config set-credentials oidc-user \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=https://keycloak.mirecloud.com/auth/realms/mirecloud \
  --exec-arg=--oidc-client-id=kubernetes \
  --exec-arg=--oidc-extra-scope=email \
  --exec-arg=--oidc-extra-scope=groups \
  --exec-arg=--certificate-authority=/path/to/keycloak-ca.crt

kubectl config set-context --current --user=oidc-user
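For reference, the users entry this command produces in the kubeconfig looks roughly like this (sketch; your CA path will differ):

```yaml
users:
- name: oidc-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://keycloak.mirecloud.com/auth/realms/mirecloud
      - --oidc-client-id=kubernetes
      - --oidc-extra-scope=email
      - --oidc-extra-scope=groups
      - --certificate-authority=/path/to/keycloak-ca.crt
```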
Verify the mapped identity:
kubectl auth whoami
To force a fresh login, clear kubelogin's token cache:
kubectl oidc-login clean \
  --oidc-issuer-url=https://keycloak.mirecloud.com/auth/realms/mirecloud \
  --oidc-client-id=kubernetes
9. Phase 5 — Host DNS Resolution
Host DNS requirement: kube-apiserver runs in host network namespace. If the host can’t resolve Keycloak, OIDC fails.
echo "192.168.2.204 keycloak.mirecloud.com" >> /etc/hosts
nc -zv keycloak.mirecloud.com 443
In our setup, the machines are already configured to use the DNS server 192.168.2.74.
10. Phase 6 — Audit Logging & Grafana
With OIDC, audit logs now show a real identity (e.g. oidc:info@mirecloud.com) instead of kubernetes-admin. Ship logs to Loki via Promtail and query in Grafana.
The audit policy:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
users:
- "oidc:*"
- level: Metadata
omitStages:
- RequestReceived
nonResourceURLs:
- "/api*"
- "/apis*"
- level: None
users:
- "system:kube-proxy"
- "system:kube-controller-manager"
- "system:kube-scheduler"
verbs:
- watch
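The policy only takes effect once the apiserver is told about it. The flags below go into kube-apiserver.yaml alongside the OIDC flags (the file paths are assumptions from my layout — the policy file and log directory must also be mounted into the pod):

```yaml
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
```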
The Promtail pipeline stages that parse the audit JSON into Loki labels:
pipeline_stages:
- json:
expressions:
user: user.username
verb: verb
resource: objectRef.resource
namespace: objectRef.namespace
name: objectRef.name
status: responseStatus.code
timestamp: requestReceivedTimestamp
- labels:
user:
verb:
resource:
namespace:
status:
- timestamp:
source: timestamp
format: RFC3339Nano
Example LogQL queries (all actions by one user; all mutating verbs; all failed requests):
{job="k8s-audit"} | json | user="oidc:info@mirecloud.com"
{job="k8s-audit"} | json | verb=~"delete|patch|update" | user=~"oidc:.*"
{job="k8s-audit"} | json | user=~"oidc:.*" | status=~"4..|5.."
11. Troubleshooting Reference
| Symptom | Root Cause | Fix |
|---|---|---|
| lookup keycloak.mirecloud.com: no such host | Host DNS can’t resolve Keycloak; apiserver can’t use CoreDNS | Add /etc/hosts entry on nodes; verify with nc -zv |
| OIDC discovery fails / resource not found | --oidc-issuer-url doesn’t match discovery issuer exactly | Compare discovery issuer character-for-character |
| Redirect URI is not valid (Keycloak) | Redirect URIs entered incorrectly (one line, spaces) | Add each URI on its own line (press Enter) |
| unauthorized_client (Keycloak) | Client ID mismatch or Standard flow disabled | Fix client ID; enable Standard flow |
| state does not match | Old login link / wrong session / tunnel dropped | Clear cache; redo login; keep tunnel stable |
| Silent 401 Unauthorized | Inline YAML comment corrupted value OR CA not mounted | Move comments above; verify CA volume mount & file presence |
| email not verified in apiserver logs | Keycloak user has Email Verified OFF | Toggle ON; clear cache; re-login |
| xdg-open not found on headless server | No GUI browser installed | Use SSH tunnel and open localhost URL on workstation |
| Works once, then 401 later | Token expired; refresh token expired/revoked | Clear cache; login again; tune realm token lifetime if needed |
12. End-to-End Checklist
- Realm mirecloud created; issuer matches apiserver flag
- Client kubernetes: public (no secret), Standard Flow ON, redirect URIs correct
- Users have Email Verified = ON
- Users are in Keycloak groups (k8s-admins/k8s-developers/k8s-viewers)
- Group mapper configured; Full group path OFF; claim name groups
- JWT payload verified: email_verified=true and groups without leading slash
- Apiserver has all --oidc-* flags; no inline comments; colons quoted
- Keycloak CA present and mounted at /etc/kubernetes/pki/keycloak-ca.crt
- ClusterRoleBindings created for oidc:k8s-* groups
- kubectl auth whoami shows correct username + groups
- Audit log shipped to Loki; Grafana shows named user actions