SSO the Hard Way: Deploying Keycloak on Bare-Metal Kubernetes
(Part 2)
Production-grade identity infrastructure: Vault secrets, clustered Keycloak, Gateway API, and zero credentials in Git.
Overview
Part 1 established the foundation: HashiCorp Vault as the single source of truth for credentials, External Secrets Operator bridging Vault into Kubernetes-native Secrets, cert-manager automating TLS certificate lifecycle, and ArgoCD deploying everything declaratively from Git.
Part 2 builds the identity layer on top of that foundation: Keycloak — an open-source identity and access management solution deployed as a production-grade, 2-replica cluster with PostgreSQL persistence, every credential sourced from Vault, and exposed via the Cilium Gateway API with automatic TLS.
By the end of this article, you will have:
- A highly available Keycloak cluster with distributed session state via Infinispan
- PostgreSQL backend for persistent storage
- Admin and database credentials managed entirely through Vault
- TLS certificates issued and renewed automatically by cert-manager
- External access via Cilium Gateway API with proper proxy header handling
- Zero secrets visible in Git — complete GitOps compliance
Figure 1: Complete deployment flow from Git to production Keycloak cluster
Why Keycloak?
Keycloak provides enterprise-grade identity and access management with support for:
- OpenID Connect (OIDC) and SAML 2.0 protocols
- User Federation with LDAP, Active Directory, Kerberos
- Social Login (Google, GitHub, etc.)
- Multi-factor Authentication (TOTP, WebAuthn)
- Fine-grained Authorization with role-based and attribute-based access control
- Centralized Session Management across multiple applications
In a homelab or small enterprise environment, Keycloak eliminates the need to manage separate user databases in every application. Every service delegates authentication to Keycloak. Add a user once, grant them access to multiple services through realm roles. Revoke access in one place when they leave.
Prerequisites
The following components must be operational before proceeding:
- Vault initialized, unsealed, and Kubernetes auth configured
- ClusterSecretStore named vault-backend in Valid state
- cert-manager operational with ClusterIssuer mirecloud-ca-issuer in Ready state
- ArgoCD connected to the repository
Verify:
kubectl get clustersecretstore vault-backend
# NAME AGE STATUS CAPABILITIES READY
# vault-backend 2d Valid ReadWrite True
kubectl get clusterissuer mirecloud-ca-issuer
# NAME READY AGE
# mirecloud-ca-issuer True 2d
If either of these is not ready, return to Part 1.
Deploying PostgreSQL
Keycloak requires a relational database for persistent storage of realms, users, sessions, and configuration. The embedded H2 database is explicitly unsupported in clustered deployments and should never be used outside of local development.
PostgreSQL is deployed as a StatefulSet in a dedicated namespace:
helm install postgres oci://registry-1.docker.io/cloudpirates/postgres \
-n postgres --create-namespace
The chart generates a random password on first install and stores it in a Kubernetes Secret. This is the one time this credential is handled manually:
# Retrieve the generated password
kubectl -n postgres get secret postgres \
-o jsonpath='{.data.postgres-password}' | base64 -d
# Store it in Vault immediately
kubectl -n vault exec -ti vault-0 -- vault kv put secret/keycloak/db \
password='<retrieved-password>'
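The retrieval step works because Kubernetes stores Secret values base64-encoded: the jsonpath output is the encoded form, so a decode is always required before the value is usable. A minimal round-trip illustration with a made-up password:

```shell
# Kubernetes Secret data is base64-encoded; jsonpath returns the encoded
# form, so the decode step above is mandatory. Round-trip with a dummy value:
encoded=$(printf 'S3cr3tP4ss' | base64)
printf '%s' "$encoded" | base64 -d   # prints the original value back
```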
The internal service endpoint used throughout this deployment: postgres.postgres.svc:5432.
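For reference, Keycloak assembles a standard PostgreSQL JDBC URL from the vendor, hostname, port, and database settings. A quick sketch of the URL that results from this endpoint (the format is the standard `jdbc:postgresql://` scheme; the values are the ones used in this deployment):

```shell
# The JDBC URL Keycloak derives from the database settings used here.
host="postgres.postgres.svc"
port=5432
db="postgres"
jdbc_url="jdbc:postgresql://${host}:${port}/${db}"
echo "$jdbc_url"
```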
Seeding Vault
Every sensitive value is stored in Vault before any Kubernetes manifest is applied. This is the contract the entire pipeline depends on.
# Keycloak admin account
kubectl -n vault exec -ti vault-0 -- vault kv put secret/keycloak/admin \
password='StrongAdminPassword'
# Keycloak database credentials
kubectl -n vault exec -ti vault-0 -- vault kv put secret/keycloak/db \
password='DbPassword'
Verification:
kubectl -n vault exec -ti vault-0 -- vault kv list secret/keycloak
# Keys
# ----
# admin
# db
Two paths. Two credentials. Zero Git commits.
Syncing Secrets with ExternalSecrets
The ClusterSecretStore from Part 1 is already in place. Two ExternalSecret resources declare which Vault paths to sync and which Kubernetes Secret objects to create.
apps/keycloak/templates/external-secrets.yaml:
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: keycloak-admin-es
namespace: keycloak
spec:
refreshInterval: 1m
secretStoreRef:
name: vault-backend
kind: ClusterSecretStore
target:
name: keycloak-admin-password
creationPolicy: Owner
data:
- secretKey: password
remoteRef:
key: secret/keycloak/admin
property: password
Note the remoteRef.key path: ESO handles KV v2 path construction internally, inserting /data/ after the mount name when calling the Vault API. If you include /data/ in the key yourself, the request path ends up with a doubled data segment (secret/data/data/keycloak/admin), and Vault returns a 404. The key should be the logical Vault path: secret/keycloak/admin.

The second ExternalSecret covers the database credentials:
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: keycloak-db-es
namespace: keycloak
spec:
refreshInterval: 1m
secretStoreRef:
name: vault-backend
kind: ClusterSecretStore
target:
name: byo-db-creds
creationPolicy: Owner
data:
- secretKey: password
remoteRef:
key: secret/keycloak/db
property: password
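The KV v2 path construction described earlier can be sketched as plain string splicing: the mount name stays first and /data/ is inserted after it (this assumes a KV v2 engine mounted at secret, as in this setup):

```shell
# Sketch of the KV v2 API path ESO builds from the logical key.
logical="secret/keycloak/admin"
mount="${logical%%/*}"              # mount name: secret
rest="${logical#*/}"                # remainder: keycloak/admin
api_path="${mount}/data/${rest}"    # secret/data/keycloak/admin
echo "$api_path"
```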
Verify:
kubectl get externalsecret -n keycloak
# NAME STATUS READY
# keycloak-admin-es SecretSynced True
# keycloak-db-es SecretSynced True
The Wrapper Chart
apps/keycloak/Chart.yaml:
apiVersion: v2
name: keycloak-wrapper
type: application
version: 1.0.0
dependencies:
- name: keycloakx
repository: "oci://ghcr.io/codecentric/helm-charts"
version: "7.1.5"
The upstream chart is declared as a dependency. ArgoCD deploys the wrapper. No direct helm install calls — everything is declarative.
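For illustration, the ArgoCD Application pointing at this wrapper chart might look like the following sketch. The project, target revision, and sync policy are assumptions, not values taken from the article; only the path and namespace follow from the text:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: keycloak
  namespace: argocd
spec:
  project: default                                   # assumed project
  source:
    repoURL: https://github.com/mirecloud/home_lab   # repo from the article footer
    targetRevision: main                             # assumed branch
    path: apps/keycloak
  destination:
    server: https://kubernetes.default.svc
    namespace: keycloak
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```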
apps/keycloak/values.yaml:
keycloakx:
command:
- "/opt/keycloak/bin/kc.sh"
- "start"
- "--http-enabled=true"
- "--http-port=8080"
- "--hostname-strict=false"
- "--proxy-headers=xforwarded"
replicas: 2
extraEnv: |
- name: KEYCLOAK_ADMIN
value: admin
- name: KEYCLOAK_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: keycloak-admin-password
key: password
- name: KC_PROXY
value: edge
dbchecker:
enabled: true
database:
vendor: postgres
hostname: postgres.postgres.svc
port: 5432
database: postgres
username: postgres
existingSecret: byo-db-creds
service:
type: LoadBalancer
Configuration Rationale
- --proxy-headers=xforwarded instructs Keycloak to respect the X-Forwarded-For and X-Forwarded-Proto headers injected by the upstream Gateway. Without this flag, Keycloak ignores the forwarded headers and constructs redirect URIs based on what it sees directly, which is plain HTTP on port 8080. The resulting OIDC redirect URIs use http:// instead of https://, causing the authorization code callback to fail at the browser.
- KC_PROXY=edge is the complementary setting. It tells Keycloak it operates behind a TLS-terminating reverse proxy and should accept forwarded headers as authoritative. These two flags are paired: neither is sufficient without the other when TLS is terminated at the Gateway.
- dbchecker.enabled: true adds an init container that waits for PostgreSQL to respond before Keycloak starts. Without it, a PostgreSQL restart during cluster initialization causes Keycloak to enter CrashLoopBackOff. The init container eliminates the race condition.
- existingSecret: byo-db-creds references the Secret created by ESO. The name must match the target.name in the ExternalSecret exactly. No password is written anywhere in this values file.
Exposure via Gateway API
This deployment uses the Kubernetes Gateway API rather than the legacy Ingress resource. Gateway API provides cleaner separation between infrastructure concerns (Gateway, GatewayClass) and application routing concerns (HTTPRoute).
Three objects are required:
1. Certificate (apps/keycloak/templates/ingress.yaml):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: keycloak-tls-cert
namespace: keycloak
spec:
secretName: keycloak-tls-secret
issuerRef:
name: mirecloud-ca-issuer
kind: ClusterIssuer
commonName: keycloak.mirecloud.com
dnsNames:
- keycloak.mirecloud.com
2. Gateway:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: mirecloud-gateway
namespace: keycloak
spec:
gatewayClassName: cilium
listeners:
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: Same
- name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: keycloak-tls-secret
allowedRoutes:
namespaces:
from: Same
3. HTTPRoute:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: keycloak-route
namespace: keycloak
spec:
parentRefs:
- name: mirecloud-gateway
hostnames:
- "keycloak.mirecloud.com"
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: keycloak-keycloakx-http
port: 80
Object Responsibilities
| Object | Managed By | Responsibility |
|---|---|---|
Certificate | cert-manager | Issues and renews TLS certificate from internal CA |
Gateway | Cilium | Binds a LoadBalancer IP, terminates TLS, forwards HTTP internally |
HTTPRoute | Cilium | Maps keycloak.mirecloud.com to the Keycloak service |
The traffic path is: Client → HTTPS:443 (Cilium Gateway, IP 192.168.2.204) → HTTP:80 (keycloak-keycloakx-http) → HTTP:8080 (pod).
TLS is terminated at the Gateway. Keycloak receives plain HTTP, which is why --http-enabled=true and KC_PROXY=edge are required in the pod configuration.
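A toy illustration (plain shell, not Keycloak internals) of why the forwarded headers matter: with --proxy-headers=xforwarded, the scheme and host in generated URIs come from the X-Forwarded-* headers; without it, Keycloak can only use what its own listener sees, plain HTTP on 8080. Header values here mimic what the Gateway would inject; the pod IP is made up:

```shell
# Toy model of redirect-URI construction behind a TLS-terminating proxy.
xf_proto="https"                     # X-Forwarded-Proto injected by the Gateway
xf_host="keycloak.mirecloud.com"     # X-Forwarded-Host injected by the Gateway
listener="http://10.0.0.12:8080"     # what the pod observes directly (made-up IP)

with_headers="${xf_proto}://${xf_host}/realms/master/protocol/openid-connect/auth"
without_headers="${listener}/realms/master/protocol/openid-connect/auth"

echo "$with_headers"      # https URI -> browser callback succeeds
echo "$without_headers"   # http URI on 8080 -> browser callback fails
```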
Distributed Sessions: What Infinispan Provides
With replicas: 2, the keycloakx chart automatically configures Keycloak nodes to form a distributed Infinispan cache cluster. Pod discovery uses the headless Kubernetes service, which exposes individual pod addresses directly rather than load-balancing across them.
The clustering lifecycle is visible in the Keycloak logs:
ISPN100002: Starting rebalance with members
[keycloak-keycloakx-0-35812, keycloak-keycloakx-1-53189],
phase READ_OLD_WRITE_ALL, topology id 2
ISPN100010: Finished rebalance with members
[keycloak-keycloakx-0-35812, keycloak-keycloakx-1-53189],
topology id 5
This rebalance occurs for each cache (sessions, work, authenticationSessions, clientSessions, etc.) as the second pod joins. Once complete, both nodes share distributed session state. A user authenticated through pod-0 can have their request served by pod-1 without being prompted to log in again.
Security Posture
At the completion of this deployment:
- No credential appears in Git in any form: no base64, no Helm --set flags, no inline stringData
- Vault is the authoritative source for every sensitive value in the cluster
- TLS is enforced on all external endpoints, certificates issued and renewed automatically by cert-manager
- ExternalSecrets refresh every 60 seconds — a rotated Vault secret propagates to Kubernetes within one minute
- Session state is distributed across Keycloak replicas — no single point of failure
- PostgreSQL credentials are isolated to the Keycloak namespace and managed through ESO
What’s Next: Part 3
Keycloak is now operational as a standalone identity server. The next step is integrating it with an actual application to provide SSO.
Part 3 will cover the complete OIDC integration with Grafana, including:
- The OpenID Connect Authorization Code Flow (with diagram)
- Front-channel vs. back-channel URL configuration
- Client secret management via Vault and ESO
- Role mapping from Keycloak realm roles to Grafana permissions
- Eliminating the Grafana native login form entirely
The complete repository is available at github.com/mirecloud/home_lab.
Emmanuel Catin — Senior Platform Engineer | Kubernetes, GitOps, Zero Trust
CKA | CKS in preparation | Montréal, QC
#Kubernetes #Keycloak #GitOps #Vault #ExternalSecrets #CiliumGateway #GatewayAPI #PostgreSQL #Infinispan #DevSecOps #HomeLab #PlatformEngineering #ZeroTrust