

MireCloud Series · Homelab

OIDC the Hard Way: Integrating Grafana with Keycloak (Part 3)

Eliminating password databases: OpenID Connect, front-channel vs. back-channel, role mapping, and the end of local authentication.

📅 Homelab Series ⏱ ~10 min read 🔧 Grafana & Keycloak OIDC ☸ Kubernetes
Kubernetes Grafana Keycloak OpenID Connect (OIDC) HashiCorp Vault Cilium Gateway API

Overview

Parts 1 and 2 built the foundation: Vault manages all credentials, External Secrets Operator bridges them into Kubernetes, cert-manager automates TLS, and Keycloak runs as a production-grade identity provider.


Part 3 is where that infrastructure proves its value: integrating Grafana with Keycloak via OpenID Connect to eliminate Grafana's native login form entirely. By the end, there is no Grafana password database. No local admin account. Every login redirects to Keycloak, authenticates against the central identity layer, and maps realm roles to Grafana permissions automatically.

The deliverables:

  • Understanding the OIDC Authorization Code Flow
  • Configuring Keycloak as an Identity Provider (IdP)
  • Pulling the Grafana OIDC secret securely via Vault
  • Configuring Grafana via Helm (Front-channel vs. Back-channel URLs)
  • Exposing Grafana via the Cilium Gateway API
  • ArgoCD deployment techniques for Prometheus CRDs

A Primer on OpenID Connect

Before diving into YAML, it is worth understanding what OpenID Connect actually does — because every configuration decision that follows is a direct consequence of how the protocol works.

The Authorization Code Flow

This is the flow used by Grafana when a user attempts to log in. Notice the strict separation between what happens in the user's browser vs. what happens securely inside the cluster:

          [ 👤 User Browser ]         [ 📊 Grafana (RP) ]       [ 🔑 Keycloak (IdP) ]
                  │                           │                           │
                  │── 1. GET / ──────────────▶│                           │
                  │◀── 2. 302 Redirect ───────│                           │
                  │    (with auth_url)        │                           │
                  │                           │                           │
FRONT             │── 3. GET /auth/realms/... ───────────────────────────▶│
CHANNEL           │◀── 4. Login Form rendered ────────────────────────────│
                  │── 5. POST Credentials ───────────────────────────────▶│
                  │◀── 6. 302 Redirect ───────────────────────────────────│
                  │    (with code=AUTH_CODE)  │                           │
                  │                           │                           │
                  │── 7. GET /login/...code= ▶│                           │
                  │                           │                           │
                  │                           │── 8. POST /token ────────▶│
BACK              │                           │◀── 9. tokens (jwt) ───────│
CHANNEL           │                           │                           │
(Internal         │                           │── 10. GET /userinfo ─────▶│
 Network)         │                           │◀── 11. user claims ───────│
                  │                           │                           │
                  │◀── 12. Set grafana_session│                           │

Front-Channel vs. Back-Channel

The diagram reveals a critical distinction that most tutorials ignore:

Front-channel calls travel through the user's browser as HTTP redirects. The auth_url is a front-channel URL — the browser navigates to it directly. It must be publicly reachable: https://keycloak.mirecloud.com/...

Back-channel calls are made directly between Grafana's pod and Keycloak's pod, inside the Kubernetes cluster. The browser is not involved. These are the token exchange (token_url) and user info (api_url) calls.

💡 This is why token_url in the Grafana configuration uses the internal Kubernetes service DNS name (keycloak-keycloakx-http.keycloak.svc.cluster.local) rather than the public hostname. Doing so avoids unnecessary hairpin routing and DNS resolution issues within the cluster.
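Steps 1–2 of the flow boil down to Grafana building a front-channel redirect URL and sending the browser there. A minimal sketch with Python's standard library, using the hostnames from this post — the `state` value is a hypothetical stand-in for the per-login CSRF token Grafana generates:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Front-channel endpoint: must be the PUBLIC hostname, since the
# browser navigates to it directly.
AUTH_URL = "https://keycloak.mirecloud.com/auth/realms/mirecloud/protocol/openid-connect/auth"

def build_auth_redirect(state: str) -> str:
    params = {
        "client_id": "grafana",
        "redirect_uri": "https://grafana.mirecloud.com/login/generic_oauth",
        "response_type": "code",          # Authorization Code Flow
        "scope": "openid profile email",
        "state": state,                   # echoed back in step 6
    }
    return f"{AUTH_URL}?{urlencode(params)}"

url = build_auth_redirect("abc123")       # "abc123" is illustrative only
qs = parse_qs(urlparse(url).query)
print(qs["response_type"])                # ['code']
```

Note that the `redirect_uri` must match the Valid redirect URI registered in Keycloak, or step 6 fails with an invalid-redirect error.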
1. Configure Keycloak (One-Time Setup)

Create a Realm

Navigate to the Keycloak admin console → Create Realm.

  • Realm name: mirecloud

Create a Client for Grafana

Inside the mirecloud realm, navigate to ClientsCreate Client.

  • Client type: OpenID Connect
  • Client ID: grafana
  • Client authentication: ON
  • Valid redirect URIs: https://grafana.mirecloud.com/login/generic_oauth

Retrieve the Client Secret

Navigate to ClientsgrafanaCredentials tab. Copy the Client Secret and securely store it in Vault:

bash
kubectl -n vault exec -ti vault-0 -- vault kv put secret/grafana/sso \
  client_secret='<client-secret-from-keycloak-ui>'
2. ExternalSecret for Grafana

We use External Secrets Operator to securely pull the client secret from Vault into a Kubernetes Secret that Grafana can consume.

grafana-secret.yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: grafana-keycloak-es
  namespace: monitoring 
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: grafana-keycloak-secret 
  data:
  - secretKey: client_secret     
    remoteRef:
      key: secret/grafana/sso
      property: client_secret
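The mapping this manifest asks for can be pictured as a small transform: read `property` out of the Vault document, publish it under `secretKey` in the target Secret, base64-encoded as Kubernetes stores it. A toy illustration of that transform — not ESO's actual code:

```python
import base64

def render_secret(vault_doc: dict, data_mappings: list[dict]) -> dict:
    """Toy model of an ExternalSecret sync: Vault document -> K8s Secret."""
    encoded = {}
    for m in data_mappings:
        value = vault_doc[m["property"]]
        # Kubernetes Secrets store values base64-encoded
        encoded[m["secretKey"]] = base64.b64encode(value.encode()).decode()
    return {"metadata": {"name": "grafana-keycloak-secret"}, "data": encoded}

vault_doc = {"client_secret": "s3cr3t"}   # stand-in for secret/grafana/sso
mappings = [{"secretKey": "client_secret", "property": "client_secret"}]
secret = render_secret(vault_doc, mappings)
print(base64.b64decode(secret["data"]["client_secret"]).decode())  # s3cr3t
```

With `refreshInterval: 1m`, ESO re-runs this sync every minute, so rotating the secret in Vault propagates to the cluster without manual steps.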
3. The Prometheus-Stack Wrapper Chart

To deploy Grafana alongside Prometheus, we use the standard GitOps approach: a wrapper chart. This allows us to combine the official community chart with our own custom configurations.

Chart.yaml
apiVersion: v2
name: prometheus-stack
description: Wrapper chart for Prometheus Stack
type: application
version: 1.0.0
appVersion: "1.0.0"
dependencies:
  - name: kube-prometheus-stack
    version: 80.4.2
    repository: https://prometheus-community.github.io/helm-charts

Next, we inject the OIDC configuration specifically into the grafana.ini block of the values.yaml:

values.yaml
kube-prometheus-stack:
  grafana:
    enabled: true
    # Load the client_secret from our ESO-generated secret
    envFromSecret: grafana-keycloak-secret

    grafana.ini:
      server:
        domain: grafana.mirecloud.com
        root_url: "https://grafana.mirecloud.com"
        serve_from_sub_path: false

      auth:
        disable_login_form: false
        oauth_auto_login: false

      auth.generic_oauth:
        enabled: true
        name: "Keycloak"
        tls_skip_verify_insecure: true
        client_id: "grafana"
        client_secret: $__env{client_secret}

        # 1. PUBLIC URL (Front-channel) - Browser Redirect
        auth_url: "https://keycloak.mirecloud.com/auth/realms/mirecloud/protocol/openid-connect/auth"

        # 2. INTERNAL URLs (Back-channel) - Pod to Pod
        token_url: "http://keycloak-keycloakx-http.keycloak.svc.cluster.local:80/auth/realms/mirecloud/protocol/openid-connect/token"
        api_url: "http://keycloak-keycloakx-http.keycloak.svc.cluster.local:80/auth/realms/mirecloud/protocol/openid-connect/userinfo"

        scopes: "openid profile email"
        allow_sign_up: true
        # Map Keycloak 'admin' role to Grafana 'Admin'
        role_attribute_path: "contains(realm_access.roles[*], 'admin') && 'Admin' || 'Viewer'"

    # Disable standard ingress, we will use Cilium Gateway API
    ingress:
      enabled: false
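The `role_attribute_path` expression deserves a closer look: JMESPath's `&&`/`||` operators make it a conditional, and a missing or empty `realm_access` claim falls through to `'Viewer'`. A plain-Python equivalent of that decision logic (not Grafana's actual evaluator):

```python
def map_grafana_role(claims: dict) -> str:
    """Equivalent of: contains(realm_access.roles[*], 'admin') && 'Admin' || 'Viewer'"""
    roles = claims.get("realm_access", {}).get("roles", [])
    return "Admin" if "admin" in roles else "Viewer"

print(map_grafana_role({"realm_access": {"roles": ["admin", "user"]}}))  # Admin
print(map_grafana_role({"realm_access": {"roles": ["user"]}}))           # Viewer
print(map_grafana_role({}))  # Viewer — a missing claim fails safe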
4. Exposing Grafana with Cilium Gateway API

Since we disabled the default ingress in the values.yaml, we define our routing using the modern Gateway API standard. This handles TLS termination via cert-manager and L7 routing via Cilium.

gateway.yaml
# 1. Certificate Request
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grafana-tls-cert
  namespace: monitoring
spec:
  secretName: grafana-tls-secret
  issuerRef:
    name: mirecloud-ca-issuer
    kind: ClusterIssuer
  commonName: grafana.mirecloud.com
  dnsNames:
  - grafana.mirecloud.com
---
# 2. Gateway Definition
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  gatewayClassName: cilium
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Same
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: grafana-tls-secret
    allowedRoutes:
      namespaces:
        from: Same
---
# 3. HTTPRoute to the Grafana Service
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: grafana-route
  namespace: monitoring
spec:
  parentRefs:
  - name: grafana-gateway
  hostnames:
  - "grafana.mirecloud.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: prometheus-stack-grafana 
      port: 80
5. Deploying with ArgoCD (The ServerSideApply Trick)

When deploying the kube-prometheus-stack via ArgoCD, the Prometheus Custom Resource Definitions (CRDs) routinely break client-side apply: `kubectl apply` stores the full manifest in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and these CRDs exceed Kubernetes' 262,144-byte annotation size limit, causing synchronization failures.

We solve this directly in our ArgoCD Application manifest by enabling ServerSideApply=true:

prometheus-stack-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: "git@github.com:mirecloud/home_lab.git"
    targetRevision: HEAD
    path: infrastructure/monitoring/prometheus-stack
  destination:
    server: "https://kubernetes.default.svc"
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true  # Critical for Prometheus Stack CRDs
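The size check that client-side apply trips over can be sketched in a few lines — the limit is Kubernetes' `TotalAnnotationSizeLimitB` constant, and the CRD size below is an approximate, illustrative figure:

```python
# Server-side apply tracks field ownership in the API server instead of
# writing a last-applied-configuration annotation, so it has no such limit.
ANNOTATION_SIZE_LIMIT = 262144  # bytes; Kubernetes TotalAnnotationSizeLimitB

def fits_client_side_apply(manifest_bytes: int) -> bool:
    """Would the manifest fit in the last-applied-configuration annotation?"""
    return manifest_bytes <= ANNOTATION_SIZE_LIMIT

print(fits_client_side_apply(4_000))      # a small Deployment: True
print(fits_client_side_apply(1_500_000))  # a large Prometheus CRD (approx.): False
```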

Test the OIDC Flow

  1. Navigate to https://grafana.mirecloud.com.
  2. Click Sign in with Keycloak.
  3. The browser redirects to Keycloak. Enter credentials for a user in the mirecloud realm.
  4. Grafana exchanges the authorization code for tokens (back-channel, invisible to you) and creates a session.
  5. You land on the Grafana dashboard. Your role (Admin or Viewer) is automatically determined by the admin realm role assignment via JMESPath mapping.
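The "tokens (jwt)" from step 8–9 are just base64url-encoded JSON segments, which is what makes the role mapping possible at all. A minimal decoder using only the standard library — the token below is fabricated for illustration, with an unverified signature:

```python
import base64, json

def jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment (no signature verification)."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore base64 padding stripped by JWT encoding
    return json.loads(base64.urlsafe_b64decode(seg))

# Fabricated sample claims, mimicking what Keycloak would issue
claims = {"preferred_username": "emmanuel", "realm_access": {"roles": ["admin"]}}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake_jwt = f"eyJhbGciOiJSUzI1NiJ9.{body}.sig"  # header.payload.signature

print(jwt_payload(fake_jwt)["realm_access"]["roles"])  # ['admin']
```

In production, Grafana validates the signature against Keycloak's JWKS before trusting any claim; never skip that step in real code.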

Security Posture Reached

  • No Grafana password database — all authentication delegated to Keycloak.
  • Client secret managed through Vault and ESO — never visible in Git.
  • OIDC tokens transmitted securely — TLS on the front-channel, cluster-internal networking on the back-channel.
  • Role assignment driven by Keycloak realm roles — access control changes do not require Grafana restarts.

What's Next: Part 4

Part 4 will cover automating DNS management with ExternalDNS, BIND9 (RFC2136), and HashiCorp Vault. Say goodbye to manually adding 'A' records every time you deploy a new Gateway API route!

The complete repository is available at github.com/mirecloud/home_lab.

Emmanuel Catin — Senior Platform Engineer | Kubernetes, GitOps, Zero Trust
CKA (90%) | CKS in preparation | Montréal, QC

#Kubernetes #OIDC #Keycloak #Grafana #SSO #GitOps #Vault #ExternalSecrets #GatewayAPI #Cilium #Prometheus #HomeLab #PlatformEngineering #ZeroTrust
