
Kubernetes Secrets: Risks, Reality, and Secure Design - MireCloud Homelab Part 7

MireCloud Series · Part 7 · DevSecOps

Your Kubernetes Secrets Are Not Safe. Here's the Forensic Proof — and the Fix.

Why your Vault + ESO setup leaks plaintext credentials into etcd, and how to close the gap with Encryption at Rest and the Vault CSI Driver.

Zero Trust · ⏱ ~15 min read · ☸ Kubernetes v1.34 · Reproducible lab

Stack: HashiCorp Vault · External Secrets Operator · Secrets Store CSI Driver · AES-CBC / AES-GCM / KMS v2 · etcdctl · ArgoCD · Cilium Gateway API
▸ TL;DR

If you run HashiCorp Vault with External Secrets Operator on Kubernetes, your application secrets are stored as plaintext bytes in etcd. Any operator with kubectl get secrets, any backup of your control plane, or any disk that ever held an etcd snapshot can recover them in seconds.

This post proves it with a hexdump from a live cluster, then closes the gap with two layered controls:

  1. Encryption at Rest to protect against disk-level theft (Adversary A1)
  2. Vault CSI Driver to eliminate the Secret object entirely (Adversary A2)

Reproducible lab. Real bytes. No hand-waving.

The Question That Started This

A friend who runs his own homelab asked me a simple question:

"If someone gets access to your cluster, can they read your secrets?"

I said no. I use Vault. My credentials are in an audited, access-controlled secret store.

Then I thought about it for one more second.

The honest answer was: yes, they can.

Because the moment External Secrets Operator (ESO) writes a secret into Kubernetes — which is the entire point of ESO — that secret is no longer protected by Vault. It's protected by whatever encryption and RBAC posture the cluster has, which in most clusters means no encryption and permissive RBAC.

I was pulling secrets out of a vault and dumping them into an unlocked drawer.

This article documents the gap with forensic evidence from MireCloud's etcd, and the architecture that closes it.

Context for new readers: MireCloud is my bare-metal Kubernetes homelab. The Vault + ESO + cert-manager pipeline used throughout this article was built in Part 1 — I Was kubectl apply-ing Everything. Here's How I Stopped. — the GitOps foundation that every subsequent article builds on. If terms like ClusterSecretStore, vault-backend, or mirecloud-ca-issuer look unfamiliar, start there.
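For concreteness, the ESO pattern in question looks roughly like this. A hedged sketch: the ClusterSecretStore name (vault-backend) comes from Part 1; the secret path, key names, and apiVersion are illustrative and should be checked against your installed CRDs.

yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-db-password
  namespace: test-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: vault-backend            # the Part 1 store
  target:
    name: my-db-password           # the Kubernetes Secret ESO writes, straight into etcd
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/test/config
        property: password

The target.name Secret is the artifact this whole article is about: it lands in etcd exactly like the kubectl-created one in Part 2 below.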

Part 1 — The Threat Model

Before discussing controls, name the threats.

| Adversary | Access Vector | Defeated by |
|---|---|---|
| A1 — External attacker | Stolen disk, exfiltrated backup, compromised node filesystem | Encryption at Rest |
| A2 — Insider with read RBAC | kubectl get secrets in one or more namespaces | Vault CSI Driver |
| A3 — Compromised workload | Pod exec, sidecar, mounted ServiceAccount token | Vault dynamic secrets (Part 8) |
| A4 — Cluster admin | Full API access | Out of scope (trusted root) |

This article covers A1 and A2. A3 — what happens when the workload itself is compromised — gets its own article (Part 8), because it's a rich enough topic to deserve a dedicated treatment with Vault dynamic secrets, lease management, and runtime detection.

A4 is the trusted root. No technical control defeats a compromised cluster admin; that's an organizational problem, not a Kubernetes one.

▸ THREAT MODEL (diagram)
Figure 1. Four adversaries, three in scope. A1 and A2 get fixed in this article. A3 is the subject of Part 8. A4 has no technical answer.

Part 2 — The Forensic Proof: Reading Plaintext from etcd

▸ BEFORE · PLAINTEXT LEAK TO ETCD (diagram)
Figure 2. The current data flow. The Secret is encrypted inside Vault, but the moment ESO writes it into Kubernetes, it lands in etcd as plaintext bytes.

Set up the lab namespace:

bash
kubectl create namespace test-secret

Create a secret the same way every MireCloud workload does — Keycloak admin password, GitLab database credential, PgAdmin admin password all follow this exact pattern:

bash
kubectl -n test-secret create secret generic my-db-password \
  --from-literal=password='MireCloud2026!'

Read it back through the API. The data is base64-encoded, which decodes in one line:

bash
kubectl -n test-secret get secret my-db-password -o yaml
# data.password: TWlyZUNsb3VkMjAyNiE=

echo 'TWlyZUNsb3VkMjAyNiE=' | base64 -d
# MireCloud2026!

That's already a problem for A2 — anyone with get secrets sees plaintext. But the real horror is what happens when we skip the API server entirely and read directly from etcd's storage backend.

The hexdump

On the control plane node, using the etcd client certificates that ship with every kubeadm-provisioned cluster:

bash
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/test-secret/my-db-password | hexdump -C

Live output from MireCloud's node-4:

etcd raw bytes · plaintext
00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 74 65 73 74 2d 73  65 63 72 65 74 2f 6d 79  |s/test-secret/my|
00000020  2d 64 62 2d 70 61 73 73  77 6f 72 64 0a 6b 38 73  |-db-password.k8s|
00000030  00 0a 0c 0a 02 76 31 12  06 53 65 63 72 65 74 12  |.....v1..Secret.|
00000040  e4 01 0a bd 01 0a 0e 6d  79 2d 64 62 2d 70 61 73  |.......my-db-pas|
00000050  73 77 6f 72 64 12 00 1a  0b 74 65 73 74 2d 73 65  |sword....test-se|
00000060  63 72 65 74 22 00 2a 24  62 31 33 61 63 63 32 39  |cret".*$b13acc29|
00000070  2d 37 33 63 32 2d 34 39  61 33 2d 62 34 36 36 2d  |-73c2-49a3-b466-|
00000080  31 37 62 36 38 39 61 33  30 38 39 65 32 00 38 00  |17b689a3089e2.8.|
00000090  42 08 08 d0 d4 8f cf 06  10 00 8a 01 65 0a 0e 6b  |B...........e..k|
000000a0  75 62 65 63 74 6c 2d 63  72 65 61 74 65 12 06 55  |ubectl-create..U|
000000b0  70 64 61 74 65 1a 02 76  31 22 08 08 d0 d4 8f cf  |pdate..v1"......|
000000c0  06 10 00 32 08 46 69 65  6c 64 73 56 31 3a 31 0a  |...2.FieldsV1:1.|
000000d0  2f 7b 22 66 3a 64 61 74  61 22 3a 7b 22 2e 22 3a  |/{"f:data":{".":|
000000e0  7b 7d 2c 22 66 3a 70 61  73 73 77 6f 72 64 22 3a  |{},"f:password":|
000000f0  7b 7d 7d 2c 22 66 3a 74  79 70 65 22 3a 7b 7d 7d  |{}},"f:type":{}}|
00000100  42 00 12 1a 0a 08 70 61  73 73 77 6f 72 64 12 0e  |B.....password..|
00000110  4d 69 72 65 43 6c 6f 75  64 32 30 32 36 21 1a 06  |MireCloud2026!..|
00000120  4f 70 61 71 75 65 1a 00  22 00 0a                 |Opaque.."..|

What's visible

| Offset | Content | Sensitivity |
|---|---|---|
| 0x00–0x2B | /registry/secrets/test-secret/my-db-password | etcd key path |
| 0x47–0x54 | my-db-password | Secret name |
| 0x106–0x10D | password | Field name |
| 0x110–0x11D | MireCloud2026! | Password — plaintext ASCII |
| 0x120–0x125 | Opaque | Secret type |

This is the industry default. Every Kubernetes cluster without explicit encryption-at-rest configuration stores every Secret this way. That includes every secret ESO writes for every workload in every namespace. In MireCloud prior to this hardening: Keycloak admin password, Grafana OIDC client secret, GitLab OmniAuth config, PgAdmin admin password, ExternalDNS TSIG key — all of them recoverable from a single etcdctl get.

Vault is not involved in this read. Vault's audit log will not record it. The attacker did not need to compromise Vault, did not need a valid Kubernetes token, did not even need to talk to the API server.


Part 3 — Closing Window #1: Encryption at Rest

The control

Kubernetes supports transparent encryption of Secret objects through an EncryptionConfiguration consumed by kube-apiserver. The apiserver encrypts on write, decrypts on read. API clients see no difference. The on-disk representation in etcd becomes opaque ciphertext.

▸ LAYER 1 · ENCRYPTION AT REST (diagram)
Figure 3. Encryption at rest closes the disk-theft window (A1). But any API client with RBAC gets the secret back in plaintext — the apiserver decrypts transparently. A2 remains open.

Provider selection (this matters)

The Kubernetes ecosystem supports several encryption providers. Three matter in 2026:

  • aescbc — AES-CBC with PKCS#7 padding. Well-supported historically, but discouraged upstream because CBC is vulnerable to padding-oracle attacks; AES-GCM is preferred. Don't use for new deployments.
  • aesgcm — AES-GCM, authenticated encryption. The right choice for local-key encryption. Requires disciplined key rotation.
  • kms v2 — Envelope encryption backed by an external KMS. The production-grade option. The MireCloud target is Vault Transit as the KMS provider — Vault, which already serves secrets to the cluster, becomes the root of trust for the cluster's own Secret encryption.
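For reference, a kms v2 stanza has the shape below. A sketch, not MireCloud configuration: the plugin name and socket path are placeholders for whatever KMS plugin fronts Vault Transit.

yaml
providers:
  - kms:
      apiVersion: v2
      name: vault-transit                          # must match the deployed plugin
      endpoint: unix:///var/run/kmsplugin/socket.sock
      timeout: 3s
  - identity: {}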

The output below uses aescbc because it produces the most legible hexdump for educational purposes. The conclusions hold identically for aesgcm.

Configuration

Generate the key:

bash
head -c 32 /dev/urandom | base64

Write the encryption config to the control plane node (/etc/kubernetes/enc/enc-config.yaml):

yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_32_BYTE_KEY>
      - identity: {}

The identity: {} provider at the end is a migration requirement, not a security hole. It allows the apiserver to read secrets that were written before encryption was enabled. Remove it once every existing secret has been re-encrypted.

Wire the flag into /etc/kubernetes/manifests/kube-apiserver.yaml:

yaml
- --encryption-provider-config=/etc/kubernetes/enc/enc-config.yaml

Add the corresponding hostPath mount. The kubelet detects the manifest change and restarts the apiserver automatically (~30 seconds).
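The mount stanza, matching the path used above (a sketch of the standard kubeadm static-pod pattern):

yaml
# in the kube-apiserver container spec
    volumeMounts:
      - name: enc-config
        mountPath: /etc/kubernetes/enc
        readOnly: true
# at the pod level
  volumes:
    - name: enc-config
      hostPath:
        path: /etc/kubernetes/enc
        type: DirectoryOrCreate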

Re-write existing secrets

Encryption applies on write. Existing secrets remain plaintext until touched. For the lab:

bash
kubectl -n test-secret delete secret my-db-password
kubectl -n test-secret create secret generic my-db-password \
  --from-literal=password='MireCloud2026!'

For production, force a rewrite of every Secret in the cluster:

bash
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
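To confirm nothing was missed, a quick audit sketch: walk every Secret key in etcd and flag any value that still lacks the k8s:enc provider marker. It assumes the same client certificates as the hexdump command above.

bash
ETCD="sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key"

for key in $($ETCD get /registry/secrets --prefix --keys-only); do
  # encrypted values begin with k8s:enc:<provider>:... ; plaintext ones do not
  if ! $ETCD get "$key" --print-value-only | head -c 64 | grep -q 'k8s:enc:'; then
    echo "STILL PLAINTEXT: $key"
  fi
done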

Verification: same command, radically different bytes

etcd raw bytes · encrypted
00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 74 65 73 74 2d 73  65 63 72 65 74 2f 6d 79  |s/test-secret/my|
00000020  2d 64 62 2d 70 61 73 73  77 6f 72 64 0a 6b 38 73  |-db-password.k8s|
00000030  3a 65 6e 63 3a 61 65 73  63 62 63 3a 76 31 3a 6b  |:enc:aescbc:v1:k|
00000040  65 79 31 3a 6e df 01 f4  02 bf da ef de 8d c9 cb  |ey1:n...........|
00000050  ad f7 e0 36 80 28 17 42  48 62 2c ba 5f 09 43 88  |...6.(.BHb,._.C.|
00000060  c5 ff c7 59 33 c4 97 bb  bb f8 bf 74 04 8f ca 80  |...Y3......t....|
00000070  fc 74 fd 45 a9 0b b1 e4  2b f7 00 3c de fc 79 23  |.t.E....+..<..y#|
00000080  fb 51 91 21 6e d5 b7 22  1e c9 48 65 01 af f1 4d  |.Q.!n.."..He...M|
...

Three structural changes:

  1. The etcd key path is unchanged. Keys are not encrypted — they are the storage index. Secret names remain visible to anyone with etcd access. This is a property of the control, not a flaw.
  2. The provider marker k8s:enc:aescbc:v1:key1: appears. The apiserver uses this to identify which key decrypts the payload, enabling key rotation (sketched below).
  3. The body is opaque ciphertext. No password, no Opaque, no MireCloud2026!, no kubectl-create. The entire serialized Secret is one encrypted blob.

Adversary A1 is defeated. A stolen disk, exfiltrated backup, or compromised filesystem yields ciphertext and a reference to a key the attacker does not hold.
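Because the provider marker records which key encrypted each record, rotation is a config change plus a rewrite. A sketch of the upstream procedure, simplified for a single-control-plane cluster like MireCloud (multi-apiserver clusters need a staged rollout): add the new key as the first entry, restart the apiserver, re-run the kubectl replace from above, then drop the old key.

yaml
providers:
  - aescbc:
      keys:
        - name: key2    # new key, listed first: encrypts all new writes
          secret: <NEW_BASE64_32_BYTE_KEY>
        - name: key1    # old key, kept until every secret is rewritten
          secret: <OLD_BASE64_32_BYTE_KEY>
  - identity: {}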

What encryption at rest does NOT do

The apiserver still decrypts transparently on read. Anyone with get secrets RBAC still sees plaintext. Adversary A2 is unaffected.

This is the gap Part 4 closes.


Part 4 — Closing Window #2: The Vault CSI Driver

To defeat A2, we eliminate the Secret object entirely. The Secrets Store CSI Driver inverts the lifecycle: instead of an operator pushing secrets into etcd, the kubelet pulls the secret directly from Vault during pod creation and mounts it as a tmpfs inside the pod's mount namespace.

The secret never touches etcd. It exists only in volatile memory, only inside the pod that needs it.

▸ LAYER 2 · VAULT CSI DRIVER (diagram)
Figure 4. The CSI path bypasses etcd entirely. The kubelet pulls directly from Vault and mounts the secret as a tmpfs inside the pod's mount namespace — in RAM, scoped to the pod's lifetime.

The Helm installation trap

Installing the core driver is straightforward. Installing the HashiCorp Vault provider is where most engineers waste an afternoon — HashiCorp does not ship a standalone chart for it. You use the main vault chart with everything but the CSI sidecar disabled.

bash
# 1. The Kubernetes-SIGS CSI driver
helm repo add secrets-store-csi-driver \
  https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts

helm install csi-secrets-store \
  secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --set syncSecret.enabled=false \
  --set enableSecretRotation=true

# 2. The HashiCorp Vault provider — same chart as the Vault server,
#    with everything except the CSI sidecar disabled
helm repo add hashicorp https://helm.releases.hashicorp.com

helm install vault-csi hashicorp/vault \
  --namespace kube-system \
  --set server.enabled=false \
  --set injector.enabled=false \
  --set csi.enabled=true

For MireCloud, both deployments are wrapped as ArgoCD Applications under infrastructure/secrets-store-csi/ and infrastructure/vault-csi-provider/ — same wrapper pattern as every other component since Part 1.

syncSecret.enabled=false is critical. With true, the CSI driver also creates a Kubernetes Secret mirror of the mounted file — defeating the entire point of bypassing etcd. Leave it off.
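Two quick sanity checks that the install landed. The CSIDriver object name is fixed by the driver itself; the DaemonSet names follow from the Helm releases above:

bash
# the CSIDriver object registered by the secrets-store chart
kubectl get csidriver secrets-store.csi.k8s.io

# both DaemonSets should report READY pods on every node
kubectl -n kube-system get daemonset | grep -i -e secrets-store -e vault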

Define the SecretProviderClass

apps/test-secret/templates/secret-provider-class.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-test-secret
  namespace: test-secret
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.vault.svc.cluster.local:8200"
    roleName: "test-secret-role"
    objects: |
      - objectName: "app-password"
        secretPath: "secret/data/test/config"
        secretKey: "password"

This is the identical Vault service URL used by ESO in infrastructure/external-secrets-config/secret-store.yaml — consistency across the secret pipeline.
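Once ArgoCD syncs it, the class is queryable like any other namespaced resource:

bash
kubectl -n test-secret get secretproviderclass vault-test-secret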

Deploy the workload

apps/test-secret/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-tester
  namespace: test-secret
spec:
  serviceAccountName: default
  containers:
    - name: alpine
      image: alpine:latest
      command: ["sh", "-c", "sleep infinity"]
      volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets"
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "vault-test-secret"

Part 5 — SRE Realities: Troubleshooting "Permission Denied"

In a real bare-metal deployment, this rarely works on the first apply. On MireCloud's first run, the pod hung in ContainerCreating. kubectl describe pod revealed:

kubectl describe pod · events
Warning  FailedMount  ...  MountVolume.SetUp failed for volume "secrets-store-inline":
failed to login: Error making API request.
URL: POST http://vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/login
Code: 400. Errors: * invalid role name "test-secret-role"

This is the Zero Trust architecture working exactly as designed. Cluster DNS routed the CSI driver to Vault. Vault rejected the mount because the specific Kubernetes role had not been authorized.

The fix is to create the policy and role inside Vault. MireCloud already has Kubernetes auth enabled for the vault-backend role used by ESO since Part 1 — adding a CSI role follows the same pattern:

bash
kubectl -n vault exec -ti vault-0 -- sh

vault policy write test-secret-policy - <<'EOF'
path "secret/data/test/config" {
  capabilities = ["read"]
}
EOF

vault write auth/kubernetes/role/test-secret-role \
  bound_service_account_names=default \
  bound_service_account_namespaces=test-secret \
  policies=test-secret-policy \
  ttl=1h

vault kv put secret/test/config password='MireCloud2026!'
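Before leaving the Vault shell, it's worth confirming both objects exist; these are standard Vault CLI reads:

bash
vault policy read test-secret-policy
vault read auth/kubernetes/role/test-secret-role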

The kubelet's exponential backoff loop unblocks within seconds. The pod transitions to Running.

Why this is a feature, not a bug. Vault refused a mount until a policy explicitly authorized it. That is the entire promise of identity-bound secret access. The error was not "the system is broken" — it was "the system is doing exactly what you asked it to."


Part 6 — Forensic Verification: The Application View

Querying the Kubernetes API yields nothing. Adversary A2 is blinded:

bash
kubectl -n test-secret get secret
# No resources found in test-secret namespace.

sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/test-secret/app-password
# (no output — the key does not exist)

But drop into the pod itself, and the secret is there:

kubectl exec · inside the pod
kubectl -n test-secret exec -it security-tester -- /bin/sh

/ # ls -la /mnt/secrets/
drwxrwxrwt  3 root root  120 Apr 19 14:22 .
drwxr-xr-x  1 root root 4096 Apr 19 14:22 ..
drwxr-xr-x  2 root root   80 Apr 19 14:22 ..2026_04_19_14_22_18.2602020304
lrwxrwxrwx  1 root root   32 Apr 19 14:22 ..data -> ..2026_04_19_14_22_18.2602020304
lrwxrwxrwx  1 root root   19 Apr 19 14:22 app-password -> ..data/app-password

/ # cat /mnt/secrets/app-password
MireCloud2026!

/ # df -h /mnt/secrets
Filesystem    Size  Used Available Use% Mounted on
tmpfs         7.7G  4.0K      7.7G   0% /mnt/secrets

Two details worth pausing on:

1. The ..data/ symlink pattern

app-password is not a flat file — it is a symlink pointing at ..data/app-password, which itself points at a timestamped directory. This is the same atomic-update pattern the kubelet uses for ConfigMaps and projected volumes. When Vault rotates the secret, the CSI driver writes the new value to a new timestamped directory, then atomically swaps the ..data symlink. The application reading the file never sees a partial write, never sees a missing file, never has to handle a race condition.
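You can watch the swap happen. With enableSecretRotation=true (set during the Helm install above), resolving the symlink before and after a rotation shows the timestamped directory change while the stable path stays constant:

bash
# inside the pod: the stable path the application reads
readlink -f /mnt/secrets/app-password
# /mnt/secrets/..2026_04_19_14_22_18.2602020304/app-password

# after the CSI driver picks up a rotated value from Vault, the same
# command resolves to a new timestamped directory; the swap is atomic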

2. tmpfs — RAM, not disk

The filesystem is tmpfs. The secret lives in RAM, scoped to the pod's mount namespace, for the lifetime of the pod. When the pod terminates, the memory is released. There is no disk artifact to forget about. No backup that could leak it. No etcd snapshot to extract it from.

Adversary A2 is defeated. The surface area that encryption at rest could not cover is now closed.

Part 7 — Decision Framework: ESO vs. CSI

Vault CSI is not a silver bullet. It introduces a hard runtime dependency: if Vault is unreachable, pods cannot start. ESO, by contrast, caches secrets in etcd, allowing pods to start during a Vault outage. This is a real availability tradeoff — particularly for MireCloud's current single-replica Vault deployment, which is acceptable for a homelab but not for production.

The framework I use:

| Consumer | Recommended Control | Rationale |
|---|---|---|
| Application pod reading a credential at startup | Vault CSI | Closes A1 + A2. No traces in etcd. Restart handles rotation. |
| Image pull secrets | ESO + Encryption at Rest | Must exist before pod creation; no pod identity yet. |
| Controllers consuming via API (cert-manager, GitLab OmniAuth) | ESO + Encryption + RBAC | No CSI support; residual A2 compensated by RBAC. |
| TLS keys for Cilium Gateway listeners | ESO + Encryption at Rest | Consumed by the gateway via the API, not by a pod filesystem. |
| Break-glass admin credentials | Vault directly, out-of-band | Never sync to the cluster. |

The principle: Secret objects are the exception, not the default. Every Secret requires explicit justification.


Closing

Hardening a cluster is not about finding one tool that solves everything. It is about layering controls until the cost of an attack exceeds the value of the prize.

By combining Encryption at Rest to neutralize physical infrastructure threats (A1) and the Secrets Store CSI Driver to eliminate internal API exposure (A2), the MireCloud architecture closes the gap between Vault and the workload.

The forensic evidence in Part 2 is the argument. Any engineer who has not personally extracted a plaintext secret from their own cluster's etcd should do so before accepting their current design as secure.

If you only take one thing from this article, take this:
run the etcdctl get | hexdump -C against your own cluster. Today. Before reading another article.

🔮 What's Next: Part 8 — Defeating the Compromised Workload

This article handled A1 and A2 — the two adversaries outside the workload. Part 8 tackles A3: what happens when the application pod itself is compromised.

CSI mounts protect the secret on the way to the pod, but they don't protect it once the attacker has shell access inside the pod. A static password is a static password — readable by anyone who can cat the file.

Part 8 will cover:

  • Vault dynamic secrets — the database secrets engine generating per-pod, TTL-bound PostgreSQL credentials revoked on lease expiry
  • Lease management and audit — proving in Vault's UI that the credential the attacker stole has already expired
  • Tetragon TracingPolicy — runtime detection of cat /mnt/secrets/* and kubectl exec patterns
  • NetworkPolicy lockdown — preventing exfiltration even when the pod is compromised

Follow me on Medium and LinkedIn to be notified when it publishes.

The complete repository, including the infrastructure/secrets-store-csi/ and infrastructure/vault-csi-provider/ ArgoCD Applications introduced in this post, is available at github.com/mirecloud/home_lab.

If this article saved you from an audit finding — or worse, an actual breach — the kindest thing you can do is share it with the engineer on your team who hasn't run that etcdctl command yet.

#Kubernetes #Vault #ExternalSecrets #SecretsStoreCSI #EncryptionAtRest #DevSecOps #ZeroTrust #PlatformEngineering #HomeLab #CiliumGateway #GitOps #CKS #SRE
