How to Protect ANY Upstream Service with Operational Authentication

In this blog post, I will demonstrate how to use Ory Oathkeeper and Ory Kratos to put upstream services behind authentication, especially those that ship without native authentication, such as Prometheus, Hubble UI, and Alertmanager.

Introduction

Over the years of administering and maintaining production-grade systems at different companies, I have found myself in situations where I needed to deploy internet-accessible services that may or may not provide built-in authentication.

These services are usually valuable assets that solve real problems for the organization or platform, and making them accessible over the internet benefits employees and administrators greatly.

However, the downside is that the lack of built-in authentication is a security risk, one that cannot and should not be overlooked.

As such, in the following article, I will share my method of exposing those critical, administrative-level services to the public internet in a way that makes them visible only to trusted eyes.

Prerequisites

Kubernetes is not the focus of this blog post; however, it is where I am most at ease deploying and configuring services.

Additionally, this blog post will mainly focus on Ory services, specifically Ory Oathkeeper and Ory Kratos.

Getting to know those services and their inner workings is crucial for a better understanding of this blog post.

If you find yourself in need of a practical guide, you will find links at the bottom of this blog post useful.

Setting up the Environment

I will be deploying a K3d1 Kubernetes cluster on my machine; however, the ideas described here are applicable to production, and I use them there myself.

k3d cluster create \
  --image rancher/k3s:v1.31.4-k3s1 \
  -p "8080:80@loadbalancer" \
  --agents 0 \
  --servers 1

This will be a locally accessible Kubernetes cluster. Notice the port-mapping flag, which allows us to send load-balanced requests to the cluster.

When this is ready, the following Ingress Class is available:

$ kubectl get ingressclass
NAME      CONTROLLER                      PARAMETERS   AGE
traefik   traefik.io/ingress-controller   <none>       1s

Deploy VictoriaMetrics K8s Stack

I admire the VictoriaMetrics family and all its branching products. I use almost all of its services, including the newly released VictoriaLogs2.

I will deploy their Kubernetes-compatible stack using the following three commands:

helm repo add vm https://victoriametrics.github.io/helm-charts
helm repo update vm
helm install victoria-metrics-k8s-stack vm/victoria-metrics-k8s-stack --version=0.x

Now, checking on the deployed apps, I see the following Kubernetes Service resources:

$ kubectl get svc
NAME                                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes                                             ClusterIP   10.43.0.1       <none>        443/TCP                      9m1s
victoria-metrics-k8s-stack-grafana                     ClusterIP   10.43.160.127   <none>        80/TCP                       4m46s
victoria-metrics-k8s-stack-kube-state-metrics          ClusterIP   10.43.166.238   <none>        8080/TCP                     4m46s
victoria-metrics-k8s-stack-prometheus-node-exporter    ClusterIP   10.43.189.100   <none>        9100/TCP                     4m46s
victoria-metrics-k8s-stack-victoria-metrics-operator   ClusterIP   10.43.242.71    <none>        8080/TCP,9443/TCP            4m46s
vmagent-victoria-metrics-k8s-stack                     ClusterIP   10.43.52.139    <none>        8429/TCP                     3m50s
vmalert-victoria-metrics-k8s-stack                     ClusterIP   10.43.216.20    <none>        8080/TCP                     3m46s
vmalertmanager-victoria-metrics-k8s-stack              ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   3m10s
vmsingle-victoria-metrics-k8s-stack                    ClusterIP   10.43.77.57     <none>        8429/TCP                     3m51s

Deploy Ory Kratos

This is where the fun begins. 😎

I aim to deploy Kratos with as little overhead as possible. I maintain my own Kustomization files for deploying some of these services, including Kratos3.

You will see shortly how easy it is to deploy Kratos, with only a custom Kratos configuration file!

Kratos Server Configuration

First things first, we need to create a config.yml file for the Kratos server.

This is regardless of how you plan to deploy the Kratos server, e.g., Docker Compose, bare CLI, Kubernetes, etc.

kratos-server-config.yml
cookies:
  domain: localhost.com
  path: /
  same_site: None
courier:
  smtp:
    connection_uri: smtps://test:test@mailslurper:1025/?skip_ssl_verify=true
    from_address: kratos@developer-friendly.blog
    from_name: Developer Friendly Blog
dsn: postgres://kratos:kratos@postgresql:5432/kratos?sslmode=disable
identity:
  default_schema_id: admin
  schemas:
    - id: admin
      url: https://gist.githubusercontent.com/meysam81/8bb993daa8ebfeb244ccc7008a1a8586/raw/dbf96f1b7d2780c417329af9e53b3fadcb449bb1/admin.schema.json
selfservice:
  allowed_return_urls:
    - http://*.localhost.com:8080
  default_browser_return_url: http://auth.localhost.com:8080
  flows:
    error:
      ui_url: http://auth.localhost.com:8080/error
    login:
      after:
        default_browser_return_url: http://auth.localhost.com:8080/sessions
        hooks:
          - hook: revoke_active_sessions
          - hook: require_verified_address
      ui_url: http://auth.localhost.com:8080/login
    logout:
      after:
        default_browser_return_url: http://auth.localhost.com:8080
    recovery:
      after:
        default_browser_return_url: http://auth.localhost.com:8080/login
        hooks:
          - hook: revoke_active_sessions
      enabled: true
      ui_url: http://auth.localhost.com:8080/recovery
      use: link
    registration:
      enabled: false
    settings:
      privileged_session_max_age: 15m
      required_aal: highest_available
      ui_url: http://auth.localhost.com:8080/settings
    verification:
      after:
        default_browser_return_url: http://auth.localhost.com:8080/login
      enabled: true
      ui_url: http://auth.localhost.com:8080/verification
      use: link
  methods:
    link:
      config:
        lifespan: 1h
      enabled: true
    oidc:
      config:
        providers:
          - client_id: SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_ID
            client_secret: SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_SECRET
            id: google
            label: Google
            mapper_url: https://gist.githubusercontent.com/meysam81/8bb993daa8ebfeb244ccc7008a1a8586/raw/2fb54e409e808bf901d06f10b51329f46a7e22af/google.jsonnet
            provider: google
            requested_claims:
              id_token:
                email:
                  essential: true
                email_verified:
                  essential: true
            scope:
              - email
              - profile
      enabled: true
    profile:
      enabled: true
    password:
      enabled: false
    webauthn:
      config:
        rp:
          id: localhost.com
          display_name: Developer Friendly Blog
          origins:
            - http://auth.localhost.com:8080
        passwordless: true
      enabled: true
    passkey:
      config:
        rp:
          display_name: Developer Friendly Blog
          id: localhost.com
          origins:
            - http://auth.localhost.com:8080
      enabled: true
    totp:
      enabled: true
serve:
  admin:
    port: 4434
  public:
    base_url: http://auth-server.localhost.com:8080/
    cors:
      allow_credentials: true
      allowed_headers:
        - Content-Type
      allowed_origins:
        - http://*.localhost.com
      debug: false
      enabled: true
    port: 4433
session:
  lifespan: 24h
  whoami:
    required_aal: highest_available

Notice that we intentionally disabled registration: we will only allow Google Workspace email addresses to access our services, using the SSO integration with the Kratos server.
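Also notice that the client_id and client_secret values in the config above are not literal credentials; they are the names of environment variables that Kratos reads at startup. Kratos derives the environment variable for any configuration key by joining the path segments with underscores and uppercasing the result, which the following sketch illustrates:

```shell
# Kratos maps a config path to an env var by replacing dots with
# underscores and uppercasing, e.g. for the OIDC client_id above:
config_path="selfservice.methods.oidc.config.providers.0.client_id"
env_var=$(printf '%s' "$config_path" | tr '.' '_' | tr '[:lower:]' '[:upper:]')
echo "$env_var"
# SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_ID
```

In Kubernetes, you would typically supply these two variables to the kratos container from a Secret, e.g., via envFrom.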

Similarly, one can enable an Azure AD integration and allow only organization email addresses to access the Kratos server.

This is the crucial part of this blog post, where we restrict access to critical admin services to only the trusted users of our company.

Browser Cookie Domain

You might look at this configuration file, and the ones about to come, and wonder, "what's with the localhost.com domain?".

There is a relevant Stack Overflow thread4 that covers the why and the how.

The short answer is that modern browsers, for security reasons, will not allow cookies to be shared across subdomains such as abc.localhost and xyz.localhost.

Since Ory Kratos relies heavily on cookie authentication for browser-based applications, that restriction would break our setup. 😞

As a result of the browser security measures, we will use localhost.com as the base domain for all our services.
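With cookies.domain set to localhost.com, Kratos scopes its session cookie to the parent domain, so the browser sends it to every subdomain. An illustrative Set-Cookie header (values elided, not captured from a real response) would look like this:

```
Set-Cookie: ory_kratos_session=...; Domain=localhost.com; Path=/; SameSite=None
```

Here ory_kratos_session is the default name of the Kratos session cookie, and the Domain, Path, and SameSite attributes follow directly from the cookies section of the configuration above.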

That requires us to add the following to our /etc/hosts file:

127.0.0.1 auth-server.localhost.com
127.0.0.1 auth.localhost.com
127.0.0.1 vmagent.localhost.com
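If you would rather script this, the following snippet only prints the entries so you can review them first; the sudo tee pipe in the comment is the suggested way to apply them for real (gen-hosts.sh is a hypothetical filename for this snippet):

```shell
# Print a hosts entry for each subdomain used in this setup.
# Review the output, then apply it with:
#   sh gen-hosts.sh | sudo tee -a /etc/hosts
for host in auth-server.localhost.com auth.localhost.com vmagent.localhost.com; do
  echo "127.0.0.1 $host"
done
```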

Kratos Kustomization

You are more than welcome to use the officially supported Helm chart5; however, I have found those charts inflexible and very hard to maintain and customize, e.g., for mounting secrets from External Secrets Operator or mounting a specific volume.

That's the main reason I maintain my own security-hardened Kustomization stack3, which is almost always one patch6 away from being exactly what you need it to be.

Let's create our Kratos Kustomization files.

kratos/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kratos
spec:
  rules:
    - host: auth-server.localhost.com
      http:
        paths:
          - backend:
              service:
                name: kratos-public
                port:
                  number: 80
            path: /
            pathType: Prefix
kratos/kustomization.yml
configMapGenerator:
  - name: kratos-config
    files:
      - config.yml=kratos-server-config.yml
    behavior: replace

resources:
  - https://github.com/meysam81/kustomizations//kratos/overlays/default/?ref=v1.7.2
  - ingress.yml

patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --dev
    target:
      kind: Deployment

namespace: default

Kratos SQL Database

There are a number of ways you can provide a SQL-backed database to the Ory Kratos server. In this blog post, I choose to deploy an in-cluster PostgreSQL using the Bitnami Helm Chart7.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update bitnami
helm install postgresql bitnami/postgresql --version=16.x --set auth.username=kratos,auth.password=kratos,auth.database=kratos

Build and Apply Kratos Kustomization

At this point, we are ready to deploy the Kratos server with the provided configuration.

kustomize build ./kratos
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
  name: kratos
  namespace: default
---
apiVersion: v1
data:
  config.yml: |
    cookies:
      domain: localhost.com
      path: /
      same_site: None
    courier:
      smtp:
        connection_uri: smtps://test:test@mailslurper:1025/?skip_ssl_verify=true
        from_address: kratos@developer-friendly.blog
        from_name: Developer Friendly Blog
    dsn: postgres://kratos:kratos@postgresql:5432/kratos?sslmode=disable
    identity:
      default_schema_id: admin
      schemas:
        - id: admin
          url: https://gist.githubusercontent.com/meysam81/8bb993daa8ebfeb244ccc7008a1a8586/raw/dbf96f1b7d2780c417329af9e53b3fadcb449bb1/admin.schema.json
    selfservice:
      allowed_return_urls:
        - http://*.localhost.com:8080
      default_browser_return_url: http://auth.localhost.com:8080
      flows:
        error:
          ui_url: http://auth.localhost.com:8080/error
        login:
          after:
            default_browser_return_url: http://auth.localhost.com:8080/sessions
            hooks:
              - hook: revoke_active_sessions
              - hook: require_verified_address
          ui_url: http://auth.localhost.com:8080/login
        logout:
          after:
            default_browser_return_url: http://auth.localhost.com:8080
        recovery:
          after:
            default_browser_return_url: http://auth.localhost.com:8080/login
            hooks:
              - hook: revoke_active_sessions
          enabled: true
          ui_url: http://auth.localhost.com:8080/recovery
          use: link
        registration:
          enabled: false
        settings:
          privileged_session_max_age: 15m
          required_aal: highest_available
          ui_url: http://auth.localhost.com:8080/settings
        verification:
          after:
            default_browser_return_url: http://auth.localhost.com:8080/login
          enabled: true
          ui_url: http://auth.localhost.com:8080/verification
          use: link
      methods:
        link:
          config:
            lifespan: 1h
          enabled: true
        oidc:
          config:
            providers:
              - client_id: SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_ID
                client_secret: SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_SECRET
                id: google
                label: Google
                mapper_url: https://gist.githubusercontent.com/meysam81/8bb993daa8ebfeb244ccc7008a1a8586/raw/2fb54e409e808bf901d06f10b51329f46a7e22af/google.jsonnet
                provider: google
                requested_claims:
                  id_token:
                    email:
                      essential: true
                    email_verified:
                      essential: true
                scope:
                  - email
                  - profile
          enabled: true
        profile:
          enabled: true
        password:
          enabled: false
        webauthn:
          config:
            rp:
              id: localhost.com
              display_name: Developer Friendly Blog
              origins:
                - http://auth.localhost.com:8080
            passwordless: true
          enabled: true
        passkey:
          config:
            rp:
              display_name: Developer Friendly Blog
              id: localhost.com
              origins:
                - http://auth.localhost.com:8080
          enabled: true
        totp:
          enabled: true
    serve:
      admin:
        port: 4434
      public:
        base_url: http://auth-server.localhost.com:8080/
        cors:
          allow_credentials: true
          allowed_headers:
            - Content-Type
          allowed_origins:
            - http://*.localhost.com
          debug: false
          enabled: true
        port: 4433
    session:
      lifespan: 24h
      whoami:
        required_aal: highest_available
kind: ConfigMap
metadata:
  name: kratos-config-479k464thm
  namespace: default
---
apiVersion: v1
data:
  KRATOS_ADMIN_URL: http://localhost:4434
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
  name: kratos-envs-f5b9tfdm77
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
    component: kratos-admin
  name: kratos-admin
  namespace: default
spec:
  ports:
  - name: http-admin
    port: 80
    protocol: TCP
    targetPort: http-admin
  selector:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
    component: kratos-courier
  name: kratos-courier
  namespace: default
spec:
  ports:
  - name: http-courier
    port: 80
    protocol: TCP
    targetPort: http-courier
  selector:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
  name: kratos-public
  namespace: default
spec:
  ports:
  - name: http-public
    port: 80
    protocol: TCP
    targetPort: http-public
  selector:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: kratos
    app.kubernetes.io/instance: kratos
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos
    app.kubernetes.io/part-of: kratos
    app.kubernetes.io/version: v1.0.0
  name: kratos
  namespace: default
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app.kubernetes.io/component: kratos
      app.kubernetes.io/instance: kratos
      app.kubernetes.io/managed-by: Kustomize
      app.kubernetes.io/name: kratos
      app.kubernetes.io/part-of: kratos
      app.kubernetes.io/version: v1.0.0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: kratos
        app.kubernetes.io/instance: kratos
        app.kubernetes.io/managed-by: Kustomize
        app.kubernetes.io/name: kratos
        app.kubernetes.io/part-of: kratos
        app.kubernetes.io/version: v1.0.0
    spec:
      automountServiceAccountToken: false
      containers:
      - args:
        - serve
        - all
        - --config=/etc/kratos/config.yml
        - --dev
        command:
        - kratos
        envFrom:
        - configMapRef:
            name: kratos-envs-f5b9tfdm77
        image: oryd/kratos:v1.3.1-distroless
        lifecycle: {}
        livenessProbe:
          failureThreshold: 5
          httpGet:
            httpHeaders:
            - name: Host
              value: 127.0.0.1
            path: /health/ready
            port: http-admin
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: kratos
        ports:
        - containerPort: 4434
          name: http-admin
          protocol: TCP
        - containerPort: 4433
          name: http-public
          protocol: TCP
        readinessProbe:
          failureThreshold: 5
          httpGet:
            httpHeaders:
            - name: Host
              value: 127.0.0.1
            path: /health/ready
            port: http-admin
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /etc/kratos/config.yml
          name: kratos-config
          readOnly: true
          subPath: config.yml
      - args:
        - courier
        - watch
        - --expose-metrics-port=4435
        - --config=/etc/kratos/config.yml
        command:
        - kratos
        envFrom:
        - configMapRef:
            name: kratos-envs-f5b9tfdm77
        image: oryd/kratos:v1.3.1-distroless
        livenessProbe:
          failureThreshold: 5
          httpGet:
            httpHeaders:
            - name: Host
              value: 127.0.0.1
            path: /metrics/prometheus
            port: http-courier
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: kratos-courier
        ports:
        - containerPort: 4435
          name: http-courier
          protocol: TCP
        readinessProbe:
          failureThreshold: 5
          httpGet:
            httpHeaders:
            - name: Host
              value: 127.0.0.1
            path: /metrics/prometheus
            port: http-courier
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
        volumeMounts:
        - mountPath: /etc/kratos/config.yml
          name: kratos-config
          readOnly: true
          subPath: config.yml
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - migrate
        - sql
        - -e
        - --yes
        - --config=/etc/kratos/config.yml
        command:
        - kratos
        envFrom:
        - configMapRef:
            name: kratos-envs-f5b9tfdm77
        image: oryd/kratos:v1.3.1-distroless
        name: kratos-automigrate
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
        volumeMounts:
        - mountPath: /etc/kratos/config.yml
          name: kratos-config
          readOnly: true
          subPath: config.yml
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 65534
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: kratos
      terminationGracePeriodSeconds: 300
      volumes:
      - configMap:
          defaultMode: 292
          items:
          - key: config.yml
            path: config.yml
          name: kratos-config-479k464thm
        name: kratos-config
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kratos
  namespace: default
spec:
  rules:
  - host: auth-server.localhost.com
    http:
      paths:
      - backend:
          service:
            name: kratos-public
            port:
              number: 80
        path: /
        pathType: Prefix
kubectl apply -k ./kratos
serviceaccount/kratos unchanged
configmap/kratos-config-57k2b7bctm unchanged
configmap/kratos-envs-f5b9tfdm77 unchanged
service/kratos-admin unchanged
service/kratos-courier unchanged
service/kratos-public unchanged
deployment.apps/kratos unchanged

We will wait for a bit, and after everything has landed successfully, here is the result of our efforts:

kubectl logs deploy/kratos -c kratos
time=2024-12-26T11:10:04Z level=info msg=[DEBUG] GET https://gist.githubusercontent.com/meysam81/8bb993daa8ebfeb244ccc7008a1a8586/raw/dbf96f1b7d2780c417329af9e53b3fadcb449bb1/admin.schema.json audience=application service_name=Ory Kratos service_version=v1.3.1
time=2024-12-26T11:10:05Z level=info msg=No tracer configured - skipping tracing setup audience=application service_name=Ory Kratos service_version=v1.3.1
time=2024-12-26T11:10:05Z level=warning msg=The config has no version specified. Add the version to improve your development experience. audience=application service_name=Ory Kratos service_version=v1.3.1
time=2024-12-26T11:10:05Z level=info msg=Software quality assurance features are enabled. Learn more at: https://www.ory.sh/docs/ecosystem/sqa audience=application service_name=Ory Kratos service_version=v1.3.1
time=2024-12-26T11:10:05Z level=info msg=TLS has not been configured for public, skipping audience=application service_name=Ory Kratos service_version=v1.3.1
time=2024-12-26T11:10:05Z level=info msg=TLS has not been configured for admin, skipping audience=application service_name=Ory Kratos service_version=v1.3.1
time=2024-12-26T11:10:05Z level=info msg=Starting the admin httpd on: 0.0.0.0:4434 audience=application service_name=Ory Kratos service_version=v1.3.1
time=2024-12-26T11:10:05Z level=info msg=Starting the public httpd on: 0.0.0.0:4433 audience=application service_name=Ory Kratos service_version=v1.3.1

Now, let's verify that it's working:

$ curl -i http://auth-server.localhost.com:8080/health/ready
HTTP/1.1 200 OK
Content-Length: 16
Content-Type: application/json; charset=utf-8
Date: Thu, 26 Dec 2024 11:15:51 GMT
Vary: Origin

{"status":"ok"}

Deploy Ory Oathkeeper

We are halfway there, hang in there. 🤗

Deploying Oathkeeper is a two-step process when it comes to Kubernetes.

We first need to deploy Oathkeeper Maester8, the operator that converts Kubernetes custom resources into Access Rules9 for the Oathkeeper server.
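To make this concrete before the CRD dump below, here is a hypothetical Rule resource of the kind Maester converts. Everything in it is an assumption for illustration: the cookie_session authenticator, the check_session_url pointing at the Kratos whoami endpoint, and the vmagent upstream are a typical Oathkeeper-plus-Kratos pairing, not necessarily the exact rule we will end up deploying:

```yaml
apiVersion: oathkeeper.ory.sh/v1alpha1
kind: Rule
metadata:
  name: vmagent
spec:
  match:
    methods:
      - GET
    url: http://vmagent.localhost.com:8080/<.*>
  authenticators:
    - handler: cookie_session
      config:
        check_session_url: http://kratos-public.default.svc/sessions/whoami
        preserve_path: true
  authorizer:
    handler: allow
  mutators:
    - handler: noop
  upstream:
    url: http://vmagent-victoria-metrics-k8s-stack.default.svc:8429
```

Maester watches such resources and renders them into the access rules that the Oathkeeper server evaluates on every incoming request.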

Deploy Oathkeeper Maester

oathkeeper-maester/kustomization.yml
resources:
  - https://github.com/meysam81/kustomizations//oathkeeper-maester/overlays/default/?ref=v1.7.2

replacements:
  - source:
      kind: ServiceAccount
      fieldPath: metadata.namespace
    targets:
      - select:
          kind: ClusterRoleBinding
        fieldPaths:
          - subjects.[name=oathkeeper-maester].namespace

namespace: default
kubectl apply -k ./oathkeeper-maester
kustomize build ./oathkeeper-maester
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper-maester
    app.kubernetes.io/instance: oathkeeper-maester
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper-maester
    app.kubernetes.io/part-of: oathkeeper-maester
    app.kubernetes.io/version: v1.0.0
  name: rules.oathkeeper.ory.sh
spec:
  group: oathkeeper.ory.sh
  names:
    kind: Rule
    listKind: RuleList
    plural: rules
    singular: rule
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: Rule is the Schema for the rules API
        properties:
          apiVersion:
            description: |-
              APIVersion defines the versioned schema of this representation of an object.
              Servers should convert recognized schemas to the latest internal value, and
              may reject unrecognized values.
              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
            type: string
          kind:
            description: |-
              Kind is a string value representing the REST resource this object represents.
              Servers may infer this from the endpoint the client submits requests to.
              Cannot be updated.
              In CamelCase.
              More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
            type: string
          metadata:
            type: object
          spec:
            description: RuleSpec defines the desired state of Rule
            properties:
              authenticators:
                items:
                  description: Authenticator represents a handler that authenticates
                    provided credentials.
                  properties:
                    config:
                      description: Config configures the handler. Configuration keys
                        vary per handler.
                      type: object
                      x-kubernetes-preserve-unknown-fields: true
                    handler:
                      description: Name is the name of a handler
                      type: string
                  required:
                  - handler
                  type: object
                type: array
              authorizer:
                description: Authorizer represents a handler that authorizes the subject
                  ("user") from the previously validated credentials making the request.
                properties:
                  config:
                    description: Config configures the handler. Configuration keys
                      vary per handler.
                    type: object
                    x-kubernetes-preserve-unknown-fields: true
                  handler:
                    description: Name is the name of a handler
                    type: string
                required:
                - handler
                type: object
              configMapName:
                description: ConfigMapName points to the K8s ConfigMap that contains
                  these rules
                maxLength: 253
                minLength: 1
                pattern: '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
                type: string
              errors:
                items:
                  description: Error represents a handler that is responsible for
                    executing logic when an error happens.
                  properties:
                    config:
                      description: Config configures the handler. Configuration keys
                        vary per handler.
                      type: object
                      x-kubernetes-preserve-unknown-fields: true
                    handler:
                      description: Name is the name of a handler
                      type: string
                  required:
                  - handler
                  type: object
                type: array
              match:
                description: Match defines the URL(s) that an access rule should match.
                properties:
                  methods:
                    description: Methods represent an array of HTTP methods (e.g.
                      GET, POST, PUT, DELETE, ...)
                    items:
                      type: string
                    type: array
                  url:
                    description: URL is the URL that should be matched. It supports
                      regex templates.
                    type: string
                required:
                - methods
                - url
                type: object
              mutators:
                items:
                  description: Mutator represents a handler that transforms the HTTP
                    request before forwarding it.
                  properties:
                    config:
                      description: Config configures the handler. Configuration keys
                        vary per handler.
                      type: object
                      x-kubernetes-preserve-unknown-fields: true
                    handler:
                      description: Name is the name of a handler
                      type: string
                  required:
                  - handler
                  type: object
                type: array
              upstream:
                description: Upstream represents the location of a server where requests
                  matching a rule should be forwarded to.
                properties:
                  preserveHost:
                    description: PreserveHost includes the host and port of the url
                      value if set to false. If true, the host and port of the ORY
                      Oathkeeper Proxy will be used instead.
                    type: boolean
                  stripPath:
                    description: StripPath replaces the provided path prefix when
                      forwarding the requested URL to the upstream URL.
                    type: string
                  url:
                    description: URL defines the target URL for incoming requests
                    maxLength: 256
                    minLength: 3
                    pattern: ^(?:https?:\/\/)?(?:[^@\/\n]+@)?(?:www\.)?([^:\/\n]+)
                    type: string
                required:
                - url
                type: object
            required:
            - match
            type: object
          status:
            description: RuleStatus defines the observed state of Rule
            properties:
              validation:
                description: Validation defines the validation state of Rule
                properties:
                  valid:
                    type: boolean
                  validationError:
                    type: string
                type: object
            type: object
        type: object
    served: true
    storage: true
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper-maester
    app.kubernetes.io/instance: oathkeeper-maester
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper-maester
    app.kubernetes.io/part-of: oathkeeper-maester
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper-maester
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper-maester
    app.kubernetes.io/instance: oathkeeper-maester
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper-maester
    app.kubernetes.io/part-of: oathkeeper-maester
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper-maester-role
rules:
- apiGroups:
  - oathkeeper.ory.sh
  resources:
  - rules
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - create
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper-maester
    app.kubernetes.io/instance: oathkeeper-maester
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper-maester
    app.kubernetes.io/part-of: oathkeeper-maester
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper-maester-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oathkeeper-maester-role
subjects:
- kind: ServiceAccount
  name: oathkeeper-maester
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper-maester
    app.kubernetes.io/instance: oathkeeper-maester
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper-maester
    app.kubernetes.io/part-of: oathkeeper-maester
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper-maester-metrics
  namespace: default
spec:
  ports:
  - name: metrics
    port: 80
    protocol: TCP
    targetPort: metrics
  selector:
    app.kubernetes.io/component: oathkeeper-maester
    app.kubernetes.io/instance: oathkeeper-maester
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper-maester
    app.kubernetes.io/part-of: oathkeeper-maester
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper-maester
    app.kubernetes.io/instance: oathkeeper-maester
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper-maester
    app.kubernetes.io/part-of: oathkeeper-maester
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper-maester
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: oathkeeper-maester
      app.kubernetes.io/instance: oathkeeper-maester
      app.kubernetes.io/managed-by: Kustomize
      app.kubernetes.io/name: oathkeeper-maester
      app.kubernetes.io/part-of: oathkeeper-maester
      app.kubernetes.io/version: v1.0.0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: oathkeeper-maester
        app.kubernetes.io/instance: oathkeeper-maester
        app.kubernetes.io/managed-by: Kustomize
        app.kubernetes.io/name: oathkeeper-maester
        app.kubernetes.io/part-of: oathkeeper-maester
        app.kubernetes.io/version: v1.0.0
    spec:
      containers:
      - args:
        - --metrics-addr=0.0.0.0:8080
        - controller
        - --rulesConfigmapName=oathkeeper-rules
        - --rulesConfigmapNamespace=$(POD_NAMESPACE)
        command:
        - /manager
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: oryd/oathkeeper-maester:v0.1.11
        livenessProbe:
          failureThreshold: 5
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: metrics
          timeoutSeconds: 1
        name: oathkeeper-maester
        ports:
        - containerPort: 8080
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 5
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: metrics
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
          seLinuxOptions:
            level: s0:c123,c456
          seccompProfile:
            type: RuntimeDefault
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          set -eux

          cm=$(kubectl get configmap oathkeeper-rules -n $POD_NAMESPACE -o jsonpath='{.metadata.name}' 2>/dev/null || true)

          cat <<'EOF' > access-rules.json
          []
          EOF

          if [ -z "$cm" ]; then
            kubectl create configmap oathkeeper-rules --from-file=access-rules.json -n $POD_NAMESPACE
          else
            echo "ConfigMap/oathkeeper-rules already present"
          fi
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: bitnami/kubectl:1.32.0
        name: initial-rules
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
          seLinuxOptions:
            level: s0:c123,c456
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        workingDir: /tmp
      securityContext:
        fsGroup: 65534
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: oathkeeper-maester
      terminationGracePeriodSeconds: 120
      volumes:
      - emptyDir: {}
        name: tmp

Now, I know, I know, it's too much! Why the hell not just use the official Helm chart!?

By all means, if that works for you, go for it.

I just enjoy hacking way more than I'd like to admit. 🤓

Oathkeeper Configuration

The second part of the Oathkeeper story is of course the Oathkeeper server itself.

I will provide the configuration file, as it is the most crucial part of the deployment.

oathkeeper/oathkeeper-server-config.yml
access_rules:
  matching_strategy: regexp
  repositories:
    - file:///etc/rules/access-rules.json
authenticators:
  cookie_session:
    config:
      check_session_url: http://kratos-public/sessions/whoami
      extra_from: "@this"
      force_method: GET
      only:
        - ory_kratos_session
      preserve_path: true
      preserve_query: true
      subject_from: identity.id
    enabled: true
authorizers:
  allow:
    enabled: true
errors:
  fallback:
    - redirect
  handlers:
    redirect:
      config:
        return_to_query_param: return_to
        to: http://auth.localhost.com:8080/login
      enabled: true
mutators:
  header:
    config:
      headers:
        x-user-id: "{{ print .Subject }}"
    enabled: true
serve:
  api:
    port: 4456
  prometheus:
    port: 9000
  proxy:
    port: 4455
    timeout:
      read: 60m
      idle: 60m
      write: 60m
    cors:
      enabled: true
      allowed_headers:
        - accept
        - content-type
      allowed_methods:
        - GET
        - POST
        - PUT
        - DELETE
        - PATCH
      allowed_origins:
        - http://*.localhost.com
      allow_credentials: true
      debug: false

Notice the cookie_session configuration. This is where we instruct our Oathkeeper instance to query the Kratos server for available authentication and session information.

That will result in either a 200 OK, meaning the user is already logged in, or a 401 Unauthorized, meaning no session is available and the user needs to log in.
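To make that session check concrete, here is a local sketch of what Oathkeeper does with the whoami response. The JSON body is a trimmed-down, hypothetical example of Kratos' /sessions/whoami output, and the lookup mirrors the subject_from: identity.id setting above (plain python3 stands in for Oathkeeper's internal JSON-path lookup):

```shell
# Hypothetical, trimmed-down /sessions/whoami response body.
cat > whoami.json <<'EOF'
{"active": true, "identity": {"id": "c9d90e17-c1cd-4164-b75b-b1f9a4e070d2", "traits": {"email": "admin@example.com"}}}
EOF

# subject_from: identity.id — pluck the identity ID out as the subject.
subject=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["identity"]["id"])' < whoami.json)

# This subject is what later feeds the header mutator (x-user-id).
echo "subject=$subject"
```

The extracted subject is exactly what the header mutator templates into the x-user-id header for the upstream service.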

Additionally, the allowed_origins list is a crucial part of this configuration. Without it, your browser requests will be blocked by the CORS policy.

In short, the server should respond with a list of allowed "origins", i.e., the host domains that are allowed to access the server. Consequently, the browser will only send requests to those servers that have explicitly allowed the origin10.

Oathkeeper Kustomization

We have the most important part ready, it's time to deploy this bad boy!

oathkeeper/kustomization.yml
configMapGenerator:
  - behavior: replace
    files:
      - config.yml=oathkeeper-server-config.yml
    name: oathkeeper-config

resources:
  - https://github.com/meysam81/kustomizations//oathkeeper/overlays/default?ref=v1.7.2

patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/-
        value:
          name: oathkeeper-rules
          mountPath: /etc/rules
          readOnly: true
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: oathkeeper-rules
          configMap:
            defaultMode: 0400
            items:
              - key: access-rules.json
                path: access-rules.json
            name: oathkeeper-rules
    target:
      kind: Deployment

namespace: default
kubectl apply -k ./oathkeeper
kustomize build ./oathkeeper
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper
  namespace: default
---
apiVersion: v1
data:
  config.yml: |
    access_rules:
      matching_strategy: regexp
      repositories:
        - file:///etc/rules/access-rules.json
    authenticators:
      cookie_session:
        config:
          check_session_url: http://kratos-public/sessions/whoami
          extra_from: "@this"
          force_method: GET
          only:
            - ory_kratos_session
          preserve_path: true
          preserve_query: true
          subject_from: identity.id
        enabled: true
    authorizers:
      allow:
        enabled: true
    errors:
      fallback:
        - redirect
      handlers:
        redirect:
          config:
            return_to_query_param: return_to
            to: http://auth.localhost.com:8080/login
          enabled: true
    mutators:
      header:
        config:
          headers:
            x-user-id: "{{ print .Subject }}"
        enabled: true
    serve:
      api:
        port: 4456
      prometheus:
        port: 9000
      proxy:
        port: 4455
        timeout:
          read: 60m
          idle: 60m
          write: 60m
        cors:
          enabled: true
          allowed_headers:
            - accept
            - content-type
          allowed_methods:
            - GET
            - POST
            - PUT
            - DELETE
            - PATCH
          allowed_origins:
            - http://*.localhost.com
          allow_credentials: true
          debug: false
kind: ConfigMap
metadata:
  name: oathkeeper-config-7k7mfkh66h
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper-api
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http-api
  selector:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
    prometheus.io/instance: oathkeeper-metrics
  name: oathkeeper-metrics
  namespace: default
spec:
  ports:
  - name: http-metrics
    port: 80
    protocol: TCP
    targetPort: http-metrics
  selector:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper-proxy
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http-proxy
  selector:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: oathkeeper
    app.kubernetes.io/instance: oathkeeper
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: oathkeeper
    app.kubernetes.io/part-of: oathkeeper
    app.kubernetes.io/version: v1.0.0
  name: oathkeeper
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: oathkeeper
      app.kubernetes.io/instance: oathkeeper
      app.kubernetes.io/managed-by: Kustomize
      app.kubernetes.io/name: oathkeeper
      app.kubernetes.io/part-of: oathkeeper
      app.kubernetes.io/version: v1.0.0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: oathkeeper
        app.kubernetes.io/instance: oathkeeper
        app.kubernetes.io/managed-by: Kustomize
        app.kubernetes.io/name: oathkeeper
        app.kubernetes.io/part-of: oathkeeper
        app.kubernetes.io/version: v1.0.0
    spec:
      containers:
      - args:
        - serve
        - --config=/etc/oathkeeper/config.yml
        command:
        - oathkeeper
        image: oryd/oathkeeper:v0.40.8-distroless
        livenessProbe:
          failureThreshold: 5
          httpGet:
            httpHeaders:
            - name: Host
              value: 127.0.0.1
            path: /health/alive
            port: http-api
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: oathkeeper
        ports:
        - containerPort: 4456
          name: http-api
          protocol: TCP
        - containerPort: 4455
          name: http-proxy
          protocol: TCP
        - containerPort: 9000
          name: http-metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/alive
            port: http-api
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /etc/oathkeeper/config.yml
          name: oathkeeper-config
          readOnly: true
          subPath: config.yml
        - mountPath: /etc/rules
          name: oathkeeper-rules
          readOnly: true
      securityContext:
        fsGroup: 65534
        fsGroupChangePolicy: Always
      serviceAccountName: oathkeeper
      terminationGracePeriodSeconds: 300
      volumes:
      - configMap:
          defaultMode: 292
          items:
          - key: config.yml
            path: config.yml
          name: oathkeeper-config-7k7mfkh66h
        name: oathkeeper-config
      - configMap:
          defaultMode: 256
          items:
          - key: access-rules.json
            path: access-rules.json
          name: oathkeeper-rules
        name: oathkeeper-rules

Believe it or not, all is ready now. 🥳

We can safely expose our internal services behind the Ory authentication layer, all thanks to operational configuration and system administration skills, with no need to change the codebase of the upstream services.

Imagine having to add your custom-built authentication to the VictoriaMetrics codebase. Good luck with that! 😅

Kratos Self-Service UI Node

Oh, I forgot to mention. 🤭

You've seen that redirect URL in the Oathkeeper server configuration?

oathkeeper/oathkeeper-server-config.yml
errors:
  fallback:
    - redirect
  handlers:
    redirect:
      config:
        return_to_query_param: return_to
        to: http://auth.localhost.com:8080/login
      enabled: true

How about a similar configuration in the Kratos server configuration?

kratos/kratos-server-config.yml
    login:
      after:
        default_browser_return_url: http://auth.localhost.com:8080/sessions
        hooks:
          - hook: revoke_active_sessions
          - hook: require_verified_address
      ui_url: http://auth.localhost.com:8080/login

That URL also needs a deployment behind it: a frontend that can authenticate the user from the browser. Whatever the frontend may be, it needs to be able to talk to the Kratos public API and authenticate the user11.

What better fit for the task than the UI created by the Ory team itself, officially maintained and provided as an open-source project12.

And yes, I also provide the Kustomization for that sucker. 😉

kratos-selfservice-ui-node/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kratos-selfservice-ui-node
spec:
  rules:
    - host: auth.localhost.com
      http:
        paths:
          - backend:
              service:
                name: kratos-selfservice-ui-node
                port:
                  name: http
            path: /
            pathType: Prefix
kratos-selfservice-ui-node/kustomization.yml
configMapGenerator:
  - literals:
      - COOKIE_SECRET=ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
      - CSRF_COOKIE_NAME=ory_kratos_session
      - CSRF_COOKIE_SECRET=ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
      - KRATOS_ADMIN_URL=http://kratos-admin
      - KRATOS_BROWSER_URL=http://auth-server.localhost.com:8080
      - KRATOS_PUBLIC_URL=http://kratos-public
    name: kratos-selfservice-ui-node-envs
    behavior: replace

resources:
  - github.com/meysam81/kustomizations//kratos-selfservice-ui-node/overlays/default?ref=v1.7.2
  - ingress.yml

namespace: default
kubectl apply -k ./kratos-selfservice-ui-node
kustomize build ./kratos-selfservice-ui-node
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: kratos-selfservice-ui-node
    app.kubernetes.io/instance: kratos-selfservice-ui-node
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos-selfservice-ui-node
    app.kubernetes.io/part-of: kratos-selfservice-ui-node
    app.kubernetes.io/version: v1.0.0
  name: kratos-selfservice-ui-node
  namespace: default
---
apiVersion: v1
data:
  COOKIE_SECRET: ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
  CSRF_COOKIE_NAME: ory_kratos_session
  CSRF_COOKIE_SECRET: ABCDEFGHIJKLMNOPQRSTUVWXYZ123456
  KRATOS_ADMIN_URL: http://kratos-admin
  KRATOS_BROWSER_URL: http://auth-server.localhost.com:8080
  KRATOS_PUBLIC_URL: http://kratos-public
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: kratos-selfservice-ui-node
    app.kubernetes.io/instance: kratos-selfservice-ui-node
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos-selfservice-ui-node
    app.kubernetes.io/part-of: kratos-selfservice-ui-node
    app.kubernetes.io/version: v1.0.0
  name: kratos-selfservice-ui-node-envs-884fh65k6h
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: kratos-selfservice-ui-node
    app.kubernetes.io/instance: kratos-selfservice-ui-node
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos-selfservice-ui-node
    app.kubernetes.io/part-of: kratos-selfservice-ui-node
    app.kubernetes.io/version: v1.0.0
  name: kratos-selfservice-ui-node
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/component: kratos-selfservice-ui-node
    app.kubernetes.io/instance: kratos-selfservice-ui-node
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos-selfservice-ui-node
    app.kubernetes.io/part-of: kratos-selfservice-ui-node
    app.kubernetes.io/version: v1.0.0
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: kratos-selfservice-ui-node
    app.kubernetes.io/instance: kratos-selfservice-ui-node
    app.kubernetes.io/managed-by: Kustomize
    app.kubernetes.io/name: kratos-selfservice-ui-node
    app.kubernetes.io/part-of: kratos-selfservice-ui-node
    app.kubernetes.io/version: v1.0.0
  name: kratos-selfservice-ui-node
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: kratos-selfservice-ui-node
      app.kubernetes.io/instance: kratos-selfservice-ui-node
      app.kubernetes.io/managed-by: Kustomize
      app.kubernetes.io/name: kratos-selfservice-ui-node
      app.kubernetes.io/part-of: kratos-selfservice-ui-node
      app.kubernetes.io/version: v1.0.0
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: kratos-selfservice-ui-node
        app.kubernetes.io/instance: kratos-selfservice-ui-node
        app.kubernetes.io/managed-by: Kustomize
        app.kubernetes.io/name: kratos-selfservice-ui-node
        app.kubernetes.io/part-of: kratos-selfservice-ui-node
        app.kubernetes.io/version: v1.0.0
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: kratos-selfservice-ui-node-envs-884fh65k6h
        image: oryd/kratos-selfservice-ui-node:v1.3.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/alive
            port: http
          initialDelaySeconds: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: kratos-selfservice-ui-node
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health/ready
            port: http
          initialDelaySeconds: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: true
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
          seLinuxOptions:
            level: s0:c123,c456
          seccompProfile:
            type: RuntimeDefault
        volumeMounts:
        - mountPath: /home/ory
          name: tmp
        - mountPath: /.npm
          name: tmp
      securityContext:
        fsGroup: 65534
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: kratos-selfservice-ui-node
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: tmp
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kratos-selfservice-ui-node
  namespace: default
spec:
  rules:
  - host: auth.localhost.com
    http:
      paths:
      - backend:
          service:
            name: kratos-selfservice-ui-node
            port:
              name: http
        path: /
        pathType: Prefix

Protecting Unauthenticated Services

Let's go ahead and create a Rule and Ingress resource to make sure our setup is solid. 💪

protected-endpoints/vmagent.yml
apiVersion: v1
kind: List
items:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: vmagent
      namespace: default
    spec:
      rules:
        - host: vmagent.localhost.com
          http:
            paths:
              - backend:
                  service:
                    name: oathkeeper-proxy
                    port:
                      name: http
                path: /
                pathType: Prefix
  - apiVersion: oathkeeper.ory.sh/v1alpha1
    kind: Rule
    metadata:
      name: vmagent
      namespace: default
    spec:
      authenticators:
        - handler: cookie_session
      authorizer:
        handler: allow
      errors:
        - handler: redirect
      match:
        methods:
          - GET
          - POST
          - PUT
          - DELETE
          - PATCH
        url: http://vmagent.localhost.com</?.*>
      mutators:
        - handler: header
      upstream:
        preserveHost: true
        url: http://vmagent-victoria-metrics-k8s-stack:8429

Notice that by specifying the cookie_session authenticator without overriding any of its configuration values in the Rule resource, we fall back to the defaults specified in the Oathkeeper server configuration section above.
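As for the match.url, the matching_strategy: regexp in the server configuration means the part between < and > is treated as a regular expression. Here is a quick local sanity check of the equivalent anchored pattern; grep -E stands in for Oathkeeper's matcher, and the pattern is my own transcription of the rule above:

```shell
# Equivalent of http://vmagent.localhost.com</?.*> as an anchored ERE.
pattern='^http://vmagent\.localhost\.com(/?.*)$'

echo 'http://vmagent.localhost.com/targets' | grep -Eq "$pattern" && echo 'match: path'
echo 'http://vmagent.localhost.com'         | grep -Eq "$pattern" && echo 'match: bare host'
echo 'http://evil.example.com/x'            | grep -Eq "$pattern" || echo 'no match: wrong host'
```

The optional `/?` followed by `.*` means both the bare host and any sub-path are matched, while the anchoring keeps other hosts out.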

Apply this resource and we'll be able to verify our setup.

kubectl apply -f ./protected-endpoints/

It takes a while for Ory Oathkeeper to get notified about the changes to the Access Rule, but eventually, the following logs should be visible in deploy/oathkeeper:

kubectl logs deploy/oathkeeper -c oathkeeper
time=2024-12-26T12:31:19Z level=info msg=Detected access rule repository change, processing updates. audience=application repos=[file:///etc/rules/access-rules.json] service_name=ORY Oathkeeper service_version=v0.40.8
time=2024-12-26T12:31:19Z level=info msg=Detected file change for access rules. Triggering a reload. audience=application event=fsnotify file=/etc/rules/access-rules.json service_name=ORY Oathkeeper service_version=v0.40.8

Let's open our browser and navigate to the newly created address to see if we hit the expected authentication layer.

http://vmagent.localhost.com:8080

And the result is, unsurprisingly, a 302 redirect to the Kratos Self-Service UI, after which we log in and gain access to the upstream service.
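The shape of that redirect follows directly from the error handler's config: the login URL, plus the originally requested URL percent-encoded into the return_to query parameter. A small sketch of how such a Location header is composed (python3 is used here only for the percent-encoding; the URLs are the ones from this walkthrough):

```shell
login_url='http://auth.localhost.com:8080/login'
requested='http://vmagent.localhost.com:8080/'

# Percent-encode the requested URL so it survives as a query parameter.
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$requested")

echo "Location: ${login_url}?return_to=${encoded}"
# → Location: http://auth.localhost.com:8080/login?return_to=http%3A%2F%2Fvmagent.localhost.com%3A8080%2F
```

After a successful login, the UI sends the user back to that return_to address, landing them on the protected service.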

Login Page

If we try to register a new account, the result is, as expected, not allowed:

{
  "id": "c9d90e17-c1cd-4164-b75b-b1f9a4e070d2",
  "error": {
    "id": "self_service_flow_disabled",
    "code": 400,
    "reason": "Registration is not allowed because it was disabled.",
    "status": "Bad Request",
    "message": "registration flow disabled"
  },
  "created_at": "2024-12-26T13:14:02.646882Z",
  "updated_at": "2024-12-26T13:14:02.646882Z"
}
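That 400 is by design: the registration flow is switched off in the Kratos server configuration. A minimal sketch of the relevant key (path per Kratos' selfservice configuration schema; the full Kratos config lives in an earlier section):

```yaml
selfservice:
  flows:
    registration:
      enabled: false
```

With registration disabled, only identities provisioned by an administrator (or via the allowed OIDC provider) can sign in.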

That concludes the main objective of this post. 🎉 💃

Google Social Sign-In

Before we close this off, there is one last bonus topic I find fitting to discuss here.

You have seen the Kratos server configuration holding an oidc.config.providers list with an entry for google.

kratos/kratos-server-config.yml
    oidc:
      config:
        providers:
          - client_id: SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_ID
            client_secret: SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_SECRET
            id: google
            label: Google
            mapper_url: https://gist.githubusercontent.com/meysam81/8bb993daa8ebfeb244ccc7008a1a8586/raw/2fb54e409e808bf901d06f10b51329f46a7e22af/google.jsonnet
            provider: google
            requested_claims:
              id_token:
                email:
                  essential: true
                email_verified:
                  essential: true
            scope:
              - email
              - profile
      enabled: true

That requires you to create a Google OAuth 2.0 Client ID and Secret13, and provide them to the Kratos server, either as environment variables (e.g. using External Secrets Operator), or by passing them in the configuration file (not recommended)14.

Below, you will find screenshots showing how to create an OAuth 2.0 Client.

First, head over to the Google Cloud Console at https://console.cloud.google.com.

Google Cloud Console New Project
Google Cloud Console New Project

Create a new project and name it as you see fit.

Project Name
Project Name

Confusingly enough, just creating the project doesn't select it for you, unless it's your first project. So, make sure to pick the project from the top-left.

Once you do, head over to the APIs & Services section and then Credentials. I always search for "cred" at the top search bar and get to it in an instant.

Console Search Bar
Console Search Bar
Credentials Page
Credentials Page

You will first have to "Configure Consent Screen" to provide the necessary information about your application.

After that, you can create a new OAuth 2.0 Client ID.

Google Workspace Account

Bear in mind that the setup provided in this guide works only for Google Workspace accounts; restricting sign-in to users of your domain is only possible with those accounts.

For personal accounts, you will have to either manually add "test users" to your trusted list or open the app to the public, which beats the whole purpose of gating your services behind authentication! 😖

Once you have created the OAuth 2.0 credentials, provide them to the cluster using any secret management setup of your choice.

kubectl create secret generic kratos-google-oauth2-credentials \
  --from-literal=SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_ID=YOUR_CLIENT_ID \
  --from-literal=SELFSERVICE_METHODS_OIDC_CONFIG_PROVIDERS_0_CLIENT_SECRET=YOUR_CLIENT_SECRET

kubectl patch deploy/kratos --type=json \
  -p='[{
    "op": "add",
    "path": "/spec/template/spec/containers/0/envFrom/-",
    "value": {
      "secretRef": {
        "name": "kratos-google-oauth2-credentials"
      }
    }
  }]'

Believe me, it's done, we're done, you're done. 👏

Thank you for sticking around till the end. 🌹

Further Reading

If you liked this piece, you may find the following blog posts to your liking:

Happy hacking and until next time 🫡, ciao. 🐧 🦀

If you enjoyed this blog post, consider sharing it with these buttons 👇. Please leave a comment for us at the end, we read & love 'em all. ❣
