
Getting Started with GitOps and FluxCD

Learn how to leverage your Git repository, GitOps style, to manage your Kubernetes cluster with FluxCD. Enhance your delivery and reduce deployment friction with GitOps.

Introduction

GitOps is a modern approach to managing infrastructure and applications. It leverages Git repositories as the source of truth for your infrastructure and application configurations. By using GitOps, you can automate your deployment processes, enhance your delivery pipeline, and reduce deployment friction.

In this guide, we will explore the fundamentals of GitOps and FluxCD. We will learn how to set up FluxCD in your Kubernetes cluster and automate your deployments.

Prerequisites

Before we start, make sure you have the following:

  • A Kubernetes cluster up and running
  • A Git repository to store your Kubernetes manifests
  • The FluxCD[1] binary installed in your PATH (v2.2.3 at the time of writing)
  • Optionally, the GitHub CLI (gh)[2] for easier GitHub operations (v2.47.0 at the time of writing)
  • A basic understanding of Kustomize (a topic for a future post)

What is GitOps?

As mentioned above, GitOps treats your Git repository as the single source of truth for your infrastructure and application configurations: every change flows through Git, where it can be reviewed, audited, and rolled back.

GitOps Definition by Wikipedia

GitOps evolved from DevOps. The specific state of deployment configuration is version-controlled, and because the most popular version-control system is Git, the approach has been named after it. Changes to configuration can be managed using code review practices and rolled back through version control. Essentially, all changes to the code are tracked and bookmarked, making any audit of the history easier. As explained by Red Hat, "visibility to change means the ability to trace and reproduce issues quickly, improving overall security."[3]

What is FluxCD?

FluxCD is a popular GitOps operator for Kubernetes. It automates the deployment of your applications and infrastructure configurations by syncing them with your Git repository. FluxCD watches your Git repository for changes and applies them to your Kubernetes cluster.

FluxCD Setup & Automation

Bootstrapping refers to the initial setup of FluxCD in your Kubernetes cluster, after which FluxCD will continuously watch your Git repository for changes and apply them to your cluster.

One of the benefits of bootstrapping FluxCD this way is that you can even upgrade FluxCD itself using the same GitOps approach, just as you would with your applications.

That means less manual intervention and more automation, especially if you opt for an automated FluxCD upgrade process[4]. I don't know about you, but I can't get enough automation in my life 😁.

Automated FluxCD Upgrade

While this is not the topic of today's post, it's worth mentioning as a side note that you can automate the FluxCD upgrade process using the power of your CI/CD pipelines.

For example, you can see below a step of a GitHub Actions workflow that upgrades FluxCD to the latest version (source[5]):

- name: Setup Flux CLI
  uses: fluxcd/flux2/action@main
  with:
    # Flux CLI version e.g. 2.0.0.
    # Defaults to latest stable release.
    version: 'latest'

    # Alternative download location for the Flux CLI binary.
    # Defaults to path relative to $RUNNER_TOOL_CACHE.
    bindir: ''

Step 0: Check Prerequisites

You can check whether your initial setup is acceptable to FluxCD using the following command:

flux check --pre

Creating the GitHub Repository

Skip this step if you already have a GitHub repository ready for FluxCD.

Repository

FluxCD will create the repository as part of the bootstrap process if it doesn't exist; doing this step yourself only gives you more flexibility for customization.

You will need the GitHub CLI[2] installed for the following to work.

gh repo create getting-started-with-gitops --clone --public
cd getting-started-with-gitops

Root Reconciler

FluxCD bootstrap is able to create any initial resources you place in its bootstrap path, which means we can spin up any and all the resources we need alongside FluxCD with a single command.

That's why, in the same path as the FluxCD bootstrap, we will create a root Kustomization that will control all the subdirectories and reconcile the resources as needed.

This will later be used to create the monitoring stack and all the bells and whistles that come with it.

clusters/dev/k8s.yml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: k8s
  namespace: flux-system
spec:
  interval: 10s
  path: ./dev
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  timeout: 2m
  wait: true
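
To make this concrete, here is a sketch of the repository layout we are building toward in this guide; the flux-system directory is generated by the bootstrap in Step 1, and the stack directory names under dev are illustrative (they only need to match what your kustomization files reference):

.
├── clusters
│   └── dev
│       ├── flux-system        # generated by flux bootstrap
│       └── k8s.yml            # the root Kustomization above
└── dev
    ├── monitoring             # the monitoring stack below
    └── echo-server            # the sample app from Step 3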

And one of the stacks that will be managed by this root Kustomization, the monitoring stack, is as follows:

kustomization.yml
resources:
  - namespace.yml
  - repository.yml
  - release.yml

namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring

repository.yml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: grafana
  namespace: monitoring
spec:
  interval: 10m
  url: https://grafana.github.io/helm-charts

release.yml
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: loki-stack
  namespace: monitoring
spec:
  chart:
    spec:
      chart: loki-stack
      sourceRef:
        kind: HelmRepository
        name: grafana
      version: 2.x
  interval: 10m
  timeout: 2m
  values:
    grafana:
      enabled: true
    prometheus:
      enabled: true

Create a GitHub Personal Access Token

We will need a GitHub Personal Access Token[7] with the repo scope. You can see the token creation screenshot below:

Generating a GitHub Personal Access Token (PAT)

Use the newly created token for the next step.

Step 1: Bootstrapping FluxCD

We can now spin up FluxCD in our Kubernetes cluster using the following command:

export GITHUB_TOKEN="TOKEN_FROM_THE_LAST_STEP"
export GITHUB_ACCOUNT="developer-friendly"
export GITHUB_REPO="getting-started-with-gitops"
flux bootstrap github \
  --owner=${GITHUB_ACCOUNT} \
  --repository=${GITHUB_REPO} \
  --private=false \
  --personal=true \
  --path=clusters/dev

It will take a moment or two for everything to reconcile, but after that, FluxCD will be up and running in your Kubernetes cluster.

Check the state of the cluster

You can check the status using the following command.

flux check

We can also check the pods, Kustomization and HelmRelease resources.

kubectl get pods -A
kubectl get kustomizations,helmreleases -A # ks,hr for short

The final status of our loki-stack HelmRelease will transition from this:

Running 'install' action with timeout of 2m0s

To this, where the chart version is the latest 2.x release resolved at install time:

Helm install succeeded for release monitoring/loki-stack.v1 with chart loki-stack@<version>
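
If you want to watch this transition yourself, the Flux CLI reports the live status of the release; a couple of standard commands:

flux get helmreleases -n monitoring

# for the full event history of the release
kubectl describe helmrelease loki-stack -n monitoring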

Step 2: Monitoring the Cluster

We now have the monitoring stack up and running in our Kubernetes cluster. Let's leverage it to deliver our alerts and notifications to the Prometheus Alertmanager[8].

Because monitoring necessitates sane alerting, we need a mechanism to be notified about the events of our cluster at different severities. That's where FluxCD's notification controller[6] comes into play.

In this step we will create a Provider for FluxCD to send notifications and alerts to our in-cluster Alertmanager, after which the admin/operator can decide how to handle them using the AlertmanagerConfig resource.

Alertmanager Configuration

Stay tuned for a future post where we will explore how to configure Alertmanager to send notifications to various channels like Slack, Email, and more.

kustomization.yml
resources:
  - alertmanager-address.yml
  - alertmanager.yml
  - alert.yml
  - info.yml

alertmanager-address.yml
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-address
  namespace: flux-system
stringData:
  address: http://loki-stack-alertmanager.monitoring:9093/api/v2/alerts
type: Opaque

alertmanager.yml
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: alertmanager
  namespace: flux-system
spec:
  secretRef:
    name: alertmanager-address
  type: alertmanager

And the notification resources are as follows:

alert.yml
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: alert
  namespace: flux-system
spec:
  eventSeverity: error
  eventMetadata:
    severity: error
  eventSources:
  - kind: GitRepository
    name: '*'
    namespace: flux-system
  - kind: Kustomization
    name: '*'
    namespace: flux-system
  - kind: HelmRelease
    name: '*'
    namespace: monitoring
  - kind: Kustomization
    name: '*'
    namespace: default
  providerRef:
    name: alertmanager
  summary: FluxCD reconciliation error

info.yml
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: info
  namespace: flux-system
spec:
  eventSeverity: info
  eventMetadata:
    severity: info
  eventSources:
  - kind: GitRepository
    name: '*'
    namespace: flux-system
  - kind: Kustomization
    name: '*'
    namespace: flux-system
  - kind: HelmRelease
    name: '*'
    namespace: monitoring
  - kind: Kustomization
    name: '*'
    namespace: default
  providerRef:
    name: alertmanager
  summary: FluxCD reconciliation info
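
Once FluxCD has reconciled these manifests, you can verify that the notification objects exist and are ready; a quick check using the standard Flux CLI and kubectl:

# the two Alert resources and their readiness
flux get alerts -n flux-system

# the Provider and the Secret holding the Alertmanager address
kubectl -n flux-system get providers.notification.toolkit.fluxcd.io alertmanager
kubectl -n flux-system get secret alertmanager-address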

There are some important notes worth mentioning here:

  1. We didn't run any kubectl apply command after writing our new manifests and committing them to the repository; FluxCD took care of that behind the scenes. The root reconciler is a Kustomization resource, which is recursive in nature and will apply all the kustomization.yml files in the subdirectories.
  2. The alertmanager-address Secret needs to be in the same namespace as the Provider resource. This is due to the design of Kubernetes itself and has little to do with FluxCD.
  3. Having notifications at different severities allows you and your team to receive highlights about the live state of your cluster as you see fit. For example, you might route the informational notifications, which are likely noisier, to a muted Slack channel, while sending the critical alerts to a pager system that notifies the right people at the right time.

Reconciliation

All the manifests we created so far are committed to the repository and pushed to the remote. We didn't need any kubectl apply command to apply those resources; as long as we write and commit our manifests under the same tree structure, FluxCD will create them in the cluster.
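
Changes are picked up on the next sync interval; if you don't feel like waiting, the Flux CLI lets you trigger a reconciliation on demand:

# fetch the latest commit from the Git source
flux reconcile source git flux-system

# then re-apply the root Kustomization against it
flux reconcile kustomization k8s --with-source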

Step 3: Trigger a Notification

We have created the required resources for the notifications to be sent to the Prometheus Alertmanager.

To take it for a spin, we can create a sample application to trigger the info notification.

dev/echo-server/kustomization.yml
resources:
  - deployment.yml
  - service.yml

configMapGenerator:
  - files:
      - configs.env
    name: echo-server

images:
  - name: jmalloc/echo-server
    newTag: 0.3.6

namespace: default

dev/echo-server/configs.env
LOG_HTTP_HEADERS=STDOUT
LOG_HTTP_BODY=STDOUT

dev/echo-server/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: jmalloc/echo-server
          ports:
            - containerPort: 8080
              name: http
          envFrom:
            - configMapRef:
                name: echo-server

dev/echo-server/service.yml
apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: echo-server
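
Once FluxCD has created these resources, a quick smoke test confirms the echo server is serving traffic; the Service listens on port 80 in the default namespace:

kubectl port-forward -n default svc/echo-server 8080:80 &
# the echo server replies with the details of the request it received
curl http://localhost:8080/hello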

We won't go into much detail for the Kustomize resource as that is a topic for another post and deserves more depth.

However, pay close attention to the syntax of configs.env and the way we have employed configMapGenerator in the kustomization.yml file.

This will ensure that for every change to the configs.env file, the resulting ConfigMap resource is re-created with a new hash-suffixed name, which consequently triggers a rollout of the Deployment so it picks up the new values[9].

This is an important highlight because you have to specify your Deployment strategy carefully if you want to avoid downtime in your applications.
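
Since configMapGenerator runs at build time, you can preview the generated name locally without touching the cluster; for example, with kubectl's built-in kustomize:

kubectl kustomize dev/echo-server | grep -A 2 'kind: ConfigMap'
# metadata.name will be echo-server- followed by a content hash,
# and it changes whenever configs.env changes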

Kustomize

We will dive into Kustomize and all its powerful and expressive features in a future post. Stay tuned to learn more about it.

To see that our notification has arrived at Alertmanager, we will jump over to the Alertmanager service using the port-forwarding technique, although in a real-world scenario you'd expose it through either an Ingress Controller or the Gateway API (a topic for another post 😉).

kubectl port-forward -n monitoring svc/loki-stack-alertmanager 9093:9093 &

Sure enough, if we open http://localhost:9093, we will see the notification in the Alertmanager UI as seen in the screenshot below.

Alertmanager UI: info notification triggered

Trigger a Critical Alert

Now, let's break the app to see if the severity of the notification changes as expected.

dev/echo-server/kustomization.yml
resources:
  - deployment.yml
  - service.yml

configMapGenerator:
  - files:
      - configs.env
    name: echo-server

images:
  - name: jmalloc/echo-server
    newTag: non-existent-tag

namespace: default
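
With a tag that doesn't exist, Kubernetes can't pull the image, so the new pod never becomes ready and the root Kustomization's health check (wait: true) eventually fails, firing the error alert. You can watch the pod stall using the label from the Deployment:

kubectl get pods -n default -l app=echo-server
# the new pod will report ErrImagePull / ImagePullBackOff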

And lo and behold, the Alertmanager UI will now show the critical alert as seen below.

Alertmanager UI: error alert triggered

To restore the application to its normal state, you can revert the changes, commit to the repository and let FluxCD do its magic.
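
A minimal sketch of that revert, assuming the breaking change was the latest commit on the branch:

git revert HEAD --no-edit
git push

# optionally, skip the wait for the next sync interval
flux reconcile source git flux-system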

Conclusion

That concludes our guide on getting started with GitOps and FluxCD. We have covered most of the essential components and concepts of GitOps and FluxCD.

We have deployed the monitoring stack right out of the box and provided a minimum working example[10] of how to structure your repository in a way that reduces the friction of your deployments in an automated, GitOps fashion.

Lastly, we have deployed an application and triggered both informational and critical alerts to the Prometheus Alertmanager. By observing the notifications in the Alertmanager UI, we have seen how the notifications are routed based on their severity.

In a future post, we will explore more integrations with this setup: how to route the notifications from Alertmanager to external services like Slack, Discord, etc., and how to manage your secrets securely so that you don't have to commit them to your repository.

Another topic we didn't cover here was the Receiver resource. It requires internet access to your cluster, which we'll cover in a later post when discussing the Kubernetes Gateway API[11].

Until next time, ciao 🐧 🦀 & happy coding! 🤓

Source Code

The full repository is publicly available on GitHub[12] under the Apache 2.0 license.