
Migration From Promtail to Alloy: The What, the Why, and the How

Promtail is (was) a lightweight log collection solution that ships logs over HTTP to a remote backend. That backend is normally Loki, but you can choose to send the logs to VictoriaLogs as well.

In this blog post, you will see the newer alternative to Promtail, Grafana Alloy. You will see what it is, why it's a good idea to migrate, and how to make the jump with the least friction.

Introduction

Due to the recent decision by the Grafana team to deprecate Promtail1, I got the chance to revisit my logging stack.

Promtail has been my default & favorite choice over the last few years due to its simplicity and negligible overhead.

It was powerful enough to collect all sorts of logs, whether Kubernetes pod logs or the journal logs of the Linux operating system2.

However, from the outside and by the looks of it, it appears that the team was having a hard time maintaining that solution alongside their other powerful offerings.

And so it happens that they have decided to let go of it, integrating its powerful features into the currently supported Grafana Alloy3.

Beyond the fear of change4, I was hesitant to go through with the migration for what I considered very good reasons:

  1. Promtail is lightweight. Download the compiled Golang binary and run it anywhere and everywhere. What can be more desirable than that!?
  2. Promtail has support for almost all the log implementations. There is a good chance that you can scrape anything with Promtail and ship it to the remote backend.

And so, after much back and forth, I finally decided to jump ship! 🛳

This blog post and today's story is what I've learned along the way, plus some of the cool features I discovered in Alloy that I may or may not have had with Promtail.

Disclaimer

This blog post is not sponsored by Grafana in any way. I don't get a single dime promoting their products.

I am just a happy user. 😇

P.S. I write about and promote any open-source software I find compelling; that is the main focus of all the blog posts here.

What is Alloy?

Alloy is your one-stop shop for telemetry collection, be it metrics, logs, or traces5.

In a nutshell, the simplified version of how Alloy works can be boiled down to this:

  • It receives inputs and data from many adapters (hence the name receiver).
  • It may or may not apply some processing to the received data.
  • Lastly, it ships that data to the specified backend (the exporter).
flowchart TD
    A[Receivers] --> B{Optional: Processors}
    B --> C[Exporters]
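
To make that flow concrete, here's a minimal sketch in Alloy's configuration syntax, assuming a hypothetical local log file and a placeholder Loki endpoint: one source (receiver), one optional processing stage, and one writer (exporter).

// Receiver: tail a local log file (path is a placeholder)
loki.source.file "example" {
    targets    = [{__path__ = "/var/log/example.log"}]
    forward_to = [loki.process.example.receiver]
}

// Optional processor: drop noisy debug lines
loki.process "example" {
    forward_to = [loki.write.example.receiver]

    stage.drop {
        expression = ".*DEBUG.*"
    }
}

// Exporter: ship whatever is left to the remote backend
loki.write "example" {
    endpoint {
        url = "http://loki-gateway/loki/api/v1/push"
    }
}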

It has native support for many of the available receivers and exporters, and with its powerful processors, you can do all sorts of crazy stuff like relabeling, sampling, decolorizing, reformatting, etc.6

If you're writing code in any modern programming language, you will almost certainly have a way to collect all the telemetry data you require from your application with Grafana Alloy.

Known Competitor

For those of you coming from the OpenTelemetry world, Alloy is the alternative to the OpenTelemetry Collector.

Why Grafana Alloy?

Just to name a few, here are the highlights of what Alloy is capable of:

  • Collecting logs from Linux journal
  • Capable of discovering targets with native support for Kubernetes
  • Batching metrics, logs and traces to reduce network traffic overhead
  • Sampling and downsampling to reduce noise and data size
  • Relabeling data before pushing it to the remote backend
  • Support for almost all the storage backends in the observability world
  • Native support for OpenTelemetry protocol
  • Compatible with the Prometheus and Loki APIs, and as a result, VictoriaMetrics & VictoriaLogs7.

Grafana Ecosystem

Another smooth benefit of Alloy is its native integration with the rest of the Grafana ecosystem. This may be considered harmful if you want to avoid vendor lock-in; however, if you're using any of the other Grafana products, it will work to your advantage.

Now, this list most likely just scratches the surface of what Alloy is capable of.

But, to tell you the truth, this is more than enough for what I want from a collector agent! 💪

How to Deploy Grafana Alloy?

So far, we've only seen what Alloy is and what it can do. How to actually deploy it is the focus of the remainder of this blog post.

First and foremost, if you're running on a Promtail stack, you would want to migrate your current config with little to no friction.

Alloy CLI has you covered8.

With alloy convert, you can pass in your current promtail.yml file and get a compatible configuration in Alloy's HCL-like syntax that can be passed to the Alloy binary9.

Alloy CLI

Just like Grafana Promtail, Alloy is written in Golang.

This has a great upside: you can grab a compiled binary from the GitHub release page and use it as is, batteries included11.
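
As a rough sketch (assuming a Linux amd64 host and that the release asset follows the usual alloy-<os>-<arch>.zip naming), running it standalone can be as simple as:

# Grab the latest release binary (adjust OS/architecture to your machine)
curl -LO https://github.com/grafana/alloy/releases/latest/download/alloy-linux-amd64.zip
unzip alloy-linux-amd64.zip
chmod +x alloy-linux-amd64

# Run it against a local configuration file
./alloy-linux-amd64 run ./config.alloy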

If you're already running promtail in your Kubernetes cluster:

# Get your current config
kubectl get -n monitoring secret/promtail \
  -o jsonpath='{.data.promtail\.yaml}' \
  | \base64 -d \
  | tee promtail.yml

# Convert it with native support
alloy convert --source-format=promtail \
  --bypass-errors -o alloy.hcl promtail.yml

And if you're just starting out, you might want to use the proven configuration from the Promtail Helm chart.

helm template promtail grafana/promtail --version=6.16.x \
  | tee promtail.yml

In the promtail.yml file, look for kind: Secret and grab the content under stringData.

promtail.yml
---
# ...truncated...

# Source: promtail/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: promtail
  namespace: default
  labels:
    helm.sh/chart: promtail-6.16.6
    app.kubernetes.io/name: promtail
    app.kubernetes.io/instance: promtail
    app.kubernetes.io/version: "3.0.0"
    app.kubernetes.io/managed-by: Helm
stringData:
  promtail.yaml: |
    server:
      log_level: info
      log_format: logfmt
      http_listen_port: 3101


    clients:
      - url: http://loki-gateway/loki/api/v1/push

    positions:
      filename: /run/promtail/positions.yaml

    scrape_configs:
      # See also https://github.com/grafana/loki/blob/master/production/ksonnet/promtail/scrape_config.libsonnet for reference
      - job_name: kubernetes-pods
        pipeline_stages:
          - cri: {}
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels:
              - __meta_kubernetes_pod_controller_name
            regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
            action: replace
            target_label: __tmp_controller_name
          - source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_name
              - __meta_kubernetes_pod_label_app
              - __tmp_controller_name
              - __meta_kubernetes_pod_name
            regex: ^;*([^;]+)(;.*)?$
            action: replace
            target_label: app
          - source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_instance
              - __meta_kubernetes_pod_label_instance
            regex: ^;*([^;]+)(;.*)?$
            action: replace
            target_label: instance
          - source_labels:
              - __meta_kubernetes_pod_label_app_kubernetes_io_component
              - __meta_kubernetes_pod_label_component
            regex: ^;*([^;]+)(;.*)?$
            action: replace
            target_label: component
          - action: replace
            source_labels:
            - __meta_kubernetes_pod_node_name
            target_label: node_name
          - action: replace
            source_labels:
            - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            replacement: $1
            separator: /
            source_labels:
            - namespace
            - app
            target_label: job
          - action: replace
            source_labels:
            - __meta_kubernetes_pod_name
            target_label: pod
          - action: replace
            source_labels:
            - __meta_kubernetes_pod_container_name
            target_label: container
          - action: replace
            replacement: /var/log/pods/*$1/*.log
            separator: /
            source_labels:
            - __meta_kubernetes_pod_uid
            - __meta_kubernetes_pod_container_name
            target_label: __path__
          - action: replace
            regex: true/(.*)
            replacement: /var/log/pods/*$1/*.log
            separator: /
            source_labels:
            - __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
            - __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
            - __meta_kubernetes_pod_container_name
            target_label: __path__



    limits_config:


    tracing:
      enabled: false

# ...truncated...
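
If you'd rather not copy-paste that block by hand, a quick sketch with yq (assuming yq v4 is installed) can extract it from the templated output into its own file:

# Pull the embedded Promtail config out of the Secret document
yq 'select(.kind == "Secret") | .stringData."promtail.yaml"' promtail.yml \
  | tee promtail-config.yml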

With that configuration, we can once again run alloy convert and get a head start on our journey to deploying Grafana Alloy.

Here's what the base configuration will look like if you convert the latest promtail.yml configuration.

alloy convert
discovery.kubernetes "kubernetes_pods" {
    role = "pod"
}

discovery.relabel "kubernetes_pods" {
    targets = discovery.kubernetes.kubernetes_pods.targets

    rule {
        source_labels = ["__meta_kubernetes_pod_controller_name"]
        regex         = "([0-9a-z-.]+?)(-[0-9a-f]{8,10})?"
        target_label  = "__tmp_controller_name"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name", "__meta_kubernetes_pod_label_app", "__tmp_controller_name", "__meta_kubernetes_pod_name"]
        regex         = "^;*([^;]+)(;.*)?$"
        target_label  = "app"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_instance", "__meta_kubernetes_pod_label_instance"]
        regex         = "^;*([^;]+)(;.*)?$"
        target_label  = "instance"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_component", "__meta_kubernetes_pod_label_component"]
        regex         = "^;*([^;]+)(;.*)?$"
        target_label  = "component"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_node_name"]
        target_label  = "node_name"
    }

    rule {
        source_labels = ["__meta_kubernetes_namespace"]
        target_label  = "namespace"
    }

    rule {
        source_labels = ["namespace", "app"]
        separator     = "/"
        target_label  = "job"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_name"]
        target_label  = "pod"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_container_name"]
        target_label  = "container"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
        separator     = "/"
        target_label  = "__path__"
        replacement   = "/var/log/pods/*$1/*.log"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash", "__meta_kubernetes_pod_annotation_kubernetes_io_config_hash", "__meta_kubernetes_pod_container_name"]
        separator     = "/"
        regex         = "true/(.*)"
        target_label  = "__path__"
        replacement   = "/var/log/pods/*$1/*.log"
    }
}

local.file_match "kubernetes_pods" {
    path_targets = discovery.relabel.kubernetes_pods.output
}

loki.process "kubernetes_pods" {
    forward_to = [loki.write.default.receiver]

    stage.cri { }
}

loki.source.file "kubernetes_pods" {
    targets               = local.file_match.kubernetes_pods.targets
    forward_to            = [loki.process.kubernetes_pods.receiver]
    legacy_positions_file = "/run/promtail/positions.yaml"
}

loki.write "default" {
    endpoint {
        url = "http://loki-gateway/loki/api/v1/push"
    }
    external_labels = {}
}

Customizing the Configuration

The converted config above is a good start, but it doesn't quite cut it for me! 😅

I'll save you the headache and give you the complete end result in one go; the detailed explanations follow right after!

If you know what this config does, feel free to skip the rest of this blog post. 🙌

alloy.hcl
##########################################################
#                        GENERAL
##########################################################

livedebugging {
    enabled = true
}

##########################################################
#                        LOGGING
##########################################################

discovery.kubernetes "kubernetes_pods" {
    role = "pod"
}

discovery.relabel "kubernetes_pods" {
    targets = discovery.kubernetes.kubernetes_pods.targets

    rule {
        source_labels = ["__meta_kubernetes_pod_controller_name"]
        regex         = "([0-9a-z-.]+?)(-[0-9a-f]{8,10})?"
        target_label  = "__tmp_controller_name"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name", "__meta_kubernetes_pod_label_app", "__tmp_controller_name", "__meta_kubernetes_pod_name"]
        regex         = "^;*([^;]+)(;.*)?$"
        target_label  = "app"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_instance", "__meta_kubernetes_pod_label_instance"]
        regex         = "^;*([^;]+)(;.*)?$"
        target_label  = "instance"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_component", "__meta_kubernetes_pod_label_component"]
        regex         = "^;*([^;]+)(;.*)?$"
        target_label  = "component"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_node_name"]
        target_label  = "node_name"
    }

    rule {
        source_labels = ["__meta_kubernetes_namespace"]
        target_label  = "namespace"
    }

    rule {
        source_labels = ["namespace", "app"]
        separator     = "/"
        target_label  = "job"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_name"]
        target_label  = "pod"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_container_name"]
        target_label  = "container"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
        separator     = "/"
        target_label  = "__path__"
        replacement   = "/var/log/pods/*$1/*.log"
    }

    rule {
        source_labels = ["__meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash", "__meta_kubernetes_pod_annotation_kubernetes_io_config_hash", "__meta_kubernetes_pod_container_name"]
        separator     = "/"
        regex         = "true/(.*)"
        target_label  = "__path__"
        replacement   = "/var/log/pods/*$1/*.log"
    }
}

local.file_match "kubernetes_pods" {
    path_targets = discovery.relabel.kubernetes_pods.output
}

loki.process "kubernetes_pods" {
    forward_to = [loki.write.default.receiver]

    stage.cri { }

    stage.decolorize { }

    stage.drop {
        expression = ".*(\\/health|\\/metrics|\\/ping).*"
    }
}

loki.source.file "kubernetes_pods" {
    targets               = local.file_match.kubernetes_pods.targets
    forward_to            = [loki.process.kubernetes_pods.receiver]
    legacy_positions_file = "/run/promtail/positions.yaml"
}

discovery.relabel "systemd_journal" {
    targets = []

    rule {
        source_labels = ["__journal__systemd_unit"]
        target_label  = "unit"
    }

    rule {
        source_labels = ["__journal__hostname"]
        target_label  = "hostname"
    }

    rule {
        source_labels = ["__journal__boot_id"]
        target_label  = "boot_id"
    }

    rule {
        source_labels = ["__journal__machine_id"]
        target_label  = "machine_id"
    }

    rule {
        source_labels = ["__journal__priority"]
        target_label  = "priority"
    }

    rule {
        source_labels = ["__journal__syslog_identifier"]
        target_label  = "syslog_identifier"
    }

    rule {
        source_labels = ["__journal__transport"]
        target_label  = "transport"
    }

    rule {
        source_labels = ["unit"]
        target_label  = "_stream"
        replacement   = "unit=\"$1\""
    }
}

loki.source.journal "systemd_journal" {
    path          = "/var/log/journal"
    relabel_rules = discovery.relabel.systemd_journal.rules
    forward_to    = [loki.write.default.receiver]
    labels        = {}
}

loki.source.kubernetes_events "cluster_events" {
    job_name   = "integrations/kubernetes/eventhandler"
    log_format = "logfmt"
    forward_to = [
        loki.process.cluster_events.receiver,
    ]
}

loki.process "cluster_events" {
    forward_to = [loki.write.default.receiver]

    stage.regex {
        expression = ".*name=(?P<name>[^ ]+).*kind=(?P<kind>[^ ]+).*objectAPIversion=(?P<apiVersion>[^ ]+).*type=(?P<type>[^ ]+).*"
    }

    stage.labels {
        values = {
            kubernetes_cluster_events = "job",
            name                      = "name",
            kind                      = "kind",
            apiVersion                = "apiVersion",
            type                      = "type",
        }
    }
}

loki.write "default" {
    endpoint {
        url       = "http://vlogs-victorialogs.monitoring:9428/insert/loki/api/v1/push?_stream_fields=instance,job,host,app&disable_message_parsing=1"
        tenant_id = "0:0"
    }
    external_labels = {}
}

##########################################################
#                        TRACING
##########################################################

otelcol.receiver.otlp "default" {
    grpc {
        endpoint = "0.0.0.0:4317"
    }

    http {
        endpoint = "0.0.0.0:4318"
    }

    output {
        metrics = [otelcol.processor.batch.default.input]
        logs    = [otelcol.processor.batch.default.input]
        traces  = [otelcol.connector.servicegraph.default.input, otelcol.processor.batch.default.input]
    }
}

otelcol.connector.servicegraph "default" {
    dimensions = ["http.method"]

    debug_metrics { }

    output {
        metrics = [otelcol.exporter.prometheus.default.input]
    }
}

otelcol.processor.batch "default" {
    output {
        metrics = [otelcol.exporter.otlp.default.input]
        logs    = [otelcol.exporter.otlp.default.input]
        traces  = [otelcol.exporter.otlp.default.input]
    }
}

otelcol.exporter.otlp "default" {
    client {
        endpoint = "tempo.monitoring:4317"

        tls {
            insecure = true
        }
    }
}

otelcol.exporter.prometheus "default" {
    forward_to = [prometheus.remote_write.default.receiver]
}

##########################################################
#                        METRICS
##########################################################

discovery.kubernetes "services" {
  role = "service"
}

prometheus.scrape "services" {
  targets    = discovery.kubernetes.services.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
    endpoint {
        url = "http://vmsingle-victoria-metrics-k8s-stack.monitoring:8429/api/v1/write"
    }
}

For this alloy.hcl configuration to work properly within a Kubernetes pod, you need the following Helm values when installing the Alloy chart10.

alloy/helm-values.yml
alloy:
  extraPorts:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
    - name: otlp-http
      port: 4318
      targetPort: 4318
      protocol: TCP

  mounts:
    # -- Mount /var/log from the host into the container for log collection.
    varlog: true

To install Alloy with this configuration, here's the Helm command:

helm install alloy grafana/alloy \
  --version 0.12.x \
  --namespace monitoring \
  --create-namespace \
  --set-file alloy.configMap.content=alloy.hcl \
  -f alloy/helm-values.yml

Now, a few words about each block in the configuration above and why it is present in my config file.

Declutter Logs

Firstly, I get rid of the colored logs, as well as the health-check noise.

These shouldn't even be printed to stdout in a production setup, but let's cut our developers some slack!

alloy.hcl
loki.process "kubernetes_pods" {
    forward_to = [loki.write.default.receiver]

    stage.cri { }

    stage.decolorize { }

    stage.drop {
        expression = ".*(\\/health|\\/metrics|\\/ping).*"
    }
}

Collect Linux Journal Logs

Additionally, I want to collect the logs of the host operating system. Yes, even when I am running a containerized application deployment! 😬

That comes with the loki.source.journal block, which of course has some relabeling applied to it.

alloy.hcl
loki.source.journal "systemd_journal" {
    path          = "/var/log/journal"
    relabel_rules = discovery.relabel.systemd_journal.rules
    forward_to    = [loki.write.default.receiver]
    labels        = {}
}

Notice how I send the journal logs to the same remote backend.

It's important to mention that the Alloy Helm chart should accommodate mounting the host journal path, as follows:

alloy/helm-values.yml
alloy:
  extraPorts:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
    - name: otlp-http
      port: 4318
      targetPort: 4318
      protocol: TCP

  mounts:
    # -- Mount /var/log from the host into the container for log collection.
    varlog: true

Collect Kubernetes Cluster Events as Logs

Furthermore, I would want to scrape the Kubernetes cluster events and ship them to the same storage backend, as if they were logs.

That comes with the natively supported loki.source.kubernetes_events block.

alloy.hcl
loki.source.kubernetes_events "cluster_events" {
    job_name   = "integrations/kubernetes/eventhandler"
    log_format = "logfmt"
    forward_to = [
        loki.process.cluster_events.receiver,
    ]
}

Ship Logs to VictoriaLogs

Lastly, for my logs configuration, I want to send my logs to VictoriaLogs12 instead of Loki.

alloy.hcl
loki.write "default" {
    endpoint {
        url       = "http://vlogs-victorialogs.monitoring:9428/insert/loki/api/v1/push?_stream_fields=instance,job,host,app&disable_message_parsing=1"
        tenant_id = "0:0"
    }
    external_labels = {}
}

Collect Tracing With Alloy

The next item on the agenda is to collect more than just logs with Grafana Alloy.

I mean, if I want to collect the three pillars of observability from my stack, that is, metrics, logs, and traces, why the hell would I want to use Alloy only for logs and run at least one other pod to collect the traces (i.e. the OpenTelemetry Collector13)?

Best case scenario, I get rid of the OTel Collector and use Alloy to scrape both the logs and the traces and ship them to the correct storage backends. Wouldn't you!?14

To be able to collect tracing information with Grafana Alloy, we open up the corresponding endpoints with the otelcol.receiver.otlp.

alloy.hcl
otelcol.receiver.otlp "default" {
    grpc {
        endpoint = "0.0.0.0:4317"
    }

    http {
        endpoint = "0.0.0.0:4318"
    }

    output {
        metrics = [otelcol.processor.batch.default.input]
        logs    = [otelcol.processor.batch.default.input]
        traces  = [otelcol.connector.servicegraph.default.input, otelcol.processor.batch.default.input]
    }
}

Again, to be able to send traces over the OTLP protocol to Alloy, you have to open up the corresponding ports on the Kubernetes Service.

alloy/helm-values.yml
alloy:
  extraPorts:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
    - name: otlp-http
      port: 4318
      targetPort: 4318
      protocol: TCP

  mounts:
    # -- Mount /var/log from the host into the container for log collection.
    varlog: true
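
On the application side, the standard OpenTelemetry SDK environment variables are all you need to point your workloads at Alloy. Here's a hypothetical pod spec fragment (assuming the chart created a Service named alloy in the monitoring namespace):

containers:
  - name: my-app # hypothetical workload
    image: my-app:latest
    env:
      # Standard OTel SDK variables; most SDKs pick these up automatically
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://alloy.monitoring:4318
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: http/protobuf
      - name: OTEL_SERVICE_NAME
        value: my-app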

Tracing ServiceGraph

Alloy, just like the OpenTelemetry Collector, is able to ship service graph data to a Prometheus-compatible endpoint, allowing us to view the graph of our services through the Tempo backend.

alloy.hcl
otelcol.connector.servicegraph "default" {
    dimensions = ["http.method"]

    debug_metrics { }

    output {
        metrics = [otelcol.exporter.prometheus.default.input]
    }
}

Configuring the Grafana Datasource15 with the following spec will give us the dashboard you see in the next screenshot.

grafana/datasource-tempo.yml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: tempo
spec:
  allowCrossNamespaceImport: true
  datasource:
    access: proxy
    basicAuth: false
    database: ""
    editable: false
    isDefault: false
    name: Tempo
    orgId: 1
    jsonData:
      httpMethod: GET
      tracesToMetrics:
        datasourceUid: victoriametrics
      serviceMap:
        datasourceUid: victoriametrics
    secureJsonData: {}
    type: tempo
    uid: tempo
    url: http://tempo.monitoring:3100
    user: ""
  instanceSelector:
    matchLabels:
      dashboards: grafana
  resyncPeriod: 10m

Grafana Tempo ServiceGraph

Collect Prometheus Metrics

Now, this is not something I would generally recommend doing.

I don't even do it myself.

But you can scrape Prometheus metrics from your Kubernetes cluster and ship them the same way using Grafana Alloy.

alloy.hcl
discovery.kubernetes "services" {
  role = "service"
}

prometheus.scrape "services" {
  targets    = discovery.kubernetes.services.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

Why wouldn't I do that? Because the rest of the industry has settled on using ServiceMonitor and PodMonitor when it comes to metrics scraping.

In any Helm chart, all you have to do is enable the serviceMonitor or metrics option in its values.yml file. The corresponding custom resource will be created and your monitoring stack will pick it up automatically.
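
For example, a typical values.yml toggle looks roughly like this (the exact key path varies from chart to chart, so treat this as a sketch):

metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s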

And just so it happens, the VictoriaMetrics team has native support16 for converting Kube Prometheus Stack resources into those of the VictoriaMetrics K8s Stack17.

So, even if it takes the rest of the industry a while before there is native support for VMServiceScrape18 and VMPodScrape19, you can still benefit a lot from using VMAgent to scrape your metrics.

All in all, I wouldn't use Grafana Alloy to collect metrics from my Kubernetes cluster because it would lock me into a single vendor. 🔒

But, if you really must, you can do that with Alloy using the prometheus.scrape configuration block.

Before we close this off, here's what the Alloy UI looks like with the configuration you have seen earlier.

Alloy UI

Each of the boxes above is clickable. You will be shown the arguments, the inputs, and the outputs for each of them, and some even support live debugging20.

Whenever I can't figure out why my pipelines are not working correctly, I visit this dashboard and can quickly spot the issue.

This is one of the coolest features of Grafana Alloy in my opinion.

Conclusion

In this blog post you've seen what Grafana Alloy is capable of and how easy it is to migrate your current Promtail config into a working setup supported by Alloy.

If you haven't done so already, you now have all the good reasons to migrate your Promtail agents, because by the time you read this, they have already reached end-of-life support.

You wouldn't want to run dependencies and services in your workload that are no longer maintained, now would you!? 😉

Until next time 🫡, ciao 🤠 & happy coding! 🐧
