Deploy Consul as OpenTofu Backend with Azure & Ansible¶
In this blog post, we provide the necessary steps to set up a single-node standalone Consul server to be used as a TF state backend.
In doing so, we aim to provide idempotent and reproducible code using Tofu and Ansible, for the sake of disaster recovery as well as team collaboration within a version control system.
Introduction¶
Having a remote OpenTofu state backend is crucial for any Infrastructure as Code setup. Not only does it survive the possible crash of a single administrator's machine, but it also enables team collaboration, state locking, and many other cool features required for a production setup.
The objective of this blog post is to set up HashiCorp Consul on a single Azure virtual machine.
While this setup is not redundant, highly available, or resilient, it's still a big win for teams that require simple deployments and can afford the risk of running workloads on a single instance!
Although I do not recommend this for big teams, this approach works pretty decently for small setups.
Objectives¶
The following is the list of all the requirements this small project aims to cover:
Deploy Consul on a single node; for simplicity, there's no redundancy!
The Consul server has to be served behind TLS to allow encryption in-transit.
The server running Consul must have backups enabled to allow for fast recovery in case of crash.
The disk of the node must be encrypted; we don't care if the key is platform-managed!
The entire configuration should be idempotent, allowing reproducibility as well as team collaboration.
These requirements are the main highlights that we aim to tackle in the rest of this blog post.
There is some opinionated tooling employed here that may or may not sit well with you; you may prefer alternatives or find the proposed approaches hard to comprehend and/or maintain.
That's fine, though. There isn't a single best way to handle these requirements. Find and pick what works best for you.
With all that chatter out of the way, let's get serious.
Here's the directory structure we'll cover here.
.
├── 10-vm
│   ├── cloud-init.yml
│   ├── main.tf
│   ├── output.tf
│   ├── terragrunt.hcl
│   └── versions.tf
├── 20-bootstrap-consul
│   ├── ansible.cfg
│   ├── inventory
│   │   └── azure_rm.yml
│   ├── playbook.yml
│   ├── requirements.yml
│   └── roles/
│       ├── acme/
│       ├── consul/
│       ├── firewall/
│       └── haproxy/
├── 30-verify-state-backend
│   ├── main.tf
│   ├── terragrunt.hcl
│   └── versions.tf
└── azurerm.hcl
Prerequisites¶
The following are the tools used on the local machine:

- OpenTofu
- Terragrunt
- Ansible
- Azure CLI (az)
Creating an Azure Virtual Machine¶
NOTE: If you already have a VM/server, skip this step!
The following OpenTofu stack is the minimal Infrastructure as Code that will boot up a VM in the Azure cloud.
Although some may prefer using ready-made, off-the-shelf TF modules, I, for one, prefer writing my own resources for one very important reason.
While using TF modules can speed up development initially, the maintenance cost of upgrades and compatibility outweighs the benefit.
I eventually stopped upgrading my TF modules because they kept introducing backward-incompatible changes, making it an absolute nightmare just to keep the lights on!
So, here's my own simple code, and it'll keep working indefinitely so long as the upstream provider doesn't mess up their API, even when the provider is pinned to a major version.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "< 5"
    }
    # DNS provider
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "< 6"
    }
    # SSH private key generation
    tls = {
      source  = "hashicorp/tls"
      version = "< 5"
    }
  }
  required_version = "< 2"
}
#cloud-config
packages:
  - certbot
  - curl
  - fail2ban
  - file
  - haproxy
  - jq
  - python3
  - python3-pip
  - unzip
  - yq
package_update: true
package_upgrade: true
power_state:
  delay: 1
  mode: reboot
  message: Rebooting machine
runcmd:
  - printf "[sshd]\nenabled = true\nbanaction = iptables-multiport" > /etc/fail2ban/jail.local
  - systemctl enable fail2ban
  - sed -i -e '/^\(#\|\)PermitRootLogin/s/^.*$/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)PasswordAuthentication/s/^.*$/PasswordAuthentication no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)KbdInteractiveAuthentication/s/^.*$/KbdInteractiveAuthentication no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)ChallengeResponseAuthentication/s/^.*$/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)MaxAuthTries/s/^.*$/MaxAuthTries 2/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)X11Forwarding/s/^.*$/X11Forwarding no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)AllowAgentForwarding/s/^.*$/AllowAgentForwarding no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)AuthorizedKeysFile/s/^.*$/AuthorizedKeysFile .ssh\/authorized_keys/' /etc/ssh/sshd_config
  - |
    # Fedora 41 instructions
    sudo dnf install -y dnf-plugins-core
    sudo dnf config-manager addrepo --from-repofile=https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
    sudo dnf -y install consul
    consul version
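It's worth sanity-checking that the sed hardening above actually took effect once the machine is up. A minimal sketch of such a check over SSH, assuming nothing beyond stock OpenSSH, is to dump the effective daemon config:

$ sudo sshd -T | grep -iE 'permitrootlogin|passwordauthentication|maxauthtries|x11forwarding'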
##########################################################
# PARENT
##########################################################
resource "azurerm_resource_group" "this" {
name = "rg-tf-state-backend"
location = "Germany West Central"
}
##########################################################
# SECRETS
##########################################################
resource "tls_private_key" "this" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "azurerm_ssh_public_key" "this" {
name = "ssh-key-tf-state-backend"
resource_group_name = azurerm_resource_group.this.name
location = azurerm_resource_group.this.location
public_key = tls_private_key.this.public_key_openssh
}
##########################################################
# NETWORKING
##########################################################
resource "azurerm_virtual_network" "this" {
name = "vnet-tf-state-backend"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.this.location
resource_group_name = azurerm_resource_group.this.name
}
resource "azurerm_subnet" "this" {
name = "snet-tf-state-backend"
resource_group_name = azurerm_resource_group.this.name
virtual_network_name = azurerm_virtual_network.this.name
address_prefixes = ["10.0.1.0/24"]
}
resource "azurerm_public_ip" "this" {
name = "pip-tf-state-backend"
location = azurerm_resource_group.this.location
resource_group_name = azurerm_resource_group.this.name
allocation_method = "Static"
}
resource "azurerm_network_interface" "this" {
name = "nic-tf-state-backend"
location = azurerm_resource_group.this.location
resource_group_name = azurerm_resource_group.this.name
ip_configuration {
name = "ipconfig1"
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.this.id
primary = true
subnet_id = azurerm_subnet.this.id
}
}
##########################################################
# SECURITY
##########################################################
resource "azurerm_network_security_group" "this" {
name = "nsg-tf-state-backend"
location = azurerm_resource_group.this.location
resource_group_name = azurerm_resource_group.this.name
security_rule {
name = "SSH"
priority = 1001
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "HTTP"
priority = 1002
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "80"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "HTTPS"
priority = 1003
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "443"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_network_interface_security_group_association" "this" {
network_interface_id = azurerm_network_interface.this.id
network_security_group_id = azurerm_network_security_group.this.id
}
##########################################################
# COMPUTE
##########################################################
resource "azurerm_linux_virtual_machine" "this" {
name = "tf-state-backend"
resource_group_name = azurerm_resource_group.this.name
location = azurerm_resource_group.this.location
# ARM, 4 vCPUs, 8 GiB RAM, $86/month
size = "Standard_B4pls_v2"
computer_name = "tf-state-backend"
admin_username = "devblog"
network_interface_ids = [
azurerm_network_interface.this.id,
]
identity {
type = "SystemAssigned"
}
admin_ssh_key {
username = "devblog"
public_key = azurerm_ssh_public_key.this.public_key
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
# ref: https://az-vm-image.info/?cmd=--all+--offer+fedora
source_image_reference {
publisher = "nuvemnestllc1695391252715"
offer = "id-01-fedora-41-arm64"
sku = "id-01-fedora-41-arm64"
version = "latest"
}
plan {
name = "id-01-fedora-41-arm64"
publisher = "nuvemnestllc1695391252715"
product = "id-01-fedora-41-arm64"
}
custom_data = base64encode(file("${path.module}/cloud-init.yml"))
lifecycle {
ignore_changes = [
custom_data,
]
}
}
##########################################################
# Backup
##########################################################
resource "azurerm_recovery_services_vault" "this" {
name = "tf-state-backend-rsv"
location = azurerm_resource_group.this.location
resource_group_name = azurerm_resource_group.this.name
sku = "Standard"
soft_delete_enabled = true
}
resource "azurerm_backup_policy_vm" "this" {
name = "tf-state-backend-backup-policy"
resource_group_name = azurerm_resource_group.this.name
recovery_vault_name = azurerm_recovery_services_vault.this.name
timezone = "UTC"
backup {
frequency = "Daily"
time = "23:00"
}
retention_daily {
count = 14
}
retention_weekly {
count = 4
weekdays = ["Sunday"]
}
retention_monthly {
count = 6
weekdays = ["Sunday"]
weeks = ["First"]
}
}
resource "azurerm_backup_protected_vm" "this" {
resource_group_name = azurerm_resource_group.this.name
recovery_vault_name = azurerm_recovery_services_vault.this.name
source_vm_id = azurerm_linux_virtual_machine.this.id
backup_policy_id = azurerm_backup_policy_vm.this.id
}
##########################################################
# DNS
##########################################################
data "cloudflare_zone" "this" {
filter = {
name = "developer-friendly.blog"
}
}
resource "cloudflare_dns_record" "this" {
zone_id = data.cloudflare_zone.this.zone_id
content = azurerm_public_ip.this.ip_address
name = "tofu.developer-friendly.blog"
proxied = false
ttl = 60
type = "A"
}
If, like me, you're struggling to find Azure images for your VMs, you will find alternative online resources very useful.
output "public_ip" {
value = azurerm_public_ip.this.ip_address
}
output "ssh_private_key" {
value = tls_private_key.this.private_key_pem
sensitive = true
}
Running this stack is pretty simple at this point.
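Assuming Terragrunt wraps the stack, as the terragrunt.hcl in the directory listing above suggests, a sketch of the commands would be:

cd 10-vm
terragrunt init
terragrunt plan -out tfplan
terragrunt apply tfplan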
We will require the SSH private key for the next step.
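One way to grab it is to dump the sensitive output into the path that the Ansible configuration shown later expects (private_key_file = /tmp/key in ansible.cfg):

terragrunt output -raw ssh_private_key > /tmp/key
chmod 600 /tmp/key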
Bootstrap the Consul Server¶
We are now ready to run a bunch of Ansible playbook tasks to configure our server.
Note that this is a Fedora machine, and we will mostly use ansible.builtin.dnf for package installation. We care little to none about portability, e.g., reaching for the distribution-agnostic ansible.builtin.package instead!
While working on the code for this blog post, I slowly fell in love with how Fedora handles packages. It makes it so easy to grab the latest available version of each package; whatever you need is always one dnf invocation away!
Coming from an Ubuntu background and having used Ubuntu-based desktops all my life, I have never had such a great sysadmin experience before. Fedora is awesome.
Disable Fedora's Firewall¶
Now let's start with the bare essentials. We first need to disable the host's firewall, since Azure already has a Network Security Group configured in front of the VM.
---
- name: Stop the firewalld.service
  ansible.builtin.systemd:
    name: firewalld
    state: stopped
    enabled: false
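If you want to double-check the result on the host, both of the following should confirm the service is off:

$ systemctl is-active firewalld
inactive
$ systemctl is-enabled firewalld
disabled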
Configure the Consul Server¶
This step is the most important part of this blog post.
If you skipped all the other sections, then this is the only one you should care about.
consul.hcl¶
server           = true
bootstrap_expect = 1
datacenter       = "dc1"
node_name        = "consul-0"
bind_addr        = "0.0.0.0"
client_addr      = "0.0.0.0"
data_dir         = "/var/lib/consul"

ports {
  http = 8500
  grpc = 8502
}

log_level = "INFO"

ui_config {
  enabled = true
}

acl {
  enabled                  = true
  default_policy           = "deny"
  enable_token_persistence = true
}

retry_join = [
  "127.0.0.1:8301",
]
- name: Create Consul config
  ansible.builtin.copy:
    src: consul.hcl
    dest: /etc/consul.d/consul.hcl
    owner: consul
    group: consul
    mode: "0640"
    backup: true

- name: Create Consul data dir
  ansible.builtin.file:
    path: /var/lib/consul
    state: directory
    owner: consul
    group: consul
    mode: "0755"

- name: Start the Consul service
  ansible.builtin.systemd:
    name: consul
    state: started
    enabled: true
    daemon_reload: true
  failed_when: false
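At this point the agent should be up. As a quick hedged check from the host itself, the status endpoints are handy since they are not gated behind ACLs:

$ curl -s http://127.0.0.1:8500/v1/status/leader
"10.0.1.4:8300"

The exact address in the response reflects the VM's private IP.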
consul acl bootstrap¶
- name: Bootstrap ACL
  ansible.builtin.command: consul acl bootstrap -format json
  register: bootstrap_acl
  failed_when: false
  no_log: true

- name: Persist the ACL token
  ansible.builtin.copy:
    content: "{{ bootstrap_acl.stdout }}"
    dest: /root/consul-bootstrap.json
    owner: consul
    group: consul
    mode: "0600"
    backup: true
  when: bootstrap_acl.rc == 0

- name: Extract the token from bootstrap json
  ansible.builtin.set_fact:
    bootstrap_token: "{{ bootstrap_acl.stdout | from_json | json_query('SecretID') }}"
  no_log: true
  when: bootstrap_acl.rc == 0

- name: Read consul bootstrap json
  ansible.builtin.slurp:
    src: /root/consul-bootstrap.json
  register: bootstrap_token_file
  failed_when: false
  when: bootstrap_acl.rc != 0

- name: Extract the token from bootstrap json
  ansible.builtin.set_fact:
    bootstrap_token: "{{ bootstrap_token_file['content'] | b64decode | from_json | json_query('SecretID') }}"
  no_log: true
  when: bootstrap_acl.rc != 0
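Notice the bootstrap_acl.rc juggling above: consul acl bootstrap succeeds exactly once per cluster, so the first run persists the resulting JSON to /root/consul-bootstrap.json, and every subsequent run reads the token back from that same file. That's what keeps this play idempotent across reruns.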
Agent Policy¶
The Consul agent running behind the systemd service needs node:write permission to register itself with the cluster. This ACL policy grants that access. We then create a token from that policy and drop it into the config directory.
node_prefix "" {
policy = "write"
}
service_prefix "" {
policy = "read"
}
- name: Create consul ACL policy for agent with node write permission
  community.general.consul_policy:
    name: agent-node
    token: "{{ bootstrap_token }}"
    rules: "{{ lookup('ansible.builtin.file', 'agent-policy.hcl') }}"
    state: present

- name: Create agent role
  community.general.consul_role:
    name: agent
    token: "{{ bootstrap_token }}"
    policies:
      - name: agent-node

- name: Create agent token
  community.general.consul_token:
    token: "{{ bootstrap_token }}"
    state: present
    roles:
      - name: agent
  register: agent_token
  no_log: true

- name: Persist the agent token configuration
  ansible.builtin.copy:
    content: |
      acl {
        tokens {
          agent = "{{ agent_token.token.SecretID }}"
        }
      }
    dest: /etc/consul.d/agent-token.hcl
    owner: consul
    group: consul
    mode: "0600"
    backup: true
  no_log: true
  when: agent_token.token is defined and agent_token.token.SecretID is defined
Since the systemd consul.service is running with consul agent -config-dir=/etc/consul.d/, we can place any number of files in that directory and still get the combined result of them all.
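For instance, by the end of this role, the config directory contains (at least) the two files we created above, and the agent reads them both:

$ ls /etc/consul.d/
agent-token.hcl  consul.hcl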
TF State Backend ACL Policy¶
Last step: we configure the policy and the token that we'll use on our local machine when pointing TF at the state backend. The key_prefix write access covers the state objects stored under tf/, while the session_prefix write access is needed because the Consul backend acquires a session for state locking.
key_prefix "tf/" {
policy = "write"
}
session_prefix "" {
policy = "write"
}
- name: Create ACL for tf state backend
  community.general.consul_policy:
    name: tf-state-backend
    token: "{{ bootstrap_token }}"
    rules: "{{ lookup('ansible.builtin.file', 'tofu-policy.hcl') }}"
    state: present

- name: Create tf state backend role
  community.general.consul_role:
    name: tf-state-backend
    token: "{{ bootstrap_token }}"
    policies:
      - name: tf-state-backend

- name: Create tf state backend token
  community.general.consul_token:
    token: "{{ bootstrap_token }}"
    state: present
    roles:
      - name: tf-state-backend
  register: tf_state_backend_token
  no_log: true

- name: Create tempfile
  ansible.builtin.tempfile:
    state: file
    prefix: .tf_token_
  register: tempfile

- name: Persist the tf state backend token
  ansible.builtin.copy:
    content: "{{ tf_state_backend_token.token.SecretID }}"
    dest: "{{ tempfile.path }}"
    owner: consul
    group: consul
    mode: "0600"
    backup: true
  when: tf_state_backend_token.token is defined and
        tf_state_backend_token.token.SecretID is defined
That's it. After this step we have the token, and this is how we can fetch its plaintext value from the temp file it was stored in:
$ ansible -m fetch -a 'src=/tmp/.tf_token_jnq5afq6 dest=token' --become consul
$ cat token/tf-state-backend_79f7/tmp/.tf_token_jnq5afq6
29020ed6-0dee-f022-ca8b-40efa519446b
We will use that UUID value later on as the Consul HTTP token.
Configure Load Balancer and TLS Certificate¶
We are using HAProxy with Certbot for this task. You may pick something simpler that handles TLS certificate retrieval natively, such as Caddy or Traefik.
global
    log stdout format raw local0 info
    log-send-hostname
    chroot /var/lib/haproxy
    stats socket /var/lib/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    maxconn 4000
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.3 no-tls-tickets
    tune.ssl.default-dh-param 2048

defaults
    backlog 1000
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http
    bind :80
    acl acme_challenge path_beg /.well-known/acme-challenge/
    http-request redirect scheme https code 301 unless acme_challenge !{ ssl_fc }
    default_backend acme-backend

frontend https
    bind :443 ssl crt /etc/haproxy/certs/ alpn h2,http/1.1
    # Log HTTP headers
    http-request set-var(txn.req_hdrs) req.hdrs
    log-format "${HAPROXY_HTTP_LOG_FMT} req_hdrs:%{+Q}[var(txn.req_hdrs)]"
    http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    default_backend consul-backend

backend consul-backend
    option httpchk GET /ui/
    http-check expect status 200
    server sv0 127.0.0.1:8500 check inter 5s

backend acme-backend
    server sv0 127.0.0.1:9000 check inter 10s
---
# haproxy is already installed via cloud-init; this task keeps the
# role self-contained, and ensures the certs dir exists before we
# write the bootstrap certificate into it.
- name: Install haproxy
  ansible.builtin.dnf:
    name: haproxy
    state: present

- name: Create the certs dir
  ansible.builtin.file:
    path: /etc/haproxy/certs
    state: directory
    owner: haproxy
    group: haproxy
    mode: "0755"

- name: Create a self-signed tls private key
  community.crypto.openssl_privatekey:
    path: /var/lib/haproxy/haproxy.key
    type: Ed25519

- name: Create a self-signed csr
  community.crypto.openssl_csr:
    path: /var/lib/haproxy/haproxy.csr
    privatekey_path: /var/lib/haproxy/haproxy.key
    common_name: haproxy

- name: Create a self-signed tls certificate
  community.crypto.x509_certificate:
    path: /var/lib/haproxy/haproxy.crt
    csr_path: /var/lib/haproxy/haproxy.csr
    privatekey_path: /var/lib/haproxy/haproxy.key
    provider: selfsigned

- name: Check for any existing TLS certificate
  ansible.builtin.find:
    paths: /etc/haproxy/certs
    file_type: file
  register: haproxy_certs

# Bootstrap HAProxy with the dummy certificate so it can start
# before certbot has issued the real one.
- name: Prepare the self-signed tls for haproxy
  ansible.builtin.shell: >-
    cat /var/lib/haproxy/haproxy.crt
    /var/lib/haproxy/haproxy.key
    > /etc/haproxy/certs/haproxy.pem
  changed_when: false
  when: haproxy_certs.matched == 0

- name: Configure the haproxy
  ansible.builtin.copy:
    src: haproxy.cfg
    dest: /etc/haproxy/haproxy.cfg
    owner: haproxy
    group: haproxy
    mode: "0644"
    backup: true

- name: Allow HAProxy to connect to any port
  ansible.posix.seboolean:
    name: haproxy_connect_any
    state: true
    persistent: true

- name: Start the haproxy service
  ansible.builtin.systemd:
    name: haproxy
    state: started
    enabled: true
    daemon_reload: true
And the acme-backend is as follows:
[Unit]
Description=ACME backend
After=network.target
[Service]
Type=simple
ExecStartPre=/bin/mkdir -p /var/www/html
ExecStart=/usr/bin/python3 -m http.server 9000 -d /var/www/html
[Install]
WantedBy=multi-user.target
#!/bin/bash
set -e

certbot renew -q || echo "Certbot not renewed!"

domains=(
  tofu.developer-friendly.blog
)

renew_domain_cert() {
  domain=$1

  cd /etc/letsencrypt/live/$domain

  temp_cert=$(mktemp)

  cat fullchain.pem privkey.pem >"$temp_cert"

  if ! cmp -s "$temp_cert" /etc/haproxy/certs/$domain; then
    mv "$temp_cert" /etc/haproxy/certs/$domain
    systemctl reload haproxy
    echo "Certificate updated and HAProxy reloaded."
  else
    echo "Certificate unchanged. No reload necessary."
  fi

  rm -f "$temp_cert"
}

for domain in "${domains[@]}"; do
  renew_domain_cert $domain
done
---
- name: Install acme-backend.service
  ansible.builtin.copy:
    src: acme-backend.service
    dest: /etc/systemd/system/acme-backend.service
    owner: root
    group: root
    mode: "0644"

- name: Start acme-backend.service
  ansible.builtin.systemd:
    name: acme-backend
    state: started
    enabled: true
    daemon_reload: true

- name: Install certbot
  ansible.builtin.dnf:
    name: certbot
    state: present

- name: Ensure webroot dir exists
  ansible.builtin.file:
    path: /var/www/html
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Fetch tls certificate with webroot
  ansible.builtin.command: >-
    certbot certonly
    --webroot -w /var/www/html
    -d tofu.developer-friendly.blog
    --non-interactive
    --agree-tos
    --email admin@developer-friendly.blog
  changed_when: false

- name: Prepare the script to renew haproxy certs
  ansible.builtin.copy:
    src: prepare-haproxy-cert.sh
    dest: /etc/cron.weekly/prepare-haproxy-cert.sh
    owner: root
    group: root
    mode: "0755"

- name: Prepare the tls certificate for haproxy
  ansible.builtin.command: /etc/cron.weekly/prepare-haproxy-cert.sh
  changed_when: false

- name: Remove the self-signed dummy TLS certificate
  ansible.builtin.file:
    path: /etc/haproxy/certs/haproxy.pem
    state: absent
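To confirm HAProxy is now serving the Let's Encrypt certificate rather than the self-signed bootstrap one, a quick probe from the local machine might look like this:

$ echo | openssl s_client -connect tofu.developer-friendly.blog:443 2>/dev/null | openssl x509 -noout -issuer -enddate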
Ansible Playbook¶
To wire these all up, we need three important pieces:
---
auth_source: auto
conditional_groups:
  consul: computer_name == 'tf-state-backend'
exclude_host_filters:
  - powerstate != 'running'
include_vm_resource_groups:
  - rg-tf-state-backend
plugin: azure.azcollection.azure_rm
[defaults]
become = false
gathering = explicit
host_key_checking = false
interpreter_python = auto_silent
inventory = inventory
private_key_file = /tmp/key
remote_user = devblog
roles_path = roles
verbosity = 2
---
- name: Bootstrap Consul
  hosts: consul
  gather_facts: false
  become: true
  roles:
    - firewall
    - consul
    - haproxy
    - acme
And for reproducibility, we should also pin the Ansible collection requirements.
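The original requirements.yml isn't reproduced here; based on the modules used in the roles above, a minimal sketch would be (an assumption, not the exact file; add version pins as you see fit):

---
collections:
  - name: azure.azcollection
  - name: community.general
  - name: community.crypto
  - name: ansible.posix

Install them with ansible-galaxy collection install -r requirements.yml before the first run.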
To run this playbook, we need to be authenticated against the Azure API. We are using az login for that.
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
ansible-playbook playbook.yml
Verify the Setup¶
At this very last step, we ensure that everything is working as expected:
terraform {
  required_providers {
    null = {
      source  = "hashicorp/null"
      version = "< 4"
    }
  }
  required_version = "< 2"
}
terraform {
  backend "consul" {
    address = "https://tofu.developer-friendly.blog"
    path    = "tf/verify-state-backend"
    scheme  = "https"
    # token = "<CONSUL_HTTP_TOKEN>"
  }
}

resource "null_resource" "this" {
  provisioner "local-exec" {
    command = "echo 'Terraform state backend configured with Consul'"
  }
}
Remember that Consul token we fetched earlier with the Ansible ad-hoc command? We'll use it here:
export CONSUL_HTTP_TOKEN="29020ed6-0dee-f022-ca8b-40efa519446b"
terragrunt init -upgrade
terragrunt plan -out tfplan
terragrunt apply tfplan
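Beyond a clean apply, we can also peek into Consul's KV store directly; the key below matches the path attribute of the backend configuration, and the response should look something like this:

$ curl -s -H "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" "https://tofu.developer-friendly.blog/v1/kv/tf/verify-state-backend?keys"
["tf/verify-state-backend"]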
And it works flawlessly!
Conclusion¶
In this blog post we have provided a working example of setting up a Consul server as the OpenTofu state backend.
While this approach may not meet the HA requirements of big organizations with massive teams, it works decently well for small startups.
It provides a remote backend for TF state files with state locking.
If you're doing Infrastructure as Code in a team setup, you may want to either use one of the available hosted solutions for the state backend or build your own.
If you choose the latter, this blog post will serve you well in preparing and launching a minimal setup without a glitch.
Until next time, ciao & happy coding!
See any typos? This blog is open source. Consider opening a PR.
References¶
- https://github.com/gruntwork-io/terragrunt/releases/tag/v0.77.14
- https://developer.hashicorp.com/consul/docs/security/acl/acl-policies
- https://developer.hashicorp.com/consul/docs/agent/config/cli-flags#_config_dir
- https://developer.hashicorp.com/vault/docs/configuration/storage/consul
- https://docs.ansible.com/ansible/latest/collections/azure/azcollection/azure_rm_inventory.html
- https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli