I recently had the privilege of opening the New Zealand GitHub User Group with a presentation on using Azure Container Apps for self-hosted GitHub Actions Runners. The recording of the session is available on YouTube.
Have you ever had one of those days where your troubleshooting resembles doing the foxtrot? Forward. Sideways. Backwards. Round in Circles. I had one of those last week.
My current customer is using Ansible for orchestration. As part of proving out the capabilities, and working through the best way to make it all work, I needed to set up my local environment.
Microsoft have made controlling routing within a Virtual WAN much easier thanks to a combination of Routing Intent and Route Maps (in preview at the time of writing). Route Maps work in the same manner (at least theoretically) as the route map configurations network administrators are used to on on-premises equipment.
In testing them out for a customer, I ran straight into a problem. I like to keep things tidy, and when I looked at the output of the outbound route map configuration for a site-to-site VPN with AWS, I didn’t like that it showed all my AWS routes being published back out to the VPN. So I configured a route map rule to drop reflected routes.
Managed Identities in Azure are a wonderful thing. No passwords to change, no keys to rotate.
The biggest shame is that frequently they seem to be implemented as an afterthought.
One example I recently ran into was the use of an App Service Managed Identity to pull a container from Azure Container Registry. While you can configure an App Service to pull from ACR with a Managed Identity, what the documentation doesn’t tell you is that you still need the DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD App Settings to be configured on the App Service. It doesn’t matter what values you put in these settings; the point is that they must exist. If they don’t, the container will fail to pull with a credential error.
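A minimal Terraform sketch of the workaround, assuming the azurerm provider (3.x); the resource names, image, and registry URL are placeholders, not taken from any real deployment:

```hcl
# Hypothetical example: all names and values here are placeholders.
resource "azurerm_linux_web_app" "example" {
  name                = "my-app"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  service_plan_id     = azurerm_service_plan.example.id

  identity {
    type = "SystemAssigned"
  }

  site_config {
    # Pull the image using the Managed Identity rather than registry credentials.
    container_registry_use_managed_identity = true

    application_stack {
      docker_image_name   = "myimage:latest"
      docker_registry_url = "https://myregistry.azurecr.io"
    }
  }

  app_settings = {
    # Dummy values: the settings just need to exist for the pull to succeed,
    # even though the Managed Identity is doing the authentication.
    DOCKER_REGISTRY_SERVER_USERNAME = "unused"
    DOCKER_REGISTRY_SERVER_PASSWORD = "unused"
  }
}
```

The Managed Identity still needs the AcrPull role assignment on the registry for the pull itself to be authorised.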
Terraform is a fantastic tool for Infrastructure as Code.
From the YAML-like HCL syntax (no JSON!), to importing files (linting JSON files FTW!), to retrieving the results of previous runs to link resources, Terraform has made a massive difference in my work.
However, like all technologies, it is not without its weaknesses.
Terraform uses state files to keep track of what the world looked like when it last ran, which is wonderful for identifying drift.
The default pattern is to use these state files for passing data between Terraform modules.
But this is actually an anti-pattern: HashiCorp recommends not using remote state for passing data, in large part because reading the outputs from a state file requires the caller to have full read access to the entire remote state file, which may include secrets they probably shouldn’t be allowed to access.
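To make the distinction concrete, here is a sketch contrasting the two approaches; the backend settings and resource names are placeholder values, not from a real configuration:

```hcl
# Anti-pattern: reading another configuration's remote state.
# The caller needs read access to the entire state file, secrets included.
data "terraform_remote_state" "network" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "network.tfstate"
  }
}

# Preferred: look the shared resource up directly with a provider data source.
# This only requires read permission on that one resource.
data "azurerm_virtual_network" "network" {
  name                = "vnet-prod"
  resource_group_name = "rg-network"
}
```

With the data-source approach, access is scoped by the provider's own authorisation model rather than by who can read the state file.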