In my current role, I configured Terraform to manage our GitHub organisation.
As with all providers, we need to provide credentials for authentication.
I didn’t want to use a personal access token, as they are tied to an individual user and will break should that user leave the organisation.
Thankfully, GitHub supports using an application for authentication.
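Once the application is set up, the Terraform GitHub provider can authenticate as it via an `app_auth` block. A minimal sketch is below; the variable names and the organisation name are placeholders for illustration.

```hcl
provider "github" {
  owner = "my-org" # the GitHub organisation name

  app_auth {
    id              = var.github_app_id              # the App ID from the app's settings page
    installation_id = var.github_app_installation_id # the ID of the app's installation in the org
    pem_file        = var.github_app_pem_file        # the contents of the app's private key
  }
}
```

These can also be supplied via the `GITHUB_APP_ID`, `GITHUB_APP_INSTALLATION_ID`, and `GITHUB_APP_PEM_FILE` environment variables, which keeps the key out of your configuration.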
Create the GitHub Application
The first step in the process is to create a new GitHub application.
While this can be done either in a personal account or within an organisation, I recommend doing this within the organisation.
That way, if someone leaves the organisation, the application doesn’t go with them.
Read more…
Managed Identities in Azure are a wonderful thing. No passwords to change, no keys to rotate.
The biggest shame is that frequently they seem to be implemented as an afterthought.
One example I recently ran into was using an App Service Managed Identity to pull a container from Azure Container Registry. While you can configure an App Service to pull from ACR with a Managed Identity, what the documentation doesn’t tell you is that you still need the DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD App Settings to be configured on the App Service. It doesn’t matter what values you put in them; the point is that they must exist. If they don’t, the container will fail to pull with a credential error.
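In Terraform terms, that workaround looks roughly like the sketch below (a `azurerm_linux_web_app` resource, abbreviated for illustration; the other required arguments are elided).

```hcl
resource "azurerm_linux_web_app" "example" {
  # ... name, resource_group_name, location, service_plan_id, etc. ...

  identity {
    type = "SystemAssigned"
  }

  site_config {
    # Tell the App Service to authenticate to ACR with its Managed Identity.
    container_registry_use_managed_identity = true
  }

  app_settings = {
    # The values are never used for authentication when pulling with the
    # Managed Identity, but the keys must exist or the pull fails.
    DOCKER_REGISTRY_SERVER_USERNAME = "unused"
    DOCKER_REGISTRY_SERVER_PASSWORD = "unused"
  }
}
```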
Read more…
Terraform is a fantastic tool for Infrastructure as Code.
From the YAML-like HCL syntax (no JSON!), to importing files (linting JSON files FTW!), to retrieving the results of previous runs to link resources, Terraform has made a massive difference in my work.
However, like all technologies, it is not without its weaknesses.
Terraform uses state files to keep track of what the world looked like when it last ran, which is wonderful for identifying drift.
The default pattern is to use these state files for passing data between Terraform modules.
But this is actually an anti-pattern: HashiCorp recommend against using remote state for passing data, in large part because reading the outputs from a state file requires the caller to have full read access to the entire remote state file, which may include secrets they shouldn’t be allowed to access.
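For a concrete picture, this is roughly what the pattern in question looks like (the backend configuration values here are placeholders): to read even a single output, the caller needs read access to the whole state file, secrets and all.

```hcl
# Read outputs from another module's remote state -- the anti-pattern.
data "terraform_remote_state" "network" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "network.tfstate"
  }
}

locals {
  # Pulling one value still required full access to network.tfstate above.
  subnet_id = data.terraform_remote_state.network.outputs.subnet_id
}
```

HashiCorp's suggested alternative is to look the resource up directly with a provider data source, which only requires permission on that one resource rather than on the entire state file.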
Read more…
Not everyone is privileged to be able to use Terraform Cloud for deploying their Terraform infrastructure.
This means that teams need to use their existing DevOps tooling to deploy their infrastructure via Terraform.
While I’ve seen many examples of pipelines for deploying Terraform code with various services, it felt like something was missing.
Most example pipelines were designed to just run once a code review had occurred, and often would automatically deploy the changed code without any intervention.
This wasn’t going to fly for us in a recent project.
We needed a more robust plan for deployment, one that would cater for not only deployment of the infrastructure, but also an opportunity to wait for approval of a specific plan, plus checks to make sure that the newly-committed code was up to standard.
Read more…
Traffic Manager is an essential component of any resilient deployment within Azure.
Whether you have a multi-region behemoth, or just want an easy way to activate DR instances should the primary go down, Traffic Manager has a configuration for you.
One key component of Traffic Manager is its probes—by frequently checking the status of your application, Traffic Manager can make intelligent decisions about where to direct the traffic.
As with many Azure services, there is a specific set of IP addresses from which the probes originate.
Microsoft even helpfully provide a Service Tag, AzureTrafficManager, which is kept up-to-date with the latest IP addresses used by Traffic Manager probes.
They even tell us that this Service Tag is supported for use in Azure Firewall.
Except… that is not the whole story.
Read more…
I have recently had the pleasure (You keep using that word. I do not think it means what you think it means.) of deploying Logic App workflows on a Logic App (Standard) instance.
For those not familiar with Logic App (Standard), it is the single-tenant version of Logic Apps.
They provide the ability to host your workflows within a virtual network, something that cannot be done with a consumption Logic App.
Under the hood, standard Logic Apps are a completely different beast to consumption Logic Apps.
Consumption apps can only have a single workflow per app (which makes sense when you consider you also pay per execution), while standard apps are deployed into an App Service plan and can therefore host multiple workflows in a single Logic App.
Read more…