
Terraform 0.13: Provider Requirements Done Right

Terraform 0.13 introduces proper provider source addresses and module-level provider requirements, fixing one of the most persistent pain points in infrastructure as code.

Terraform 0.13 shipped last month, and the headline feature is something that should have existed from the beginning: proper provider source addresses. This sounds mundane. It is not. It fixes a fundamental design flaw in how Terraform discovers and manages providers, and it makes the entire module ecosystem significantly more reliable.

If you have ever written a Terraform module that uses a third-party provider and had someone on your team spend an afternoon figuring out why terraform init was failing, you understand why this matters.

The Problem With Providers Before 0.13

In Terraform 0.12 and earlier, providers were identified by a short name: aws, google, azurerm, datadog. When you ran terraform init, Terraform would look up that name in the HashiCorp provider registry and download the corresponding binary. There was an implicit assumption that all providers lived in the HashiCorp namespace.

This created several problems:

Third-party providers had no first-class distribution mechanism. If you wrote a custom provider for an internal system, you had to distribute the binary manually and place it in a specific directory on every machine that ran Terraform. There was no equivalent of npm install or pip install for providers outside the official registry.

Module authors could not declare provider dependencies. A module could use the aws provider, but it could not specify which version of the provider it required or where that provider should be sourced from. The consumer of the module was responsible for configuring the provider, and if the versions were incompatible, the error messages were unhelpful.

Provider version conflicts in complex configurations were common. When you composed multiple modules, each might have different implicit assumptions about provider versions. Resolving these conflicts was manual and error-prone.

What 0.13 Changes

Terraform 0.13 introduces a required_providers block with full source addresses:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 2.7"
    }
    custom = {
      source  = "corp/custom-provider"
      version = "1.2.0"
    }
  }
}

The source address follows a three-part convention: <hostname>/<namespace>/<type>. The default hostname is registry.terraform.io, so hashicorp/aws resolves to registry.terraform.io/hashicorp/aws. But you can point to any registry, including a private one hosted internally.
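To make the expansion concrete, these two forms are equivalent, and the same syntax reaches a private registry by spelling out the hostname (the internal hostname below is hypothetical):

```hcl
terraform {
  required_providers {
    # Short form: hostname defaults to registry.terraform.io,
    # so this is the same as registry.terraform.io/hashicorp/aws
    aws = {
      source = "hashicorp/aws"
    }

    # Fully qualified form pointing at a private registry
    # (terraform.corp.example.com is an illustrative internal host)
    custom = {
      source = "terraform.corp.example.com/platform/custom-provider"
    }
  }
}
```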

This is a significant architectural improvement:

Providers are now namespaced. Two different organizations can publish providers with the same type name without conflict. The full source address disambiguates them.

Private registries are first-class. You can host a Terraform registry internally (Terraform Enterprise includes one, and there are open-source implementations) and reference providers from it using the same syntax. This unblocks the use case of custom providers for internal APIs and systems.

Module authors can declare provider requirements. When you write a module, you can specify exactly which providers it needs, which versions, and where to find them. The consumer does not need to guess or reverse-engineer the dependencies.
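Inside a module, the declaration uses the same block. A sketch of what a module's versions.tf might contain (the module and provider names are illustrative):

```hcl
# modules/monitoring/versions.tf (illustrative)
terraform {
  required_version = ">= 0.13"

  required_providers {
    # This module manages Datadog resources, so it declares
    # exactly which provider and version range it needs.
    datadog = {
      source  = "DataDog/datadog"
      version = ">= 2.7"
    }
  }
}
```

When a consumer runs terraform init, Terraform intersects the version constraints from every module in the configuration and reports a clear error if they cannot be satisfied together.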

Module Improvements

The module system also received important improvements in 0.13. The most impactful is the ability to use count, for_each, and depends_on with modules.

Before 0.13, if you wanted to conditionally include a module, you had to use workarounds: setting all the module's resource counts to zero via a variable, or wrapping the module in a script that generated or omitted the module block based on configuration. These patterns were fragile and hard to reason about.

Now you can write:

module "monitoring" {
  count  = var.enable_monitoring ? 1 : 0
  source = "./modules/monitoring"

  cluster_name = var.cluster_name
  alert_email  = var.alert_email
}
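One consequence of using count is that the module becomes a list of instances, so references to its outputs need an index. A sketch, assuming the module exposes a dashboard_url output (the output name is hypothetical):

```hcl
output "monitoring_dashboard_url" {
  # module.monitoring is now a list of zero or one instances,
  # so guard the index with the same condition.
  value = var.enable_monitoring ? module.monitoring[0].dashboard_url : null
}
```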

And with for_each:

module "microservice" {
  for_each = var.services
  source   = "./modules/microservice"

  name     = each.key
  image    = each.value.image
  replicas = each.value.replicas
}
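The var.services map feeding that for_each could be declared along these lines (a sketch; the field names match the example above, the defaults are illustrative):

```hcl
variable "services" {
  type = map(object({
    image    = string
    replicas = number
  }))

  default = {
    api = {
      image    = "registry.example.com/api:1.4.2"
      replicas = 3
    }
    worker = {
      image    = "registry.example.com/worker:2.0.1"
      replicas = 2
    }
  }
}
```

Adding a service to the map creates a new module instance on the next apply; removing one destroys exactly that instance, without renumbering the others the way count-based lists do.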

This eliminates an entire category of Terraform anti-patterns. Dynamic infrastructure composition, where the set of resources varies based on configuration, is now expressible directly in HCL without resorting to code generation or conditional gymnastics.

The Upgrade Path

Upgrading from 0.12 to 0.13 is more straightforward than the 0.11 to 0.12 transition was. HashiCorp provides a terraform 0.13upgrade command that automatically rewrites your configuration to use the new required_providers syntax.

In practice, the upgrade involves:

  1. Run terraform 0.13upgrade in each configuration directory.
  2. Review the generated required_providers blocks.
  3. Run terraform init to download providers from their new source addresses.
  4. Run terraform plan to verify that no resources will change.
  5. Commit the updated configurations.
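The steps above can be sketched as a shell loop over configuration directories (the directory layout is illustrative; review each diff before committing):

```
# Illustrative: run the upgrade in each configuration directory
for dir in environments/*/; do
  (
    cd "$dir"
    terraform 0.13upgrade -yes          # rewrite to required_providers syntax
    terraform init                      # fetch providers from new source addresses
    terraform plan -detailed-exitcode   # exit code 0 means no pending changes
  )
done
```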

For our infrastructure codebase at the company, which spans roughly 200 Terraform configurations across multiple AWS accounts, the upgrade took about a week. Most of that time was spent on CI/CD pipeline updates and testing, not on the actual configuration changes.

The one gotcha we encountered: some community providers had changed their registry namespaces. A provider that was previously referenced as just pagerduty now needed to be PagerDuty/pagerduty. The 0.13upgrade command handles the well-known cases, but we had a few custom and less-popular providers that required manual intervention.
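For providers whose namespace moved, 0.13 also ships a state subcommand that rewrites the source address recorded in state. For the PagerDuty case above, it looks roughly like:

```
# Rewrite the legacy provider address in state to the new namespace
terraform state replace-provider \
  registry.terraform.io/-/pagerduty \
  registry.terraform.io/PagerDuty/pagerduty
```

The `-` namespace is how 0.13 represents providers carried over from 0.12 state that have no recorded namespace.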

State File Compatibility

Terraform 0.13 updates the state file format to record provider source addresses. This means once you upgrade a state file to the 0.13 format, you cannot go back to 0.12 without manual state manipulation. In practice, this is not a problem as long as you upgrade environments in order (dev, then staging, then production) and do not need to roll back the Terraform binary version.

We keep our Terraform version pinned in CI/CD (via tfenv and a .terraform-version file) and upgrade all environments within the same sprint. This avoids version skew between the binary, the configuration syntax, and the state file format.
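The pinning itself is a one-line file that tfenv reads (the version number below is illustrative):

```
# Pin the Terraform version for everyone, including CI
echo "0.13.5" > .terraform-version
tfenv install   # installs the version listed in .terraform-version
tfenv use       # switches the active binary to match
```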

The Bigger Picture: Infrastructure as Code Maturity

Terraform's evolution from 0.11 to 0.13 mirrors the maturation pattern I see in most infrastructure tooling:

Phase one: make it work. Terraform 0.11 and earlier were focused on expressiveness. Can you describe your infrastructure in code? Can you plan and apply changes? Can you manage state? The answer was yes, with rough edges.

Phase two: make it right. Terraform 0.12 introduced first-class expressions, a proper type system, and eliminated the need for most string interpolation hacks. The language became something you could reason about rather than fight with.

Phase three: make it scalable. Terraform 0.13 addresses the problems that emerge when Terraform is used across large organizations with many teams, many providers, and complex module compositions. Provider requirements, module iteration, and private registries are features that matter at scale.

This is the same trajectory that programming languages follow: correctness first, then ergonomics, then ecosystem tooling. Terraform is entering the ecosystem phase, where the focus shifts from the core language to the package management, distribution, and governance infrastructure around it.

What We Are Doing With It

With 0.13's module improvements, we have been refactoring our infrastructure codebase to use module composition more aggressively.

Previously, we had large, monolithic Terraform configurations for each environment. A single apply would create VPCs, ECS clusters, RDS instances, S3 buckets, IAM roles, and everything else. These configurations were thousands of lines long and had blast radii that made engineers nervous.

Now we are decomposing into smaller, composable modules with for_each for dynamic instantiation. Each team gets a module that describes their service's infrastructure, and the top-level configuration composes those modules based on a YAML inventory. Changes to one team's infrastructure do not affect another's. The blast radius shrinks. The plan output is readable. Engineers are less nervous.
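A sketch of what that top-level composition might look like, assuming a services.yaml inventory keyed by service name (the file layout and field names are hypothetical):

```hcl
locals {
  # services.yaml maps service name -> { image, replicas, ... }
  inventory = yamldecode(file("${path.module}/services.yaml"))
}

module "service" {
  for_each = local.inventory
  source   = "./modules/microservice"

  name     = each.key
  image    = each.value.image
  replicas = each.value.replicas
}
```

Teams change the inventory file; the module internals stay owned by the platform group. The plan output then shows only the instances whose inventory entries changed.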

This is the direction infrastructure as code needs to go: from scripts that describe infrastructure to programs that compose it. Terraform 0.13 does not complete that journey, but it removes several of the barriers that were blocking progress.

The release is not flashy. It will not generate conference keynotes or trending tweets. But for anyone managing Terraform at scale, it is one of the most impactful releases in the project's history. Sometimes the most important changes are the ones that make the boring parts work better.
