Infrastructure as Code
Testing OpenTofu / Terraform
Explore tools and practical strategies for integrating robust testing into your OpenTofu and Terraform workflows.
Infrastructure as Code has transformed how we build and operate platforms, but the move to code also means we need a more disciplined testing approach. OpenTofu and Terraform now both support native testing workflows, which removes a lot of the historical need for external test harnesses just to validate module behaviour.
This guide focuses on practical testing patterns that work across both platforms, with an emphasis on fast feedback, clearer validation, and test structure that teams can actually maintain.
The Evolution of Infrastructure Testing
Infrastructure testing has matured quickly. Earlier approaches often relied on external tools such as Terratest, which are still useful in some scenarios but add more languages, more tooling, and more pipeline complexity. Native testing changes that by letting teams stay inside HCL for many common testing cases.
That matters because it lowers the barrier to doing the right thing. If tests look and feel like the rest of your IaC, teams are much more likely to keep them up to date.
Why Native Testing Matters
Traditional testing approaches often meant:
- learning an additional language such as Go
- building separate setup and teardown logic
- maintaining extra CI dependencies
- creating a second workflow around the code you actually care about
The native frameworks improve that by giving you:
- tests written in HCL
- built-in state management and cleanup
- support for both plan and apply testing modes
- a workflow that works across OpenTofu and Terraform
Getting Started with a First Test
Consider a small module that creates an S3 bucket with some straightforward validation and versioning logic:
variable "environment" {
  description = "The environment name"
  type        = string

  validation {
    condition     = can(regex("^(dev|staging|prod)$", var.environment))
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "project_name" {
  description = "The project name"
  type        = string

  validation {
    condition     = length(var.project_name) > 2 && length(var.project_name) < 20
    error_message = "Project name must be between 3 and 19 characters."
  }
}

resource "aws_s3_bucket" "main" {
  bucket = "${var.project_name}-${var.environment}-data"
}

resource "aws_s3_bucket_versioning" "main" {
  bucket = aws_s3_bucket.main.id

  versioning_configuration {
    status = var.environment == "prod" ? "Enabled" : "Suspended"
  }
}

output "bucket_name" {
  value = aws_s3_bucket.main.id
}

output "versioning_enabled" {
  value = aws_s3_bucket_versioning.main.versioning_configuration[0].status == "Enabled"
}
You can test that module with a .tftest.hcl file:
variables {
  environment  = "dev"
  project_name = "myapp"
}

run "valid_bucket_naming" {
  command = plan

  assert {
    condition     = aws_s3_bucket.main.bucket == "myapp-dev-data"
    error_message = "Bucket name should follow the pattern project-environment-data."
  }
}

run "versioning_disabled_for_dev" {
  command = plan

  variables {
    environment  = "dev"
    project_name = "testproject"
  }

  assert {
    condition     = aws_s3_bucket_versioning.main.versioning_configuration[0].status == "Suspended"
    error_message = "Versioning should be suspended for dev."
  }
}

run "versioning_enabled_for_prod" {
  command = plan

  variables {
    environment  = "prod"
    project_name = "testproject"
  }

  assert {
    condition     = aws_s3_bucket_versioning.main.versioning_configuration[0].status == "Enabled"
    error_message = "Versioning should be enabled for prod."
  }
}
Testing Validation Logic
Validation paths are worth testing explicitly. One of the strengths of the native frameworks is that they let you assert failure conditions in a structured way:
run "invalid_environment_fails" {
  command = plan

  variables {
    environment  = "invalid"
    project_name = "myapp"
  }

  expect_failures = [
    var.environment,
  ]
}

run "project_name_too_short_fails" {
  command = plan

  variables {
    environment  = "dev"
    project_name = "xy"
  }

  expect_failures = [
    var.project_name,
  ]
}
That gives you quick coverage for common edge cases without spinning up real infrastructure.
Mocking and Overrides
For more advanced scenarios, mocking and overrides help you test logic without paying the cost of provisioning every resource:
mock_provider "aws" {
  alias = "mock"
}

run "integration_test_with_mocks" {
  providers = {
    aws = aws.mock
  }

  variables {
    environment  = "prod"
    project_name = "integration-test"
  }

  assert {
    condition     = aws_s3_bucket.main.bucket == "integration-test-prod-data"
    error_message = "Bucket naming integration failed."
  }

  assert {
    condition     = output.versioning_enabled == true
    error_message = "Prod environment should have versioning enabled."
  }
}
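Mock providers can also pin the computed attributes that a real provider would normally generate. As a sketch (the ARN value here is an arbitrary placeholder, not anything the module above requires), a mock_resource block inside mock_provider sets defaults for those attributes:

```hcl
mock_provider "aws" {
  alias = "mock_with_defaults"

  # Computed attributes that would otherwise be filled with generated
  # placeholder values can be pinned via defaults, so assertions that
  # depend on them stay deterministic.
  mock_resource "aws_s3_bucket" {
    defaults = {
      arn = "arn:aws:s3:::placeholder-bucket"
    }
  }
}
```

Pinning defaults like this is mainly useful when downstream resources or outputs reference computed attributes such as ARNs.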
Overrides are also useful when you want to force a particular state and verify downstream logic:
run "test_with_overrides" {
  override_resource {
    target = aws_s3_bucket_versioning.main
    values = {
      versioning_configuration = [{
        status = "Enabled"
      }]
    }
  }

  variables {
    environment  = "dev"
    project_name = "override-test"
  }

  assert {
    condition     = output.versioning_enabled == true
    error_message = "Override should force versioning to be enabled."
  }
}
Unit vs Integration Testing
The simplest way to think about test strategy is:
- use command = plan for fast, low-cost unit tests
- use apply when you genuinely need end-to-end validation
Plan-mode tests are ideal for naming, validation, counts, generated values, and conditional logic:
run "unit_test_bucket_config" {
  command = plan

  assert {
    condition     = aws_s3_bucket.main.bucket != ""
    error_message = "Bucket name must not be empty."
  }
}
Apply-mode tests are better for validating real creation flows:
run "integration_test_full_stack" {
  variables {
    environment  = "dev"
    project_name = "integration"
  }

  assert {
    condition     = aws_s3_bucket.main.id != ""
    error_message = "Bucket should be created successfully."
  }
}
Cross-Platform Compatibility
The testing syntax is largely the same across OpenTofu and Terraform:
- Terraform uses .tftest.hcl
- OpenTofu supports .tftest.hcl and .tofutest.hcl
- sticking to .tftest.hcl is the easiest way to stay compatible
Running tests is straightforward:
# Terraform
terraform test
# OpenTofu
tofu test
# Filter a specific test file
terraform test -filter=specific_test.tftest.hcl
tofu test -filter=specific_test.tftest.hcl
Organising a Test Suite
Keep the suite easy to scan and predictable:
.
├── main.tf
├── variables.tf
├── outputs.tf
└── tests/
├── unit_bucket_config.tftest.hcl
├── validation_rules.tftest.hcl
├── integration_stack.tftest.hcl
└── helpers/
Clear naming helps as the suite grows:
- validation_*.tftest.hcl
- unit_*.tftest.hcl
- integration_*.tftest.hcl
- edge_case_*.tftest.hcl
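The helpers/ directory is a natural home for small setup modules. As a hedged sketch (the ./tests/helpers/setup path and its bucket_prefix output are hypothetical), a run block can execute a helper module, and later run blocks can reference its outputs:

```hcl
# Run a hypothetical setup module before the tests that depend on it.
run "setup" {
  module {
    source = "./tests/helpers/setup"
  }
}

run "uses_setup_output" {
  command = plan

  variables {
    environment  = "dev"
    # Reference an output from the earlier run block.
    project_name = run.setup.bucket_prefix
  }

  assert {
    condition     = aws_s3_bucket.main.bucket != ""
    error_message = "Bucket name should be derived from the setup module output."
  }
}
```

This keeps shared fixture logic in one place instead of repeating it across test files.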
Best Practices
The testing pattern that usually works best is:
- validate inputs and constraints first
- cover the core module logic with plan-mode tests
- add a small number of apply-mode checks for critical paths
- keep tests independent, readable, and specific
For example:
run "valid_input_succeeds" {
  variables {
    environment = "prod"
  }

  assert {
    condition     = aws_s3_bucket.main.id != ""
    error_message = "Valid input should create the bucket."
  }
}

run "invalid_input_fails" {
  variables {
    environment = "invalid"
  }

  expect_failures = [var.environment]
}
Good error messages matter as much here as they do in application tests. If a test fails in CI, the person reading it should immediately understand why.
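For instance, an assertion that states both the expected value and where to look reads far better in a CI log than a bare "assertion failed":

```hcl
assert {
  condition     = aws_s3_bucket.main.bucket == "myapp-dev-data"
  error_message = "Expected bucket name 'myapp-dev-data' (pattern: project-environment-data); check the naming logic in main.tf."
}
```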
Conclusion
Native testing in OpenTofu and Terraform makes infrastructure testing far easier to adopt than it used to be. You can stay in HCL, keep test logic close to the module, and cover a wide range of behaviour before anything reaches a real environment.
For most teams, the right starting point is simple: add plan-mode validation tests first, then expand into a few carefully chosen integration checks. That approach gives you useful confidence quickly without turning the test suite into a project of its own.