Infrastructure as Code
Up and Running with the Kubernetes Terraform Provider
Essential guide to managing Kubernetes resources with OpenTofu or Terraform.
The Kubernetes Terraform provider lets you manage Kubernetes resources with the same Infrastructure as Code workflow you use for cloud infrastructure. That gives you shared lifecycle control, dependency tracking, and a single delivery pattern for a useful slice of cluster resources.
It is not the right tool for every workload, but it is a strong fit when you want infrastructure and Kubernetes objects to be described and applied together.
When to Use the Kubernetes Provider
The Kubernetes provider is particularly useful when you want to:
- manage infrastructure and a subset of cluster resources in one workflow
- use Terraform’s dependency graph to stage cluster setup
- keep tooling consistent across the estate
- manage infrastructure-like Kubernetes resources such as namespaces, services, ingress, or platform components
For large application releases, dedicated tools such as Helm, Kustomize, or Argo CD are often a better fit.
Authentication and Setup
For managed Kubernetes services, cloud-native authentication is usually the cleanest option. On EKS that often means using aws_eks_cluster and aws_eks_cluster_auth data sources to configure the provider.
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.16"
    }
  }
}
data "aws_eks_cluster" "cluster" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "cluster" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Requires an AWS CLI recent enough to support the v1 exec credential API.
  exec {
    api_version = "client.authentication.k8s.io/v1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }

  # Alternative to exec: token = data.aws_eks_cluster_auth.cluster.token
}
You can also point at a local kubeconfig when working interactively:
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-cluster-context"
}
If Terraform or OpenTofu runs inside the cluster, in-cluster authentication can also work well for tightly scoped automation.
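As a sketch of that last pattern: when no explicit configuration is supplied and Terraform runs in a pod, the provider can fall back to in-cluster configuration using the pod's service account (check the provider documentation for your version, and make sure the service account has the RBAC permissions your resources need):

```hcl
# Running inside a pod with no explicit configuration: the provider
# falls back to in-cluster config via the pod's service account.
provider "kubernetes" {
}
```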
Core Kubernetes Resources
The provider works well for core resource definitions such as namespaces, deployments, and services.
Namespace
resource "kubernetes_namespace" "app" {
  metadata {
    name = "my-application"

    labels = {
      environment = "production"
      managed-by  = "terraform"
    }
  }
}
Deployment
resource "kubernetes_deployment" "app" {
  metadata {
    name      = "nginx-deployment"
    namespace = kubernetes_namespace.app.metadata[0].name

    labels = {
      app = "nginx"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:1.26"

          port {
            container_port = 80
          }

          resources {
            limits = {
              cpu    = "500m"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "128Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80
            }
            initial_delay_seconds = 30
            period_seconds        = 10
          }

          readiness_probe {
            http_get {
              path = "/"
              port = 80
            }
            initial_delay_seconds = 5
            period_seconds        = 5
          }
        }
      }
    }
  }
}
Service
resource "kubernetes_service" "app" {
  metadata {
    name      = "nginx-service"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    selector = {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
      protocol    = "TCP"
    }

    type = "ClusterIP"
  }
}
Configuration Management
You can also manage ConfigMaps and Secrets, although secrets deserve a bit more care because they still become part of your Terraform workflow and state handling.
resource "kubernetes_config_map" "app_config" {
  metadata {
    name      = "app-config"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  data = {
    "app.properties" = <<-EOF
      database.host=db.example.com
      database.port=5432
      log.level=INFO
    EOF
    "config.json" = jsonencode({
      debug   = false
      timeout = 30
      retries = 3
    })
  }
}
If you do manage secrets with the provider, make sure your backend and access controls are designed with that in mind.
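As an illustration (the secret name and variable below are hypothetical), a Secret managed through the provider looks like any other resource. Note that you supply values as cleartext and the provider handles the base64 encoding, and that those values land in Terraform state, so the backend must be protected accordingly:

```hcl
resource "kubernetes_secret" "app_credentials" {
  metadata {
    name      = "app-credentials"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  # Values are supplied as cleartext; the provider base64-encodes them.
  # They are stored in Terraform state, so secure the backend and access.
  data = {
    username = "app-user"
    password = var.db_password # hypothetical variable; mark it sensitive
  }

  type = "Opaque"
}
```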
Scaling and Traffic Management
Autoscaling and ingress are both manageable through the provider.
Horizontal Pod Autoscaler
resource "kubernetes_horizontal_pod_autoscaler_v2" "app_hpa" {
  metadata {
    name      = "nginx-hpa"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    min_replicas = 2
    max_replicas = 10

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = kubernetes_deployment.app.metadata[0].name
    }

    metric {
      type = "Resource"
      resource {
        name = "cpu"
        target {
          type                = "Utilization"
          average_utilization = 70
        }
      }
    }

    metric {
      type = "Resource"
      resource {
        name = "memory"
        target {
          type                = "Utilization"
          average_utilization = 80
        }
      }
    }
  }
}
Ingress
resource "kubernetes_ingress_v1" "app_ingress" {
  metadata {
    name      = "app-ingress"
    namespace = kubernetes_namespace.app.metadata[0].name

    annotations = {
      "cert-manager.io/cluster-issuer"             = "letsencrypt-prod"
      "nginx.ingress.kubernetes.io/rewrite-target" = "/"
    }
  }

  spec {
    # Preferred over the deprecated kubernetes.io/ingress.class annotation.
    ingress_class_name = "nginx"

    tls {
      hosts       = ["app.example.com"]
      secret_name = "app-tls"
    }

    rule {
      host = "app.example.com"

      http {
        path {
          path      = "/"
          path_type = "Prefix"

          backend {
            service {
              name = kubernetes_service.app.metadata[0].name

              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}
Migrating from YAML to Terraform
If you already have Kubernetes manifests, migration tools can help you bootstrap the first pass of a provider-based configuration.
Two common approaches are:
- k2tf, which generates native Terraform Kubernetes resources
- tfk8s, which generates kubernetes_manifest blocks from existing YAML
k2tf is usually the better option when you want typed resources and more idiomatic Terraform:
go install github.com/sl1pm4t/k2tf@latest
tfk8s is useful when you want a faster, more literal translation:
brew install tfk8s
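For context, a kubernetes_manifest block mirrors the YAML structure one-to-one as an HCL object, which is why the translation is so literal. A minimal sketch (the ConfigMap here is illustrative):

```hcl
# kubernetes_manifest embeds the manifest structure directly as HCL,
# so any field valid in the YAML is valid here.
resource "kubernetes_manifest" "app_config" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "app-config"
      namespace = "my-application"
    }
    data = {
      "log.level" = "INFO"
    }
  }
}
```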
For example:
# Render Helm or Kustomize output first
kustomize build overlays/production > production-manifests.yaml
# Convert with k2tf
k2tf -f production-manifests.yaml -o production.tf
# Or convert with tfk8s
tfk8s -f production-manifests.yaml -o production.tf
After conversion, you usually still want to tidy the output into variables, locals, and reusable patterns rather than keeping the generated file as-is.
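For instance, a converted deployment can be parameterised rather than left hardcoded (the variable and local names below are illustrative, not part of any converter's output):

```hcl
variable "app_image" {
  type    = string
  default = "nginx:1.26"
}

variable "replicas" {
  type    = number
  default = 3
}

locals {
  app_labels = {
    app = "nginx"
  }
}

# The generated resource then references these instead of literals:
#   replicas = var.replicas
#   image    = var.app_image
#   labels   = local.app_labels
```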
Implementation Best Practices
The patterns that hold up best are:
- use cloud-provider authentication for managed clusters
- pin the provider version
- set resource requests and limits explicitly
- use namespaces to keep resources organised
- add liveness and readiness probes
- let Terraform manage dependencies through resource references
- prefer the provider for infrastructure-like cluster resources rather than complex app-release lifecycles
Important Considerations
The Kubernetes provider is useful, but it is not a full replacement for all cluster tooling:
- some Kubernetes features are still easier to manage with dedicated tools
- state can get messy if cluster changes happen frequently outside Terraform
- large application deployments are often better handled by Helm, Kustomize, or GitOps controllers
Conclusion
The Kubernetes Terraform provider is strongest when you use it where Terraform naturally adds value: cluster-adjacent resources, shared platform components, and Kubernetes objects that benefit from being tied directly to the surrounding infrastructure.
That makes it a useful tool in a broader platform workflow, even if it is not the right answer for every workload.