Secrets Management at Scale with External Secrets Operator
Transform your Kubernetes secrets management with External Secrets Operator. Discover how to sync secrets from AWS, Vault, and more whilst maintaining security and operational control at scale.
Managing secrets at scale across multiple environments, applications, and cloud providers presents some awkward operational problems. Traditional approaches often lead to secrets sprawl, inconsistent access patterns, and more sensitive values ending up in places they should never live. External Secrets Operator (ESO) gives you a Kubernetes-native way to centralise secret retrieval whilst keeping consumption inside familiar Kubernetes workflows.
ESO synchronises secrets from systems such as AWS Secrets Manager, HashiCorp Vault, Azure Key Vault, and Google Secret Manager into Kubernetes secrets. That removes the need to bake credentials into images or hard-code them into configuration, whilst still enabling automated rotation and centralised access control.
Getting Started with External Secrets Operator
Installing ESO with Helm is straightforward, and the chart gives you enough control to start from a sensible production baseline:
# Add the External Secrets Operator Helm repository
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
# Create a dedicated namespace
kubectl create namespace external-secrets-system
# Install with a production-oriented baseline
helm install external-secrets external-secrets/external-secrets \
--namespace external-secrets-system \
--set installCRDs=true \
--set replicaCount=2 \
--set resources.limits.cpu=200m \
--set resources.limits.memory=256Mi \
--set resources.requests.cpu=100m \
--set resources.requests.memory=128Mi \
--set serviceMonitor.enabled=true \
--set webhook.replicaCount=2
For production environments, keep your Helm values in version control and apply them through your normal GitOps or delivery workflow.
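The same baseline as a values file, ready to commit to a repository (file name and layout are illustrative):

```yaml
# values.yaml -- mirrors the flags used in the install command above
installCRDs: true
replicaCount: 2
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
serviceMonitor:
  enabled: true
webhook:
  replicaCount: 2
```

Applied with helm upgrade --install external-secrets external-secrets/external-secrets -n external-secrets-system -f values.yaml, this keeps the configuration reviewable and diffable alongside the rest of your platform code.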
Configuring AWS Secrets Manager Integration
AWS integration is a good example of how ESO keeps the control flow clear. On EKS, using IAM Roles for Service Accounts (IRSA) remains the cleanest way to grant ESO access without distributing long-lived AWS keys.
IAM policy for ESO
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"],
      "Resource": "arn:aws:secretsmanager:*:ACCOUNT-ID:secret:app/*"
    }
  ]
}
SecretStore configuration
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-secrets-sa
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT-ID:role/external-secrets-role
The SecretStore acts as the configuration hub for a given backend. Once it is defined, multiple ExternalSecret resources can reference it, which keeps the setup reusable across teams and namespaces.
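When many namespaces need the same backend, ESO's cluster-scoped variant, ClusterSecretStore, avoids repeating the store in every namespace. A sketch, assuming the operator's service account lives in external-secrets-system (names are illustrative):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager-global
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
            # cluster-scoped stores must say which namespace the SA lives in
            namespace: external-secrets-system
```

ExternalSecret resources then reference it with kind: ClusterSecretStore in their secretStoreRef.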
Understanding ExternalSecrets
The ExternalSecret resource is where declarative secret management becomes useful in practice. It defines what to fetch, how often to refresh it, and what Kubernetes secret should be created from the remote values.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 300s
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: postgres-credentials
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: database/postgres/production
        property: username
    - secretKey: password
      remoteRef:
        key: database/postgres/production
        property: password
    - secretKey: host
      remoteRef:
        key: database/postgres/production
        property: host
That creates a normal Kubernetes secret called postgres-credentials. From the application’s point of view it behaves like any other secret, but the retrieval, refresh, and ownership are handled by ESO.
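With creationPolicy: Owner, the generated secret carries an owner reference back to the ExternalSecret, so deleting the ExternalSecret garbage-collects the secret too. The result looks roughly like this (values and owner metadata trimmed to placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
  namespace: production
  ownerReferences:
    # set automatically by ESO because of creationPolicy: Owner
    - apiVersion: external-secrets.io/v1beta1
      kind: ExternalSecret
      name: database-credentials
type: Opaque
data:
  username: <base64>
  password: <base64>
  host: <base64>
```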
Consuming Secrets in Applications
Once ESO creates the secret, applications consume it using standard Kubernetes patterns:
containers:
  - name: app
    image: myapp:latest
    env:
      - name: DATABASE_HOST
        valueFrom:
          secretKeyRef:
            name: postgres-credentials
            key: host
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: postgres-credentials
            key: password
That consistency matters. Teams can keep using familiar deployment patterns without caring where the secret originally came from.
Automatic Updates and Pod Restarts
ESO keeps the target secret synchronised, but that does not automatically mean your application has re-read it. Mounted secret volumes update in place (unless they are mounted via subPath, which pins the file at pod start), whereas environment variables sourced from secrets never refresh inside already-running pods.
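If your application can re-read configuration at runtime, mounting the secret as a volume (without subPath) sidesteps the restart problem entirely; the kubelet propagates updates after its sync delay:

```yaml
containers:
  - name: app
    image: myapp:latest
    volumeMounts:
      - name: db-creds
        # files appear as /etc/secrets/db/username, /etc/secrets/db/password, ...
        mountPath: /etc/secrets/db
        readOnly: true
volumes:
  - name: db-creds
    secret:
      secretName: postgres-credentials
```

For applications that only read configuration at startup, a restart is still required, which is where the next pattern comes in.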
This is where pairing ESO with a restart controller such as Reloader becomes useful:
helm repo add stakater https://stakater.github.io/stakater-charts
helm install reloader stakater/reloader --namespace external-secrets-system
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-application
  namespace: production
  annotations:
    reloader.stakater.com/auto: 'true'
spec:
  template:
    spec:
      containers:
        - name: app
          image: myapp:latest
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
ESO and Reloader together give you a practical operational pattern: ESO keeps the secret current, and Reloader ensures pods are restarted when updates matter.
Advanced Secret Transformation
One of ESO’s strongest features is templating. You can combine several remote values into a more useful application-facing configuration bundle:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: application-config
  namespace: production
spec:
  refreshInterval: 300s
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: app-configuration
    creationPolicy: Owner
    template:
      type: Opaque
      engineVersion: v2
      data:
        config.yaml: |
          database:
            url: "postgresql://{{ .db_username }}:{{ .db_password }}@{{ .db_host }}:5432/{{ .db_name }}"
            pool_size: 20
          redis:
            url: "redis://{{ .redis_password }}@{{ .redis_host }}:6379/0"
        .env.production: |
          DATABASE_URL=postgresql://{{ .db_username }}:{{ .db_password }}@{{ .db_host }}:5432/{{ .db_name }}
          REDIS_URL=redis://{{ .redis_password }}@{{ .redis_host }}:6379/0
  data:
    - secretKey: db_username
      remoteRef:
        key: database/postgres/production
        property: username
    - secretKey: db_password
      remoteRef:
        key: database/postgres/production
        property: password
    - secretKey: db_host
      remoteRef:
        key: database/postgres/production
        property: host
    - secretKey: db_name
      remoteRef:
        key: database/postgres/production
        property: database
    - secretKey: redis_password
      remoteRef:
        key: cache/redis/production
        property: password
    - secretKey: redis_host
      remoteRef:
        key: cache/redis/production
        property: host
That lets you keep the source of truth external whilst still delivering application-ready config to workloads inside the cluster.
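When you simply want every key of a remote secret rather than naming each property, ESO's dataFrom.extract does the mapping in one step. A sketch against the same store (secret name illustrative):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: postgres-all-keys
  namespace: production
spec:
  refreshInterval: 300s
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: postgres-all-keys
    creationPolicy: Owner
  dataFrom:
    # every JSON key in the remote secret becomes a key in the target secret
    - extract:
        key: database/postgres/production
```

This trades explicitness for brevity: new keys added upstream appear in the cluster automatically, which may or may not be what you want.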
Vault Integration
For teams already using Vault, ESO gives you a straightforward way to consume static or dynamic secrets without pushing Vault access logic down into every application:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: production
spec:
  provider:
    vault:
      server: 'https://vault.company.com'
      path: 'secret'
      version: 'v2'
      auth:
        jwt:
          path: 'jwt'
          serviceAccountRef:
            name: vault-auth-sa
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: dynamic-db-credentials
  namespace: production
spec:
  refreshInterval: 300s
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: dynamic-postgres-creds
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: database/creds/readonly
        property: username
    - secretKey: password
      remoteRef:
        key: database/creds/readonly
        property: password
If your backend supports dynamic credentials, ESO makes it easier to consume them operationally without abandoning standard Kubernetes secret patterns.
Scaling ESO
For clusters with large numbers of ExternalSecret resources, the default settings may need some tuning:
helm upgrade external-secrets external-secrets/external-secrets \
--namespace external-secrets-system \
--set replicaCount=3 \
--set resources.limits.cpu=500m \
--set resources.limits.memory=512Mi \
--set webhook.replicaCount=3 \
--set concurrent=15 \
--set extraArgs.enable-flood-protection=true
If you want scaling to follow workload, enabling autoscaling is also sensible:
helm upgrade external-secrets external-secrets/external-secrets \
--set autoscaling.enabled=true \
--set autoscaling.minReplicas=2 \
--set autoscaling.maxReplicas=10 \
--set autoscaling.targetCPUUtilizationPercentage=70
Security Best Practices
ESO should still be treated like a privileged platform component. Apply normal controls around it:
- grant the smallest possible permissions in the backing secret store
- use namespace scoping where practical
- restrict outbound network paths
- expose metrics for operator health and sync failures
- keep secret ownership obvious so application teams know what depends on what
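The metrics point above pairs naturally with alerting. If you run the Prometheus Operator, a rule like the following flags ExternalSecrets that stop syncing; the metric name here is based on what ESO currently exposes, so verify it against your deployed version:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: external-secrets-alerts
  namespace: external-secrets-system
spec:
  groups:
    - name: external-secrets
      rules:
        - alert: ExternalSecretNotReady
          # fires when an ExternalSecret reports Ready=False for 15 minutes
          expr: externalsecret_status_condition{condition="Ready", status="False"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: 'ExternalSecret {{ $labels.name }} has not been Ready for 15 minutes'
```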
A simple network policy baseline might look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-secrets-network-policy
  namespace: external-secrets-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: external-secrets
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to: []
      ports:
        - protocol: UDP
          port: 53
    - to: []
      ports:
        - protocol: TCP
          port: 443
Conclusion
External Secrets Operator gives platform teams a practical way to centralise secret retrieval without making secret consumption awkward for application teams. It fits well with Kubernetes, works across common secret backends, and supports the kind of operational patterns you need once clusters and environments start to multiply.
If you need secrets management that scales without copying credentials into every pipeline, manifest, or application repository, ESO is one of the cleanest building blocks available.