Deep Dive: The Definitive Guide to Securely Connecting EKS Pods to AWS with IRSA
In any cloud-native environment, a fundamental question quickly arises: “How do I give my application running in a container secure access to cloud services?” For applications running on Amazon EKS that need to talk to AWS APIs, this question is critical. The old way of creating an IAM user, generating long-lived access keys, and embedding them as secrets in a pod is a security incident waiting to happen.
The modern, secure, and cloud-native answer is IAM Roles for Service Accounts (IRSA).
Through the process of productionising my cost-tracker application, I went on a deep-dive journey through the intricacies of setting up IRSA. It was a process filled with common pitfalls and “aha!” moments that perfectly illustrate the real-world challenges of cloud security. This post is a distillation of that journey: part story, part technical playbook.
What is IRSA and Why is it a Game-Changer?
At its core, IRSA allows you to trade static, long-lived credentials for dynamic, short-lived, and automatically renewed credentials. It cleverly links a Kubernetes-native identity (a ServiceAccount) with an AWS identity (an IAM Role).
Here’s how it works:
- Your EKS cluster is configured with an OpenID Connect (OIDC) provider, which IAM can trust.
- You create an IAM Role with a special “Trust Relationship” policy that says, “I trust the EKS OIDC provider, and I will only allow a specific Kubernetes ServiceAccount to assume me.”
- You create that ServiceAccount in Kubernetes and link it to the IAM Role using an annotation.
- When your pod starts with this ServiceAccount, a webhook in EKS automatically injects a special identity token.
- The AWS SDK inside your application transparently uses this token to call the AWS Security Token Service (STS) and assume the IAM role, receiving secure, temporary credentials in return. (You can see the injected configuration for yourself in the sketch just after this list.)
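If you want to see this machinery for yourself, the injected configuration is visible from inside any pod that runs with an IRSA-enabled ServiceAccount. A minimal, read-only sketch (my-app-pod is a hypothetical pod name, not part of this guide's deployment):

kubectl exec my-app-pod -- env | grep AWS_
# Expect output along these lines (values will differ):
#   AWS_ROLE_ARN=arn:aws:iam::<account-id>:role/<role-name>
#   AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token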
The benefits are immense:
- No More Static Keys: The biggest security win. There are no access keys to be leaked or rotated.
- Least Privilege: You can create fine-grained IAM policies for each application, ensuring a pod can only access the resources it absolutely needs.
- Auditability: Every action is tied to the assumed IAM role, which can be clearly tracked in AWS CloudTrail (an example query follows this list).
- Platform Native: It’s the official, recommended way to handle pod identity on EKS.
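To make the auditability point concrete, you can ask CloudTrail for recent AssumeRoleWithWebIdentity calls; each event records which role was assumed and the OIDC-federated identity that assumed it. A hedged sketch using the AWS CLI (assumes the default CloudTrail event history is sufficient for your needs):

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRoleWithWebIdentity \
  --max-results 5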
The Playbook: A Definitive Guide to Setting up IRSA
The following is a complete, end-to-end guide for deploying an application to EKS with IRSA configured. This guide has been corrected and verified for Kubernetes 1.30+ and includes all the necessary steps, from creating the cluster to the final verification.
Step 0: Create an Amazon EKS Cluster (CLI)
This step uses eksctl to create a new, well-configured EKS cluster. The --with-oidc flag is crucial as it automatically sets up the IAM OIDC provider for you.
# Define your cluster name and region
export CLUSTER_NAME="cost-tracker-cluster"
export CLUSTER_REGION="ap-southeast-2" # Or your preferred region
# Create the cluster using a recent Kubernetes version
eksctl create cluster \
--name ${CLUSTER_NAME} \
--region ${CLUSTER_REGION} \
--version "1.30" \
--nodegroup-name standard-workers \
--node-type t3.small \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--with-oidc
Once complete, eksctl automatically configures kubectl to connect to your new cluster.
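Before moving on, it is worth confirming that --with-oidc actually did its job. These read-only checks print the cluster's OIDC issuer URL and the IAM OIDC providers registered in your account; the issuer should appear in the provider list.

# The cluster's OIDC issuer URL
aws eks describe-cluster --name ${CLUSTER_NAME} --region ${CLUSTER_REGION} \
  --query "cluster.identity.oidc.issuer" --output text

# IAM OIDC providers registered in this account; one should match the issuer above
aws iam list-open-id-connect-providers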
Step 1: Create the IAM Policy & Role for the Application
Here, we define what the application is allowed to do (the Policy) and create an identity for it (the Role) that trusts our EKS cluster.
- Create the IAM Policy (cost-tracker-iam-policy.json):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ce:GetCostAndUsage"
      ],
      "Resource": "*"
    }
  ]
}

Create this policy in AWS with the command:

aws iam create-policy --policy-name CostTrackerPolicy --policy-document file://cost-tracker-iam-policy.json

Note the Policy ARN from the output.
- Create the IAM Role with a Trust Relationship:

This step requires your AWS Account ID and your cluster’s OIDC Provider URL. Use these commands to generate a correct trust-policy.json file.

export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export OIDC_PROVIDER=$(aws eks describe-cluster --name ${CLUSTER_NAME} --region ${CLUSTER_REGION} --query "cluster.identity.oidc.issuer" --output text | sed 's|^https://||')

cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:cost-tracker-sa"
        }
      }
    }
  ]
}
EOF

Now, create the role and attach the policy (a read-back check of the stored trust policy follows this step):

aws iam create-role --role-name CostTrackerRole --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name CostTrackerRole --policy-arn <POLICY_ARN_FROM_PREVIOUS_STEP>
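To catch the trust-policy typos discussed later in this post, read back what IAM actually stored and compare the Federated provider and the :sub condition against your cluster's OIDC issuer and ServiceAccount name:

aws iam get-role --role-name CostTrackerRole \
  --query "Role.AssumeRolePolicyDocument" --output json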
Step 2: Configure and Deploy Kubernetes Resources
Now we apply our application’s configuration to the cluster.
- kubernetes/configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cost-tracker-config
data:
  COSTTRACKER_DAYS: "30"
  AWS_REGION: "ap-southeast-2" # Match your cluster's region
- kubernetes/serviceaccount.yaml:

This is the key to IRSA. The annotation links our Kubernetes ServiceAccount to the IAM Role.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cost-tracker-sa
  namespace: default
  annotations:
    # This is the magic link to your IAM Role. Replace the placeholder.
    eks.amazonaws.com/role-arn: "<YOUR_IAM_ROLE_ARN>"
- kubernetes/cronjob.yaml:

The final piece, telling Kubernetes to run the job with our ServiceAccount.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cost-tracker-cronjob
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        spec:
          serviceAccountName: cost-tracker-sa
          containers:
            - name: cost-tracker
              image: ghcr.io/jayzsec/cost-tracker:latest
              envFrom:
                - configMapRef:
                    name: cost-tracker-config
                - secretRef:
                    name: cost-tracker-secret
          restartPolicy: OnFailure
- Apply everything to the cluster (a quick verification of the ServiceAccount annotation follows this list):

kubectl apply -f kubernetes/configmap.yaml
# If using Sealed Secrets for the 'cost-tracker-secret', apply it here.
# kubectl apply -f kubernetes/sealed-secret.yaml
kubectl apply -f kubernetes/serviceaccount.yaml
kubectl apply -f kubernetes/cronjob.yaml
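A quick, read-only check that the annotation landed where the EKS webhook expects it:

kubectl describe serviceaccount cost-tracker-sa -n default
# Or pull just the role annotation:
kubectl get serviceaccount cost-tracker-sa -n default \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'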
Step 3: Final Verification
Run a manual job to test the entire setup immediately.
# Create a new test job
kubectl create job irsa-test-run --from=cronjob/cost-tracker-cronjob
# Check the logs after a few seconds
sleep 5
POD_NAME=$(kubectl get pods --selector=job-name=irsa-test-run --output=jsonpath='{.items[0].metadata.name}')
kubectl logs $POD_NAME
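If the application logs are ambiguous, a more direct test is to run a throwaway pod under the same ServiceAccount and ask STS who it is. This is a sketch rather than part of the deployment itself; it assumes your nodes can pull the public amazon/aws-cli image:

# A successful IRSA setup returns an assumed-role ARN containing CostTrackerRole
kubectl run irsa-check --rm -it --restart=Never \
  --image=amazon/aws-cli \
  --overrides='{"spec":{"serviceAccountName":"cost-tracker-sa"}}' \
  -- sts get-caller-identity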
Key Takeaways and Lessons from the Trenches
The path to a successful IRSA implementation was paved with common, real-world errors. Here are the most valuable lessons learned:
- The Trust Relationship is Everything. The most frequent point of failure was the IAM Role’s Trust Policy. The final error log, AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity, pointed directly to this. A single typo in the OIDC provider URL or the service account name in the policy’s Condition will cause AWS to reject the pod’s identity. Always generate this policy programmatically or double-check it meticulously.
- IRSA is an EKS Feature. My initial attempts on a local kind cluster failed because the EKS Pod Identity Webhook (the component that injects the identity token) doesn’t exist there. This highlighted the importance of understanding your target platform’s capabilities. For local development, you must fall back to other methods, like injecting temporary credentials via a secret.
- kubectl describe is Your Best Friend. When a pod fails to start, logs aren’t always available. kubectl describe pod <pod-name> was invaluable. It revealed CreateContainerConfigError when a Secret was missing and pointed towards ImagePullBackOff when my container registry was private. The Events section at the bottom is pure gold for debugging.
- kubectl Auth vs. AWS Auth. I ran into a phase where kubectl itself was getting access denied errors. This taught me the crucial difference between authenticating to the Kubernetes API server (managed by the kubeconfig and the aws-auth ConfigMap) and the pod authenticating to AWS services (managed by IRSA). They are two separate security boundaries that must both be configured correctly. (The read-only commands after this list show how to inspect each side.)
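For reference, these are the read-only commands I reach for when untangling the two boundaries above; none of them change cluster state:

# Pod-level failures: the Events section at the bottom is where the real error usually lives
kubectl describe pod <pod-name>

# Which AWS identity is kubectl/eksctl acting as? (This identity must be mapped for cluster access.)
aws sts get-caller-identity

# How are IAM identities mapped to Kubernetes users and groups?
kubectl -n kube-system get configmap aws-auth -o yaml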
Conclusion
Setting up IRSA is more than just a configuration task; it’s an exercise in understanding modern cloud-native security principles. It forces you to think about least privilege, ephemeral credentials, and the interplay between your orchestration platform (Kubernetes) and your cloud provider (AWS). While the path can be tricky, the result is a secure, professional, and auditable system that is the standard for any production workload on EKS. Mastering this process is a key differentiator for any engineer operating in the cloud.