AWS IAM Role - AccessDenied error in one pod

I have a service account which I am trying to use across multiple pods installed in the same namespace.

One of the pods is created by the Airflow KubernetesPodOperator. The other is created via Helm, as a Kubernetes Deployment.

In the Airflow deployment, I can see the IAM role being assigned and DynamoDB tables being created, listed, etc. However, in the second Helm chart deployment (or in a test pod, created as shown here), I keep getting an AccessDenied error for CreateTable in DynamoDB.

I can see the AWS role ARN being assigned to the service account, the service account being applied to the pod, and the corresponding token file being created, but I still get the AccessDenied exception.

arn:aws:sts::1234567890:assumed-role/MyCustomRole/aws-sdk-java-1636152310195 is not authorized to perform: dynamodb:CreateTable on resource

ServiceAccount

Name:                mypipeline-service-account
Namespace:           abc-qa-daemons
Labels:              app.kubernetes.io/managed-by=Helm
                     chart=abc-pipeline-main.651
                     heritage=Helm
                     release=ab-qa-pipeline
                     tier=mypipeline
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/MyCustomRole
                     meta.helm.sh/release-name: ab-qa-pipeline
                     meta.helm.sh/release-namespace: abc-qa-daemons
Image pull secrets:  <none>
Mountable secrets:   mypipeline-service-account-token-6gm5b
Tokens:              mypipeline-service-account-token-6gm5b

P.S.: The client code is the same for both the pod created by KubernetesPodOperator and the Helm chart deployment, i.e. the same Docker image. Other attributes such as nodeSelector, tolerations, and volume mounts are also the same.

The describe pod output for both of them is similar, with just some name and label changes. The KubernetesPodOperator pod has a QoS class of Burstable while the Helm chart one is BestEffort.

Why do I get AccessDenied in the Helm deployment but not in the KubernetesPodOperator pod? How can I debug this issue?

CodePudding user response:

Whenever you get an AccessDenied exception, there are two possible reasons:

  1. You have assigned the wrong role
  2. The assigned role doesn't have the necessary permissions

In my case, the latter was the issue. The permissions assigned to a particular role can be more granular than you might expect.

For example, in my case, the role was only allowed to create/describe DynamoDB tables whose names start with a specific prefix, not all DynamoDB tables.
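
For illustration, such a prefix-restricted policy might look roughly like the sketch below (the abc-qa- table prefix and the region are hypothetical; the actual policy attached to MyCustomRole will differ):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "dynamodb:CreateTable",
            "dynamodb:DescribeTable"
          ],
          "Resource": "arn:aws:dynamodb:us-east-1:1234567890:table/abc-qa-*"
        }
      ]
    }

With a policy like this, a CreateTable call for any table whose name does not start with abc-qa- is rejected with exactly the kind of AccessDenied error shown above.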

So, it is always advisable to check the IAM role permissions whenever you get this error.

As mentioned in the question, be sure to check the service account from a test pod that uses the awscli image.
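
Something along the following lines should work, assuming the namespace and service account names from the question (the pod name awscli-test is arbitrary):

    kubectl run awscli-test -n abc-qa-daemons --rm -it --restart=Never \
      --image=amazon/aws-cli \
      --overrides='{"apiVersion": "v1", "spec": {"serviceAccountName": "mypipeline-service-account"}}' \
      --command -- aws sts get-caller-identity

If IRSA is wired up correctly, the returned ARN should be an assumed-role ARN for MyCustomRole rather than the node's instance role.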


Keep in mind that there is a credential provider chain in the AWS SDKs which determines the credentials used by your application. In most cases the DefaultAWSCredentialsProviderChain is used, and its order is given below. Ensure that the SDK is picking up the intended provider (in our case, WebIdentityTokenCredentialsProvider):

    // Provider order inside the DefaultAWSCredentialsProviderChain constructor
    // (AWS SDK for Java v1; the exact order can vary between SDK versions)
    super(new EnvironmentVariableCredentialsProvider(),
          new SystemPropertiesCredentialsProvider(),
          new ProfileCredentialsProvider(),
          WebIdentityTokenCredentialsProvider.create(),
          new EC2ContainerCredentialsProviderWrapper());

Additionally, you might want to set the AWS SDK classes to DEBUG level in your logging configuration to see which credentials provider is being picked up and why.
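
With the AWS SDK for Java v1 (which the snippet above is taken from), that usually means raising the com.amazonaws loggers to DEBUG. For example, with log4j 1.x the entries might look like this (adapt them to whatever logging backend you actually use):

    # Logs which provider in the credential chain ends up supplying credentials
    log4j.logger.com.amazonaws.auth=DEBUG
    # Logs the HTTP requests the SDK sends and the responses it receives
    log4j.logger.com.amazonaws.request=DEBUG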


To check whether the service account is applied to a pod, describe the pod and check whether the AWS environment variables are set on it, such as AWS_REGION, AWS_DEFAULT_REGION, AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.

If they are not, check whether your service account has the AWS annotation eks.amazonaws.com/role-arn by describing that service account.
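
Assuming the names from the question (the pod name is a placeholder), the two checks could look like this:

    # Look for AWS_ROLE_ARN / AWS_WEB_IDENTITY_TOKEN_FILE injected by the EKS webhook
    kubectl describe pod <your-pod-name> -n abc-qa-daemons

    # Confirm the eks.amazonaws.com/role-arn annotation on the service account
    kubectl describe serviceaccount mypipeline-service-account -n abc-qa-daemons

If the variables are missing only on the Helm-deployed pod, double-check that its spec.serviceAccountName actually points at this service account.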
