Unable to access EKS cluster using the role that created it


I created an EKS cluster from an EC2 instance that has my-cluster-role attached via its instance profile, using the AWS CLI:

aws eks create-cluster --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role --resources-vpc-config subnetIds=subnet-abcd123,subnet-wxyz345,securityGroupIds=sg-123456,endpointPublicAccess=false,endpointPrivateAccess=true

Then I generated the kubeconfig file:

aws eks --region us-east-1 update-kubeconfig --name my-cluster

But when trying to access Kubernetes resources, I get the error below:

[root@k8s-mgr ~]# kubectl get deployments --all-namespaces
Error from server (Forbidden): deployments.apps is forbidden: User "system:node:i-xxxxxxxx" cannot list resource "deployments" in API group "apps" at the cluster scope

Except for pods and services, no other resource is accessible.

Note that the cluster was created using the role my-cluster-role; per the documentation, this role should have permission to access the cluster's resources.

[root@k8s-mgr ~]# aws sts get-caller-identity
{
    "Account": "012345678910", 
    "UserId": "ABCDEFGHIJKKLMNO12PQR:i-xxxxxxxx", 
    "Arn": "arn:aws:sts::012345678910:assumed-role/my-cluster-role/i-xxxxxxxx"
}
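
The system:node:i-xxxxxxxx username in the error above suggests the role may be mapped to the node groups in the aws-auth ConfigMap rather than to an admin group. If another identity with cluster access is available, the mapping can be checked with standard kubectl (a diagnostic sketch; the actual mapRoles contents will differ per cluster):

kubectl -n kube-system get configmap aws-auth -o yaml
# A mapRoles entry along these lines would explain the system:node identity:
#   - rolearn: arn:aws:iam::012345678910:role/my-cluster-role
#     username: system:node:{{EC2PrivateDNSName}}
#     groups:
#       - system:bootstrappers
#       - system:nodes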

Edit: I tried creating a ClusterRole and ClusterRoleBinding as suggested here: https://stackoverflow.com/a/70125670/7654693

Error:

[root@k8s-mgr]# kubectl apply -f access.yaml 
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRole"
Name: "eks-console-dashboard-full-access-clusterrole", Namespace: ""
from server for: "access.yaml": clusterroles.rbac.authorization.k8s.io "eks-console-dashboard-full-access-clusterrole" is forbidden: User "system:node:i-xxxxxxxx" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "eks-console-dashboard-full-access-binding", Namespace: ""

Below is my Kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      command: aws

CodePudding user response:

Create a ClusterRole and ClusterRoleBinding, or a Role and RoleBinding:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-console-dashboard-full-access-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-full-access-binding
subjects:
- kind: Group
  name: eks-console-dashboard-full-access-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-full-access-clusterrole
  apiGroup: rbac.authorization.k8s.io

You can read more at: https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/
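
The ClusterRoleBinding above grants access to the Kubernetes group eks-console-dashboard-full-access-group, so the IAM role must also be mapped to that group in the aws-auth ConfigMap for the binding to take effect. A minimal sketch of the mapRoles entry, assuming the role ARN from the question (edit the ConfigMap from an identity that still has admin access, and take care not to remove the existing node mappings):

# kubectl edit -n kube-system configmap/aws-auth
  mapRoles: |
    - rolearn: arn:aws:iam::012345678910:role/my-cluster-role
      username: my-cluster-role
      groups:
        - eks-console-dashboard-full-access-group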

Update the role in your kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: CERT
    server: SERVER ENDPOINT
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
    user: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
current-context: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:ACCOUNT_ID:cluster/my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws

Add the role details to the config:

      - --role
      - arn:aws:iam::1023456789:role/prod-role-iam-user-EksUserRole-992Y0S0BSVNT
      command: aws
      env:
      - name: AWS_PROFILE
        value: my-prod

Or, using aws-vault:

  - --role-arn
  - arn:aws:iam::1213:role/eks-cluster-admin-role-dfasf
  command: aws-vault
  env: null

CodePudding user response:

There is apparently a mismatch between the IAM identity that created the cluster and the one taken from your kubeconfig file when authenticating to your EKS cluster. You can tell by the RBAC error output.

Quoting the aws eks CLI reference:

--role-arn (string) To assume a role for cluster authentication, specify an IAM role ARN with this option. For example, if you created a cluster while assuming an IAM role, then you must also assume that role to connect to the cluster the first time.

As a probable solution, update your kubeconfig file accordingly:

aws eks update-kubeconfig --region us-east-1 --name my-cluster --role-arn arn:aws:iam::012345678910:role/my-cluster-role
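
After updating the kubeconfig, the effective identity and permissions can be checked with standard commands (output will vary per cluster):

aws sts get-caller-identity
kubectl auth can-i list deployments --all-namespaces
kubectl get deployments --all-namespaces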