Pod not able to read ConfigMap despite Role and RoleBinding being in place

I would like to permit a Kubernetes pod in namespace my-namespace to access configmap/config in the same namespace. For this purpose, I have defined the following Role and RoleBinding:

apiVersion: v1
kind: List
items:
- kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["config"]
    verbs: ["get"] 
- kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  subjects:
  - kind: ServiceAccount
    name: default
    namespace: my-namespace
  roleRef:
    kind: Role
    name: config
    apiGroup: rbac.authorization.k8s.io

Yet still, the pod runs into the following error:

configmaps \"config\" is forbidden: User \"system:serviceaccount:my-namespace:default\" 
cannot get resource \"configmaps\" in API group \"\" in the namespace \"my-namespace\"

What am I missing? I suspect it's something simple that a second pair of eyes will spot immediately.

UPDATE: Here is the relevant fragment of my client code, which uses client-go:

cfg, err := rest.InClusterConfig()
if err != nil {
        logger.Fatalf("cannot obtain Kubernetes config: %v", err)
}
k8sClient, err := k8s.NewForConfig(cfg)
if err != nil {
        logger.Fatalf("cannot create Clientset")
}       
configMapClient := k8sClient.CoreV1().ConfigMaps(Namespace)

configMap, err := configMapClient.Get(ctx, "config", metav1.GetOptions{})
if err != nil {
        logger.Fatalf("cannot obtain configmap: %v", err) // error occurs here
}

CodePudding user response:

I don't see anything in particular wrong with your Role or RoleBinding; in fact, when I deploy them into my environment they work as intended. You haven't provided a complete reproducer in your question, so here's how I'm testing things out:

  • I started by creating a namespace my-namespace (the exact command is shown after this list)

  • I have the following in kustomization.yaml:

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    namespace: my-namespace
    
    commonLabels:
      app: rbactest
    
    resources:
    - rbac.yaml
    - deployment.yaml
    
    generatorOptions:
      disableNameSuffixHash: true
    
    configMapGenerator:
      - name: config
        literals:
          - foo=bar
          - this=that
    
  • In rbac.yaml I have the Role and RoleBinding from your question (without modification).

  • In deployment.yaml I have the following (note that it does not set serviceAccountName, so the Pod runs as the default service account named in your RoleBinding):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cli
    spec:
      replicas: 1
      template:
        spec:
          containers:
            - name: cli
              image: quay.io/openshift/origin-cli
              command:
                - sleep
                - inf
    

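The namespace from the first step was created with:

kubectl create namespace my-namespace
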
With this in place, I deploy everything by running:

kubectl apply -k .

And then once the Pod is up and running, this works:

$ kubectl exec -n my-namespace deploy/cli -- kubectl get cm config
NAME     DATA   AGE
config   2      3m50s

Attempts to access other ConfigMaps fail as expected, since the Role restricts access to the ConfigMap named config:

$ kubectl exec -n my-namespace deploy/cli -- kubectl get cm foo
Error from server (Forbidden): configmaps "foo" is forbidden: User "system:serviceaccount:my-namespace:default" cannot get resource "configmaps" in API group "" in the namespace "my-namespace"
command terminated with exit code 1

If you're seeing different behavior, it would be interesting to figure out where your process differs from what I've done.
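If it does differ, a couple of quick checks can narrow things down (these assume the labels and names from the kustomization above): first, confirm which service account the Pod is actually running as, and second, ask the API server directly whether that account can get the ConfigMap:

# Should print "default" (the Deployment doesn't set serviceAccountName)
kubectl get pods -n my-namespace -l app=rbactest \
  -o jsonpath='{.items[0].spec.serviceAccountName}'

# Should print "yes" if the Role and RoleBinding are effective
kubectl auth can-i get configmap/config -n my-namespace \
  --as=system:serviceaccount:my-namespace:default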


Your Go code looks fine also; I'm able to run this in the "cli" container:

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    namespace := "my-namespace"

    configMapClient := clientset.CoreV1().ConfigMaps(namespace)

    configMap, err := configMapClient.Get(context.TODO(), "config", metav1.GetOptions{})
    if err != nil {
        log.Fatalf("cannot obtain configmap: %v", err)
    }

    fmt.Printf("% v\n", configMap)
}

If I compile the above, kubectl cp the binary into the container, and run it (a sketch of those steps appears after the output), I get:

&ConfigMap{ObjectMeta:{config  my-namespace  2ef6f031-7870-41f1-b091-49ab360b98da 2926 0 2022-10-15 03:22:34  0000 UTC <nil> <nil> map[app:rbactest] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","data":{"foo":"bar","this":"that"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"rbactest"},"name":"config","namespace":"my-namespace"}}
] [] [] [{kubectl-client-side-apply Update v1 2022-10-15 03:22:34  0000 UTC FieldsV1 {"f:data":{".":{},"f:foo":{},"f:this":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:app":{}}}} }]},Data:map[string]string{foo: bar,this: that,},BinaryData:map[string][]byte{},Immutable:nil,}
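
For completeness, the compile-and-copy step is roughly the following (a sketch; the binary name cmtest is arbitrary, and the label selector comes from the kustomization above):

# Build a static Linux binary so it runs inside the container
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o cmtest .

# Copy the binary into the running Pod and execute it
POD=$(kubectl get pods -n my-namespace -l app=rbactest -o jsonpath='{.items[0].metadata.name}')
kubectl cp cmtest my-namespace/$POD:/tmp/cmtest
kubectl exec -n my-namespace $POD -- /tmp/cmtest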