Configure Prometheus to scrape all pods in a cluster


Hey, I'm currently trying to configure the scrape config of a Prometheus agent to gather metrics from all pods in a cluster. All I really care about right now is tracking CPU and memory, but other metrics don't hurt. I can get the Kubernetes resource and Prometheus-related metrics out of the cluster, but I can't get any metrics from a test pod that is running (it's a basic Node.js Express application).

Additionally, I'm wondering whether each pod needs to export metrics to Prometheus for CPU/memory information, or if that should already be covered by the kubelet running on the node?

Any information would be helpful; below is the configuration and some of the debugging I've done so far.

I have the following scrape config specified:

      remote_write:
          - url: http://xxxx.us-east-1.elb.amazonaws.com/

      scrape_configs:
          - job_name: 'kubernetes-pods'

            kubernetes_sd_configs:
                - role: pod
                  api_server: https://kubernetes.default.svc
                  tls_config:
                      insecure_skip_verify: true
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            relabel_configs:
                - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
                  action: replace
                  target_label: __metrics_path__
                  regex: (.+)
                - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
                  action: replace
                  regex: (.+):(?:\d+);(\d+)
                  replacement: ${1}:${2}
                  target_label: __address__
                - action: labelmap
                  regex: __meta_kubernetes_pod_label_(.+)
                - source_labels: [__meta_kubernetes_namespace]
                  action: replace
                  target_label: kubernetes_namespace
                - source_labels: [__meta_kubernetes_pod_name]
                  action: replace
                  target_label: kubernetes_pod_name

          - job_name: 'kubernetes-kubelet'
            scheme: https
            tls_config:
                insecure_skip_verify: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            kubernetes_sd_configs:
                - role: node
                  api_server: https://kubernetes.default.svc
                  tls_config:
                      insecure_skip_verify: true
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            relabel_configs:
                - action: labelmap
                  regex: __meta_kubernetes_node_label_(.+)
                - source_labels: [__meta_kubernetes_node_name]
                  regex: (.+)
                  target_label: __metrics_path__
                  replacement: /api/v1/nodes/${1}/proxy/metrics

          - job_name: 'kubernetes-cadvisor'
            scheme: https
            tls_config:
                insecure_skip_verify: true
            bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            kubernetes_sd_configs:
                - role: node
                  api_server: https://kubernetes.default.svc
                  tls_config:
                      insecure_skip_verify: true
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
            relabel_configs:
                - action: labelmap
                  regex: __meta_kubernetes_node_label_(.+)
                - source_labels: [__meta_kubernetes_node_name]
                  regex: (.+)
                  target_label: __metrics_path__
                  replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
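
If it matters: as far as I understand, the kubernetes-pods job above rewrites the scrape path and port from the prometheus.io/* pod annotations, so a target pod's template would need something along these lines (just a sketch; the path and port values here are placeholders, not necessarily what my Express app uses):

      spec:
        template:
          metadata:
            annotations:
              # placeholders only; use whatever path/port the app actually serves metrics on
              prometheus.io/path: /metrics
              prometheus.io/port: "3000"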

When running kubectl logs against the Prometheus pod, I don't see any errors:

ts=2022-07-12T23:12:49.302Z caller=main.go:491 level=info msg="No time or size retention was set so using the default time retention" duration=15d
ts=2022-07-12T23:12:49.303Z caller=main.go:535 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=HEAD, revision=d7e7b8e04b5ecdc1dd153534ba376a622b72741b)"
ts=2022-07-12T23:12:49.303Z caller=main.go:540 level=info build_context="(go=go1.18.3, user=root@f051ce0d6050, date=20220620-13:21:35)"
ts=2022-07-12T23:12:49.303Z caller=main.go:541 level=info host_details="(Linux 5.4.196-108.356.amzn2.x86_64 #1 SMP Thu May 26 12:49:47 UTC 2022 x86_64 prometheus-5bbc9d5cf9-hrmbr (none))"
ts=2022-07-12T23:12:49.303Z caller=main.go:542 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2022-07-12T23:12:49.303Z caller=main.go:543 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2022-07-12T23:12:49.307Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
ts=2022-07-12T23:12:49.308Z caller=main.go:972 level=info msg="Starting TSDB ..."
ts=2022-07-12T23:12:49.309Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false
ts=2022-07-12T23:12:49.311Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
ts=2022-07-12T23:12:49.311Z caller=head.go:536 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.925µs
ts=2022-07-12T23:12:49.311Z caller=head.go:542 level=info component=tsdb msg="Replaying WAL, this may take a while"
ts=2022-07-12T23:12:49.311Z caller=head.go:613 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
ts=2022-07-12T23:12:49.311Z caller=head.go:619 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=24.899µs wal_replay_duration=267.82µs total_replay_duration=321.491µs
ts=2022-07-12T23:12:49.313Z caller=main.go:993 level=info fs_type=XFS_SUPER_MAGIC
ts=2022-07-12T23:12:49.313Z caller=main.go:996 level=info msg="TSDB started"
ts=2022-07-12T23:12:49.313Z caller=main.go:1177 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
ts=2022-07-12T23:12:49.315Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.elb.amazonaws.com/ msg="Starting WAL watcher" queue=8ffa18
ts=2022-07-12T23:12:49.315Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.elb.amazonaws.com/ msg="Starting scraped metadata watcher"
ts=2022-07-12T23:12:49.316Z caller=main.go:1214 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.904525ms db_storage=866ns remote_storage=1.510606ms web_handler=314ns query_engine=762ns scrape=251.061µs scrape_sd=663.267µs notify=1.042µs notify_sd=2.62µs rules=1.523µs tracing=4.328µs
ts=2022-07-12T23:12:49.317Z caller=main.go:957 level=info msg="Server is ready to receive web requests."
ts=2022-07-12T23:12:49.318Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.elb.amazonaws.com/ msg="Replaying WAL" queue=8ffa18
ts=2022-07-12T23:12:49.318Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..."
ts=2022-07-12T23:12:56.818Z caller=dedupe.go:112 component=remote level=info remote_name=8ffa18 url=http://xxxx.us-east-1.elb.amazonaws.com/ msg="Done replaying WAL" duration=7.500419538s

For the currently running pods (if it's helpful):

❯ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       paigo-agent-transformer-58fc696d66-nz6z2   1/1     Running   0          18h
default       prometheus-5bbc9d5cf9-hrmbr                1/1     Running   0          13m
kube-system   aws-node-97f6p                             1/1     Running   0          5d6h
kube-system   aws-node-lnb4g                             1/1     Running   0          5d6h
kube-system   aws-node-m7dsb                             1/1     Running   0          5d6h
kube-system   coredns-7f5998f4c-25f92                    1/1     Running   0          5d6h
kube-system   coredns-7f5998f4c-jdtbk                    1/1     Running   0          5d6h
kube-system   kube-proxy-2f97k                           1/1     Running   0          5d6h
kube-system   kube-proxy-flgw7                           1/1     Running   0          5d6h
kube-system   kube-proxy-hw2rr                           1/1     Running   0          5d6h
kube-system   metrics-server-64cf6869bd-x4xgb            1/1     Running   0          5h58m

I have also confirmed that data is being sent to the remote endpoint correctly.

Stuff I've read so far: "How to discover pods for prometheus to scrape" and "Prometheus auto discovery K8s".

I think it's more than likely that there's something obvious I missed, and I just don't know enough to debug it.

CodePudding user response:

These metrics typically come from kube-state-metrics, which is included as part of the Prometheus Operator / kube-prometheus-stack Helm chart. Once you have it installed in your cluster, you'll have a pod like this:

prom-mfcloud-kube-state-metrics-7d947c8c5c-4rgz6         1/1     Running   2 (4d21h ago)   4d21h
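
If it isn't in the cluster yet, installing the chart with Helm looks roughly like this (the release and namespace names below are just examples):

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prom-mfcloud prometheus-community/kube-prometheus-stack \
        --namespace monitoring --create-namespace

As for whether each pod has to export its own CPU/memory metrics: it doesn't. Per-container CPU and memory series (for example container_cpu_usage_seconds_total and container_memory_working_set_bytes) come from the kubelet's cAdvisor endpoint, which your kubernetes-cadvisor job already targets; kube-state-metrics adds the cluster-state view (requests, limits, pod status). A pod only needs its own /metrics endpoint if you want application-level metrics out of the Express app.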