Add Kubernetes scrape target to Prometheus instance that is NOT in Kubernetes


I run Prometheus locally, reachable at http://localhost:9090/targets, with

docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus

and want to connect it to the several Kubernetes clusters we have, to see that scraping works, try Grafana dashboards, etc.

Later I'll do the same on a dedicated server that will be used specifically for monitoring. However, all my googling turns up different ways to configure a Prometheus that already runs inside one Kubernetes cluster, and no way to read metrics from an external Kubernetes cluster.

How do I add a Kubernetes scrape target to a Prometheus instance that is NOT in Kubernetes?


I have read "Where Kubernetes metrics come from" and checked that my (first) Kubernetes cluster has the Metrics Server:

kubectl get pods --all-namespaces | grep metrics-server 

There is definitely no sense in adding a Prometheus instance to every Kubernetes cluster. One Prometheus must be able to read metrics from many Kubernetes clusters and from every node within them.

P.S. Some old questions answer this by installing Prometheus in every Kubernetes cluster and then using federation, which is just the opposite of what I am looking for.

P.P.S. It also seems strange to me that Kubernetes and Prometheus, the #1 and #2 projects of the Cloud Native Computing Foundation, don't have a simple "add Kubernetes target to Prometheus" button or an equally simple step.

CodePudding user response:

In my opinion, deploying a Prometheus instance in each cluster is a simpler and cleaner way than arranging external access. The problem is that the targets discovered with kubernetes_sd_configs are cluster-internal DNS names and IP addresses (or at least they are in my AWS EKS cluster). To resolve these, you have to be inside the cluster.

This problem can be worked around with a proxy. The configuration below uses the API-server proxy endpoint to reach the targets. I'm not sure about its performance in large clusters; I guess a dedicated proxy would be a better fit in that case.

External access via API-server proxy

The first thing you need to get this running is the CA certificate of your API server. There are several ways to get it, but taking it from the kubeconfig seems the simplest:

❯ k config view --raw
apiVersion: v1
clusters:
- cluster:                      # you need this ⤋ long value 
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJ...
    server: https://api-server.example.com
  name: default
...

The certificate in the kubeconfig is base64-encoded, so you have to decode it first:

echo LS0tLS1CRUdJTiBDRVJUSUZJ... | base64 -d > CA.crt
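
Since the Prometheus from the question runs in Docker, the CA file (and, later, the service account token) have to be mounted into the container so that the /kube/... paths used in the configuration below actually exist, and the prometheus.yml itself has to replace the image's default one. A minimal sketch, where the local ./kube directory holding CA.crt and the token file is just an example:

# /etc/prometheus/prometheus.yml is the default config path of the prom/prometheus image;
# the ./kube directory contains CA.crt and the token file referenced by the scrape config
docker run --name prometheus -d -p 127.0.0.1:9090:9090 \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml:ro" \
  -v "$(pwd)/kube:/kube:ro" \
  prom/prometheus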

You also need a service account token with the proper permissions; a minimal RBAC sketch is included at the end of this answer. Assuming you already have the token, let's proceed to the Prometheus configuration:

- job_name: 'kubelet-cadvisor'
  scheme: https

  kubernetes_sd_configs:
  - role: node
    api_server: https://api-server.example.com

    # TLS and auth settings to perform service discovery
    authorization:
      credentials_file: /kube/token  # the file with your service account token
    tls_config:
      ca_file: /kube/CA.crt  # the file with the CA certificate you got from kubeconfig

  # The same as above but for actual scrape request.
  # We're going to request API-server to be a proxy so the creds are the same.
  bearer_token_file: /kube/token
  tls_config:
    ca_file: /kube/CA.crt

  relabel_configs:
  # This is just to drop this long __meta_kubernetes_node_label_ prefix
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)

  # By default Prometheus goes to /metrics endpoint.
  # This relabeling changes it to /api/v1/nodes/[kubernetes_io_hostname]/proxy/metrics/cadvisor
  - source_labels: [kubernetes_io_hostname]
    replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
    target_label: __metrics_path__

  # This relabeling defines that Prometheus should connect to the
  # API-server instead of the actual instance. Together with the relabeling
  # from above this will make the scrape request proxied to the node kubelet.
  - replacement: api-server.example.com
    target_label: __address__

The above is tailored for scraping role: node. To make it work with other roles, you have to change the __metrics_path__ label accordingly. This doc can help with constructing the path.
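
As for the service account itself, here is a minimal RBAC sketch (the name prometheus-external and the default namespace are just example values): discovery with role: node needs to list and watch nodes, and the proxied scrape additionally needs get on nodes/proxy.

# ServiceAccount plus cluster-wide read access to nodes and the nodes/proxy subresource
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-external
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-external
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/metrics", "nodes/proxy"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-external
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-external
subjects:
- kind: ServiceAccount
  name: prometheus-external
  namespace: default

On Kubernetes 1.24+ a token for it can then be issued with, for example, kubectl create token prometheus-external --duration=24h > token (the maximum accepted duration depends on the API server configuration).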

CodePudding user response:

If I understand your question correctly, you want to monitor a Kubernetes cluster where Prometheus is not installed, i.e. a remote Kubernetes cluster.

I monitor many different Kubernetes clusters from one Prometheus instance that is installed on a standalone server.

You can do this by generating a token on the Kubernetes side from a service account that has the proper permissions to access the Kubernetes API.

Kubernetes API:

The following details are required to configure the Prometheus scrape job:

  1. Create a service account that has permissions to read and watch the nodes (the job below uses role: node).
  2. Generate a token from the service account (a minimal sketch is shown after the configuration below).
  3. Create the scrape job as follows:
- job_name: kubernetes
  kubernetes_sd_configs:
  - role: node
    api_server: https://kubernetes-cluster-api.com
    # token and TLS settings for service discovery via the API server
    bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    tls_config:
      insecure_skip_verify: true
  # the same token again for the actual scrape requests
  bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  scheme: https
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - separator: ;
    regex: __meta_kubernetes_node_label_(.+)
    replacement: $1
    action: labelmap
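
For step 2 ("generate token"), one option is a long-lived token bound to the service account; a minimal sketch, where the names prometheus and prometheus-token are just examples:

# Manually created Secrets of this type are populated with a token by the
# token controller even on recent Kubernetes versions; on 1.24+ a short-lived
# alternative would be `kubectl create token prometheus`.
apiVersion: v1
kind: Secret
metadata:
  name: prometheus-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: prometheus
type: kubernetes.io/service-account-token

After applying it, the token string to paste into bearer_token can be read with:

kubectl -n default get secret prometheus-token -o jsonpath='{.data.token}' | base64 -d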

I have explained this in more detail in the article below.

https://amjadhussain3751.medium.com/monitor-remote-kubernetes-cluster-using-prometheus-a3781b041745

CodePudding user response:

There are many agents capable of shipping metrics collected in Kubernetes to a remote Prometheus server outside the cluster: for example, Prometheus itself now supports agent mode, there is an exporter from OpenTelemetry, or you can use a managed Prometheus offering, etc.
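
As a minimal sketch of the agent-mode variant (prometheus.example.com is a placeholder for the external server, which has to be started with --web.enable-remote-write-receiver), an in-cluster Prometheus launched with --enable-feature=agent scrapes locally and forwards everything via remote_write:

# config of the in-cluster agent; relabeling details omitted for brevity
scrape_configs:
- job_name: kubernetes-nodes
  kubernetes_sd_configs:
  - role: node          # running inside the cluster, so no api_server/token needed here

remote_write:
- url: https://prometheus.example.com/api/v1/write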
