Datadog Java agent and autodiscovery


I need to monitor Java Spring Boot containers on Kubernetes.

I'll probably use the Helm-based installation process to deploy the agent on the nodes.

I'll probably use annotations on the pods to avoid managing configuration files.

I saw in the documentation that there is a JAR client that you can add to each pod to monitor the containers.

If I need to monitor a Spring Boot application, do I have to install both the Datadog agent on the nodes and the Datadog agent in the pods to reach Spring Boot, OR will the Datadog agent on the nodes be able to monitor a Spring Boot app running in a pod using only annotations and environment variables?

CodePudding user response:

Datadog comes with a Deployment and a DaemonSet:

  • Cluster Agent (for Kubernetes metrics), deployed as a Deployment
  • Node Agent (for tracing and logs), deployed as a DaemonSet
helm repo add datadog https://helm.datadoghq.com
helm repo update
helm install <RELEASE_NAME> -f values.yaml  --set datadog.apiKey=<DATADOG_API_KEY> datadog/datadog --set targetSystem=<TARGET_SYSTEM>

This chart adds the Datadog Agent to all nodes in your cluster with a DaemonSet. It also optionally deploys the kube-state-metrics chart and uses it as an additional source of metrics about the cluster. A few minutes after installation, Datadog begins to report hosts and metrics.
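As a minimal sketch of the values.yaml referenced above (the key names depend on the chart version, so verify them against the chart's own values.yaml before using them):

# values.yaml - minimal sketch, verify keys against your chart version
datadog:
  apiKey: <DATADOG_API_KEY>    # or pass it with --set datadog.apiKey=...
  site: datadoghq.com          # adjust if your account is on another Datadog site
clusterAgent:
  enabled: true                # Deployment collecting cluster-level metrics
agents:
  enabled: true                # DaemonSet running the node agent on every node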

Logs: for logs and APM you need some extra configuration in your values.yaml:

datadog:
  logs:
    enabled: true
    containerCollectAll: true

data-k8-logs-collection

Once that is done, it's time to add autodiscovery. Again, there is nothing extra to install for autodiscovery, unless you also need APM (profiling).

All you need to add are these pod annotations:

ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.check_names: |
  ["openmetrics"]
ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.init_configs: |
  [{}]
ad.datadoghq.com/CONTAINER_NAME_TO_MONITOR.instances: |
  [
    {
      "prometheus_url": "http://%%host%%:5000/internal/metrics",
      "namespace": "my_springboot_app",
      "metrics": [ "*" ]
    }
  ]

Replace 5000 with the port the container is listening on. Again, this is only required to push Prometheus/OpenMetrics metrics to Datadog (see the sketch below for where the annotations go).
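For context, here is a sketch of where those annotations live in a Deployment. All names, the image, and port 5000 are placeholders; the container name must match the CONTAINER_NAME_TO_MONITOR part of the annotation keys:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-springboot-app              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-springboot-app
  template:
    metadata:
      labels:
        app: my-springboot-app
      annotations:                     # autodiscovery annotations go on the pod template
        ad.datadoghq.com/my-springboot-app.check_names: |
          ["openmetrics"]
        ad.datadoghq.com/my-springboot-app.init_configs: |
          [{}]
        ad.datadoghq.com/my-springboot-app.instances: |
          [{"prometheus_url": "http://%%host%%:5000/internal/metrics",
            "namespace": "my_springboot_app",
            "metrics": ["*"]}]
    spec:
      containers:
        - name: my-springboot-app      # must match the name used in the annotation keys
          image: registry.example.com/my-springboot-app:latest   # placeholder image
          ports:
            - containerPort: 5000      # port the app serves its metrics on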

If you just need logs, there is no need for anything fancy: containerCollectAll: true is enough for log collection.

APM

You need to add the Java agent; add this to the Dockerfile:

RUN wget --no-check-certificate -O /app/dd-java-agent.jar https://dtdg.co/latest-java-tracer

and then update the CMD so the agent can collect traces/APM/profiling data:

java -javaagent:/app/dd-java-agent.jar -Ddd.profiling.enabled=$DD_PROFILING_ENABLED -XX:FlightRecorderOptions=stackdepth=256 -Ddd.logs.injection=$DD_LOGS_INJECTION -Ddd.trace.sample.rate=$DD_TRACE_SAMPLE_RATE -Ddd.service=$DD_SERVICE -Ddd.env=$DD_ENV -J-server -Dhttp.port=5000 -jar sfdc-core.jar

trace_collection_java
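The $DD_* variables referenced in that command have to come from somewhere. One way (a sketch; the values are placeholders) is to set them on the container in the pod spec, including DD_AGENT_HOST pointed at the node IP so the tracer can reach the node agent:

# env section of the app container in the pod spec - a sketch, values are placeholders
env:
  - name: DD_AGENT_HOST              # lets the tracer find the node agent on the same node
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: DD_ENV
    value: "staging"
  - name: DD_SERVICE
    value: "my-springboot-app"
  - name: DD_PROFILING_ENABLED
    value: "true"
  - name: DD_LOGS_INJECTION
    value: "true"
  - name: DD_TRACE_SAMPLE_RATE
    value: "1"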

CodePudding user response:

The Datadog Agent and Cluster Agent on Kubernetes will give you details about the nodes and pods. Documentation: https://docs.datadoghq.com/infrastructure/livecontainers/configuration/?tab=helm

For application-specific metrics, Spring Boot metrics can be exported using the Netflix Spectator API; see the implementation at https://docs.spring.io/spring-metrics/docs/current/public/datadog

If you are using Dropwizard, see also: Spring boot metrics datadog

CodePudding user response:

do I have to install both the Datadog agent on the nodes and the Datadog agent in the pods to reach Spring Boot

In order to get logs and metrics shipped to Datadog, the DaemonSet Datadog Agent pods are sufficient to scrape the Spring Boot pods. With the OpenMetrics integration, for instance, just expose the metrics through a /metrics path.
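One way to expose such an endpoint from Spring Boot (a sketch; it assumes spring-boot-starter-actuator and micrometer-registry-prometheus are on the classpath, which the question does not state) is via application.yml, after which the annotation's prometheus_url would point at /actuator/prometheus:

# application.yml - sketch, assumes Actuator + the Micrometer Prometheus registry
management:
  endpoints:
    web:
      exposure:
        include: "prometheus"    # exposes /actuator/prometheus for the OpenMetrics check to scrape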

In order to get traces, you need to use the Datadog Java tracing library and configure it. You can start by simply setting the DD_TRACE_ENABLED environment variable to true on the app's containers.
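For example, on the Spring Boot container in the pod spec:

env:
  - name: DD_TRACE_ENABLED     # toggles the Java tracer without rebuilding the image
    value: "true"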

Hope this helps
