I am ingesting events from PubSub using Filebeat and sending them to ES for indexing/visualization. I noticed that under a particularly high test load, not all events make it to ES, so I'm trying to debug the pipeline and figure out where the drop is happening.
I am hoping to get some insight into what is going on inside Filebeat by monitoring Filebeat itself and sending the metrics to the same ES cluster (hosted on elastic.io).
So I did:
-- enabled X-Pack monitoring in the Elastic.io cluster as follows:
-- enabled monitoring in filebeat.yaml:
monitoring.enabled: true
monitoring.elasticsearch:
  api_key: ${ES_API_KEY}
with the Elasticsearch output configured as follows:
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  enabled: true
  index: "ibc-parsed-logs"
  parameters.pipeline: "geoip-info"
  hosts: ${ES_HOSTS}
  # Authentication credentials - either API key or username/password.
  api_key: ${ES_API_KEY}
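(Side note on the custom index name: as far as I understand the 7.x docs, overriding index also needs matching setup.template settings, and usually ILM disabled. A rough sketch - the pattern value here is just illustrative, not copied from my real config:)

setup.ilm.enabled: false
setup.template.name: "ibc-parsed-logs"
setup.template.pattern: "ibc-parsed-logs*"   # illustrative pattern, adjust as needed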
According to the Elastic docs, if I use an Elasticsearch output, then the cluster ID and authentication credentials for monitoring will be determined from the output config above...
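(For reference, my understanding is that monitoring can also be pointed at a separate cluster explicitly via monitoring.elasticsearch.hosts. A rough sketch - the host and API key below are placeholders, not my actual values:)

monitoring.enabled: true
monitoring.elasticsearch:
  # Placeholder endpoint/credentials for a dedicated monitoring cluster
  hosts: ["https://my-monitoring-cluster.example.com:9243"]
  api_key: ${MONITORING_ES_API_KEY}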
I also enabled logging of the monitoring metrics:
logging.metrics.enabled: true
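(As far as I know, the interval at which these metric snapshots are logged can also be set explicitly; a small sketch with what I believe is the default period:)

logging.metrics.enabled: true
logging.metrics.period: 30s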
When I run Filebeat with this configuration, I can see that monitoring metrics are indeed collected - the log contains many entries like:
2022-09-30T01:58:49.765Z INFO [monitoring] log/log.go:192 Total metrics {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000},"quota":{"us":0}},"id":"/","stats":{"periods":0,"throttled":{"ns":0,"periods":0}}},"cpuacct":{"id":"/","total":{"ns":1609969280422}},"memory":{"id":"/","mem":{"limit":{"bytes":9223372036854771712},"usage":{"bytes":59994112}}}},"cpu":{"system":{"ticks":950350,"time":{"ms":950351}},"total":{"ticks":1608520,"time":{"ms":1608525},"value":1608520},"user":{"ticks":658170,"time":{"ms":658174}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"2f0fb51b-0dc7-4ea6-97ea-d9f07f7a9dd6","uptime":{"ms":15354077},"version":"7.15.0"},"memstats":{"gc_next":27183328,"memory_alloc":25632752,"memory_sys":77874184,"memory_total":51893040880,"rss":132669440},"runtime":{"goroutines":19}},"filebeat":{"events":{"active":0,"added":3095135,"done":3095135},"harvester":{"closed":0,"open_files":0,"running":0,"skipped":0,"started":0},"input":{"log":{"files":{"renamed":0,"truncated":0}},"netflow":{"flows":0,"packets":{"dropped":0,"received":0}}}},"libbeat":{"config":{"module":{"running":0,"starts":0,"stops":0},"reloads":0,"scans":0},"output":{"events":{"acked":3055775,"active":100,"batches":62013,"dropped":0,"duplicates":39360,"failed":0,"toomany":0,"total":3095235},"read":{"bytes":61600055,"errors":3},"type":"elasticsearch","write":{"bytes":3728037960,"errors":0}},"pipeline":{"clients":0,"events":{"active":0,"dropped":0,"failed":0,"filtered":0,"published":3095135,"retry":350,"total":3095135},"queue":{"acked":3095135,"max_events":4096}}},"registrar":{"states":{"cleanup":0,"current":0,"update":0},"writes":{"fail":0,"success":0,"total":0}},"system":{"cpu":{"cores":8},"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
However, when I go to the ES cluster -> Observability -> Metrics -> Inventory, I only see the message "Looks like you don't have any metrics indices." - no metrics whatsoever, nothing in Kibana, no indices containing any metrics ...
Why are the metrics not being sent to / displayed in ES? Did I miss some other configuration settings?
Thank you! Marina
CodePudding user response:
TL;DR;
I think there are multiple questions in your post:
- How to investigate dropped messages?
- How to access Filebeat monitoring data?
I may have missed it, but posting the stack version you are running is always welcome.