Write Log Files to a Slow Disk or Send Tomcat Access Logs to Elasticsearch?


  • My service (Tomcat/Java) is running on a Kubernetes cluster (AKS).
  • I would like to write the log files (Tomcat access logs, application logs with logback) to an AzureFile volume.
  • I do not want to write the access logs to stdout, because I do not want to mix the access logs with the application logs.

Question

I expect that all logging is done asynchronously, so that writing to the slow AzureFile volume should not affect performance. Is this correct?
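To be explicit about what "asynchronously" means here: for the application logs I would rely on logback's AsyncAppender, which queues events and writes them from a background worker. A minimal programmatic sketch (the /mnt/azurefile/app.log path is a placeholder; the same setup is more commonly done in logback.xml):

```java
import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;

public class AsyncFileLogging {

    /** Wires a FileAppender behind an AsyncAppender so request threads never wait on disk I/O. */
    public static void configure() {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(ctx);
        encoder.setPattern("%d{ISO8601} [%thread] %-5level %logger - %msg%n");
        encoder.start();

        FileAppender<ILoggingEvent> file = new FileAppender<>();
        file.setContext(ctx);
        file.setName("app-file");
        file.setFile("/mnt/azurefile/app.log"); // placeholder mount path
        file.setEncoder(encoder);
        file.start();

        // The async wrapper queues events; a background worker drains the
        // queue to the (slow) file appender.
        AsyncAppender async = new AsyncAppender();
        async.setContext(ctx);
        async.setName("app-async");
        async.setQueueSize(8192);        // events buffered before callers block
        async.setDiscardingThreshold(0); // 0 = never silently drop events
        async.addAppender(file);
        async.start();

        Logger root = ctx.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.addAppender(async);
    }
}
```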

Update

In the end I want to collect the log files so that I can send all logs to Elasticsearch.

In particular, I need a way to collect the access logs.

CodePudding user response:

Yes, you are right, but it still depends on how you are writing the logs. Even when you write asynchronously, the background writes will take longer if your file system is slow, and if it is NFS-backed there is also the chance of network latency.

I have seen performance issues when attaching NFS and bucket volumes directly to multiple pods.

If writing is slow, the async thread may take longer to complete its job and consume more resources, although this also depends on how the code is written.

Ideally, people tend to store logs in Elasticsearch for fast retrieval and easy management.

People use different stacks depending on their requirements, but most of them are backed by Elasticsearch, for example Graylog and ELK.

For sending logs to these stacks, people often use UDP. I personally prefer GELF over UDP: throw the logs at Graylog and forget about them.
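For illustration, the GELF UDP mechanics fit in a few lines of plain JDK code; a minimal fire-and-forget sketch (the host graylog.example.com and the field values are placeholders, and in practice you would use a ready-made GELF appender for your logging framework):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

/** Minimal fire-and-forget GELF UDP sender, no external dependencies. */
public class GelfUdpSender {
    public static void main(String[] args) throws Exception {
        // Placeholder Graylog input; adjust host/port to your setup.
        InetAddress graylog = InetAddress.getByName("graylog.example.com");
        int port = 12201; // Graylog's default GELF UDP port

        // A GELF 1.1 payload is plain JSON; Graylog accepts it uncompressed
        // as long as the datagram stays under the UDP chunking limit (~8 KB).
        String gelf = "{"
                + "\"version\":\"1.1\","
                + "\"host\":\"my-service\","
                + "\"short_message\":\"GET /health 200\","
                + "\"level\":6,"                       // 6 = informational (syslog levels)
                + "\"_source\":\"tomcat-access-log\""  // custom fields start with '_'
                + "}";

        byte[] payload = gelf.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            // Fire-and-forget: the sender never blocks on Graylog,
            // but delivery is not guaranteed.
            socket.send(new DatagramPacket(payload, payload.length, graylog, port));
        }
    }
}
```

The trade-off of UDP is exactly that: it never blocks the application, at the cost of best-effort delivery.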

CodePudding user response:

If you want to send your access logs to Elasticsearch, you just need to extend AbstractAccessLogValve and implement the log method.

AbstractAccessLogValve already contains the logic to format the messages, so you only need to add the logic that sends the formatted message.
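A minimal sketch of such a valve, assuming Tomcat 8.5+ (where log(CharArrayWriter) is the abstract method) and a plain HTTP indexing endpoint; the host elasticsearch:9200, the index name tomcat-access, and the queue size are placeholders:

```java
import java.io.CharArrayWriter;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.catalina.valves.AbstractAccessLogValve;

/** Access-log valve that ships each formatted log line to Elasticsearch instead of disk. */
public class ElasticsearchAccessLogValve extends AbstractAccessLogValve {

    // Placeholder endpoint: one document per access-log line.
    private static final URI ES_URI =
            URI.create("http://elasticsearch:9200/tomcat-access/_doc");

    private final HttpClient client = HttpClient.newHttpClient();
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);

    public ElasticsearchAccessLogValve() {
        Thread sender = new Thread(this::drainQueue, "es-access-log-sender");
        sender.setDaemon(true);
        sender.start();
    }

    /** Called by AbstractAccessLogValve with the line already formatted per the pattern. */
    @Override
    protected void log(CharArrayWriter message) {
        // offer() never blocks the request thread; lines are dropped
        // if Elasticsearch cannot keep up and the queue fills.
        queue.offer(message.toString());
    }

    private void drainQueue() {
        while (true) {
            try {
                String line = queue.take();
                String json = "{\"message\":" + quoteJson(line) + "}";
                HttpRequest request = HttpRequest.newBuilder(ES_URI)
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(json))
                        .build();
                client.send(request, HttpResponse.BodyHandlers.discarding());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            } catch (Exception e) {
                // Swallow and continue; a failing access log must not
                // break request processing.
            }
        }
    }

    private static String quoteJson(String s) {
        return '"' + s.replace("\\", "\\\\").replace("\"", "\\\"") + '"';
    }
}
```

You register it in server.xml like the standard AccessLogValve, with className pointing at this class and the usual pattern attribute. In a real implementation you would start and stop the sender thread in startInternal()/stopInternal() and batch lines through the Elasticsearch _bulk API, but the structure stays the same.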
