I transfer logfiles to Elasticsearch with Filebeat. The data is analyzed with Kibana.
Now to my problem: Kibana does not show the timestamp from the logfile; instead, @timestamp contains the time of the transmission.
I want to show the logfile's timestamp in Kibana, but that timestamp gets overwritten.
Where is my mistake? Does anyone have a solution for this?
Thanks for any support!
CodePudding user response:
Based on the question, one potential option is to use Filebeat processors. You could write the initial @timestamp value to another field, such as event.ingested, using the script processor below:
# Script to move the timestamp to the event.ingested field
- script:
    lang: javascript
    id: init_format
    source: >
      function process(event) {
        var fieldTest = event.Get("@timestamp");
        event.Put("event.ingested", fieldTest);
      }
Then the last processor you write could set that saved value back into @timestamp, using the following timestamp processor:
# Setting the timestamp field to the date/time when the event originated, which would be the event.created field
- timestamp:
    field: event.created
    layouts:
      - '2006-01-02T15:04:05Z'
      - '2006-01-02T15:04:05.999Z'
      - '2006-01-02T15:04:05.999-07:00'
    test:
      - '2019-06-22T16:33:51Z'
      - '2019-11-18T04:59:51.123Z'
      - '2020-08-03T07:10:20.123456+02:00'
CodePudding user response:
Here is an example from my logfile and my Filebeat config.
{"@timestamp":"2022-06-23T10:40:25.852 02:00","@version":1,"message":"Could not refresh JMS Connection]","logger_name":"org.springframework.jms.listener.DefaultMessageListenerContainer","level":"ERROR","level_value":40000}
## Filebeat configuration
## https://github.com/elastic/beats/blob/master/deploy/docker/filebeat.docker.yml
#
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    # The Docker autodiscover provider automatically retrieves logs from Docker
    # containers as they start and stop.
    - type: docker
      hints.enabled: true

filebeat.inputs:
  - type: filestream
    id: pls-logs
    paths:
      - /usr/share/filebeat/logs/*.log
    parsers:
      - ndjson:

processors:
  - add_cloud_metadata: ~

output.elasticsearch:
  hosts: ['http://elasticsearch:9200']
  username: elastic
  password:

## HTTP endpoint for health checking
## https://www.elastic.co/guide/en/beats/filebeat/current/http-endpoint.html
#
http.enabled: true
http.host: 0.0.0.0
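Since the input already uses the ndjson parser, one thing worth checking is whether the decoded JSON fields are allowed to overwrite the ones Filebeat adds. As a rough sketch (target, overwrite_keys, and add_error_key are standard filestream ndjson options, but whether this alone preserves the logfile's @timestamp would need testing), the parser block could be filled in like this:

# Sketch: ndjson parser options for the filestream input (assumption, not verified)
parsers:
  - ndjson:
      target: ""            # decode JSON fields at the top level of the event
      overwrite_keys: true  # let decoded keys such as @timestamp replace Filebeat's values
      add_error_key: true   # record JSON decoding problems in error.message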