I'm using this conf file to overwrite the @timestamp field in Elasticsearch, but I automatically get a _dateparsefailure tag:
input {
  jdbc {
    jdbc_driver_library => "C:/path/to/mariadb-java-client.jar"
    statement => "SELECT ${FIELD} as field FROM ${TABLE_NAME}"
    tracking_column => "timestamp"
    tracking_column_type => "timestamp"
  }
}
filter {
  grok {
    match => ["timestamp", "%{TIMESTAMP_ISO8601}"]
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
}
Note that with or without the grok filter I get the same result.
The result:
{
  "@timestamp" => 2022-12-13T09:16:10.365Z,
   "timestamp" => 2022-11-23T10:36:13.000Z,
    "@version" => "1",
        "tags" => [
      [0] "_dateparsefailure"
  ],
        "type" => "mytype",
}
But when I extract the timestamp with this conf:
input {
  *same input*
}
filter {
  grok {
    match => ["timestamp", "%{TIMESTAMP_ISO8601:tmp}"]
    tag_on_failure => ["_grokparsefailure"]
  }
  date {
    match => ["tmp", "ISO8601"]
  }
}
then it gives me the expected result:
{
  "@timestamp" => 2022-11-23T11:16:36.000Z,
    "@version" => "1",
   "timestamp" => 2022-11-23T11:16:36.000Z,
         "tmp" => "2022-11-23T11:16:36.000Z",
}
Can anyone explain why that is, and how I can avoid creating this extra field? Thanks
CodePudding user response:
OK. The first config tries to parse a string, I guess, but timestamp already has the right type, so a copy is enough to save the original @timestamp and then overwrite it:
filter {
  mutate {
    copy => { "@timestamp" => "insertion_timestamp" }
    copy => { "timestamp" => "@timestamp" }
    remove_field => [ "timestamp" ]
  }
}
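Note the order: @timestamp is saved into insertion_timestamp before being overwritten (the copies appear to execute in the order written), and remove_field, like other common filter options, is only applied at the end once the filter succeeds, so dropping timestamp here is safe.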
CodePudding user response:
If the database column type is a timestamp then the jdbc input will automatically convert the field to a LogStash::Timestamp object, not a string. A date filter cannot parse a Timestamp object, and will add a _dateparsefailure tag.
A grok filter calls .to_s to convert everything to a string before matching it, so if you grok the timestamp from the Timestamp object it will be a string that the date filter can parse.
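That also suggests a way to avoid the extra tmp field entirely: stringify the field in place before the date filter runs. A minimal sketch, assuming mutate's convert stringifies the Timestamp via .to_s the same way grok does:

filter {
  # Turn the LogStash::Timestamp object into an ISO8601 string, in place
  mutate {
    convert => { "timestamp" => "string" }
  }
  # Now the date filter can parse it and overwrite @timestamp;
  # remove_field only runs if the parse succeeds
  date {
    match => ["timestamp", "ISO8601"]
    remove_field => ["timestamp"]
  }
}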