Logstash's multiline codec plugin implements multi-line matching: it merges multiple lines into a single event, and its `what` option specifies whether a matched line is merged with the previous lines or with the following lines.
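The merge logic can be illustrated outside Logstash. The sketch below is a rough awk emulation (not the plugin itself) of `pattern => "^\["`, `negate => true`, `what => "previous"`: any line that does not start with "[" is appended to the event opened by the previous "["-line. The sample log lines are invented for illustration.

```shell
# Emulate multiline(pattern="^\[", negate=true, what="previous") with awk:
# a line starting with "[" begins a new event; all other lines are
# appended (here joined with " | ") to the current event.
printf '%s\n' \
  '[2018-05-29T08:00:03,068][ERROR] request failed' \
  'java.lang.NullPointerException' \
  '        at com.example.Main.run(Main.java:42)' \
  '[2018-05-29T08:00:04,000][INFO ] recovered' \
| awk '/^\[/ { if (buf != "") print buf; buf = $0; next }
       { buf = buf " | " $0 }
       END { if (buf != "") print buf }'
```

The four input lines come out as two events: the stack trace is folded into the ERROR line, and the INFO line stays on its own.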
1. Java Log Collection Test
input {
  stdin {
    codec => multiline {
      pattern => "^\["     # regex: match lines beginning with "["
      negate => true       # invert the match: lines NOT beginning with "[" ...
      what => "previous"   # ... are merged with the previous line
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
2. View the Elasticsearch log; each entry begins with "["
# cat /var/log/elasticsearch/cluster.log
[2018-05-29T08:00:03,068][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [systemlog-2018.05.29] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-05-29T08:00:03,192][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [systemlog-2018.05.29/DCO-zNOHQL2sgE4lS_Se7g] create_mapping [system]
[2018-05-29T11:29:31,145][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [securelog-2018.05.29] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-05-29T11:29:31,225][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [securelog-2018.05.29/ABd4qrCATYq3YLYUqXe3uA] create_mapping [secure]
3. Configure Logstash
# vim /etc/logstash/conf.d/java.conf
input {
  file {
    path => "/var/log/elasticsearch/cluster.log"
    type => "elk-java-log"
    start_position => "beginning"
    stat_interval => "2"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "elk-java-log" {
    elasticsearch {
      hosts => ["192.168.1.31:9200"]
      index => "elk-java-log-%{+YYYY.MM.dd}"
    }
  }
}
4. Start Logstash
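One way to carry out this step, assuming the standard package install paths (`/usr/share/logstash` and `/etc/logstash` are assumptions based on the official RPM/DEB layout), is to validate the config and then start the service:

```shell
# Validate the config before starting (binary/settings paths assume
# the standard package install layout)
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f /etc/logstash/conf.d/java.conf

# Start the service and check its status
systemctl start logstash
systemctl status logstash
```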
5. View with the head Plugin
6. Add the Log Index in Kibana
Summary: with the multiline codec, Logstash collects Java logs and merges multi-line entries such as stack traces into a single event.