Build an ELK (Elasticsearch + Logstash + Kibana) Log Analysis System (15): Splitting the Logstash Configuration Across Multiple Files

Source: Internet
Author: User
Tags: kibana, logstash

Summary
When we write Logstash configuration files, reading many input files with many match rules can swell a single configuration file to hundreds or even thousands of lines, making it hard to read and modify. In that case, we can put the input, filter, and output sections into separate configuration files, or even split the inputs, filters, and outputs themselves across several files.
Later on, when we need to add, delete, or change something, it is then much easier to find and maintain.

1. How Logstash reads multiple configuration files

We know that when starting Logstash, we can load a configuration file with -f your_path_to_config_file. If we need to load several configuration files, we can simply pass a directory instead: -f your_path_to_config_directory. To put it simply, -f accepts a directory.
Note: the directory path must not have a * appended, otherwise only one file will be read. (When specifying log files to read, however, * does work as a wildcard: for example, sys.log* matches every log file whose name starts with sys.log, such as sys.log1 and sys.log2.)

Examples are as follows:

For example, suppose the /home/husen/config/ directory contains these 5 files:
// in1.conf, in2.conf, filter1.conf, filter2.conf, out.conf

// We start Logstash with ./logstash-5.5.1/bin/logstash -f /home/husen/config
// Logstash automatically loads all 5 configuration files and merges them into one overall configuration.
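Conceptually, the merge behaves as if the five files had been concatenated (in filename order) into one configuration. The sketch below is only illustrative; the section bodies are elided:

```
# Effective single configuration after Logstash merges the directory
# (in1.conf, in2.conf, filter1.conf, filter2.conf, out.conf concatenated):
input {
    # ... inputs from in1.conf and in2.conf ...
}
filter {
    # ... filters from filter1.conf and filter2.conf ...
}
output {
    # ... outputs from out.conf ...
}
```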

2. Are the input, filter, and output sections in different Logstash configuration files independent of each other?

The answer: NO.

For example:

## in1.conf contents:
input {
    file {
        path => [
            "/home/husen/log/sys.log"
        ]
    }
}

## in2.conf contents:
input {
    file {
        path => [
            "/home/husen/log/error.log"
        ]
    }
}

## out1.conf contents:
output {
    elasticsearch {
        action => "index"
        hosts  => "localhost:9200"
        index  => "from_sys_log"
        codec  => "json"
    }
}

## out2.conf contents:
output {
    elasticsearch {
        action => "index"
        hosts  => "localhost:9200"
        index  => "from_error_log"
        codec  => "json"
    }
}
The intent of these configuration files is:
/// index the sys.log read by in1.conf into from_sys_log
/// index the error.log read by in2.conf into from_error_log

// logstash-5.5.1/bin/logstash -f /home/husen/config

// After starting, you will find that the log read by in1.conf is output twice, and the log read by in2.conf is also output twice.

// Conclusion: when Logstash reads multiple configuration files, it simply merges them all into one.
// If you want them to be independent of each other, you need to add a distinguishing field and then branch on it.
// For example, when reading logs of the same format from different servers, the filter can be shared,
// but separate indexes need to be created in the output to keep the sources distinguishable.

3. Recommended way to organize multiple Logstash configuration files

Suppose we want some parts of the configuration to be separate while sharing others. For instance: logs of the same format coming from different servers need the same filter, but should be written to different indexes. How do we do that?
It is recommended to use one of the two special fields, tags or type: when reading the files, add an identifier to tags or set a type value.

Examples are as follows:

## in1.conf contents:
input {
    file {
        path => [
            "/home/husen/log/sys.log"
        ]
        type => "from_sys"
        # tags => ["from_sys"]
    }
}

## in2.conf contents:
input {
    file {
        path => [
            "/home/husen/log/error.log"
        ]
        type => "from_error"
        # tags => ["from_error"]
    }
}

## out1.conf contents:
output {
    if [type] == "from_sys" {
    # if "from_sys" in [tags]
        elasticsearch {
            action => "index"
            hosts  => "localhost:9200"
            index  => "from_sys_log"
            codec  => "json"
        }
    }
}

## out2.conf contents:
output {
    if [type] == "from_error" {
    # if "from_error" in [tags]
        elasticsearch {
            action => "index"
            hosts  => "localhost:9200"
            index  => "from_error_log"
            codec  => "json"
        }
    }
}

In particular, if different types of logs need different grok parsing in the filter stage, you can use the same kind of conditional to branch there as well.
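For instance, a shared filter file could branch on the same type field. The sketch below follows that idea; the grok patterns themselves are illustrative assumptions, not taken from the original article:

```
## filter1.conf (hypothetical) contents:
filter {
    if [type] == "from_sys" {
        grok {
            # assumed pattern for a syslog-style line
            match => { "message" => "%{SYSLOGLINE}" }
        }
    } else if [type] == "from_error" {
        grok {
            # assumed pattern for lines like "2017-08-01T12:00:00 ERROR disk full"
            match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
        }
    }
}
```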

And that's it, done.
