A Logstash pipeline is configured with one or more input plugins, filter plugins, and output plugins. Input and output plugins are required; filter plugins are optional. Below is a common usage scenario for Logstash.
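To make the three plugin types concrete, here is a minimal pipeline sketch; the file path, grok pattern, and Elasticsearch host are illustrative placeholders, not values from this article:

```conf
# Read from a file, parse each line, and ship the events to Elasticsearch.
input {
  file {
    path => "/var/log/app/app.log"   # placeholder path
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]      # placeholder host
  }
}
```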
Big log platform setup

Java environment deployment
There are many tutorials on the web, so here we just verify the installation:

java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

Elasticsearch setup

curl -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.5.1.tar.gz
tar zxvf elasticsearch-1.5.1.tar.gz
cd elasticsearch-1.5.1/
./bin/elasticsearch

ES does not need much configuration here; the defaults are basically…
New plugins:
Note: starting from 5.0, the plugins are split out into independent gem packages, so each plugin can be updated on its own without waiting for an overall Logstash release. For the specific management commands, consult the help:

./bin/logstash-plugin --help   (help information)
./bin/logstash-plugin list

In fact, all the plugins are located in t…
After a week with Logstash's documentation, I finally set up a Logstash environment for Ubuntu online. Now I'll share my experience.

About Logstash
This project is still hot: riding on the big tree that is Elasticsearch, Logstash gets a lot of attention, and the project is very active. Logstash is a system for log collection and analysis, with an architecture designed to be flexible enough to meet a variety of needs. Comparable projects include:
Flume
Twitter Zipkin
Storm
These projects are powerful, but too complex for many teams to configure and deploy. Until a system grows large enough, I recommend a lightweight, download-and-run stack such as the Logstash + Elasticsearch + Kibana (LEK) combination. For logs, the most common needs are collection, querying, and display, which correspond to Logstash, Elasticsearch, and Kibana respectively.
date { # same as above }

# Define which field holds the client IP (per the data format defined above)
geoip {
  source => "clientip"
}
}
The same goes for the client's UA: since UA formats vary widely, Logstash can also analyze them automatically and extract the operating system and other related information.
# Define which field holds the client device
useragent {
  source => "device"
  target => "userdevice"
}
You also need to tell Logstash which fields are integer types, for example the lat…
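The truncated sentence above presumably refers to converting numeric fields (such as the geoip latitude/longitude) with the mutate filter. A hedged sketch, assuming illustrative field names that should be adjusted to match your own events:

```conf
filter {
  mutate {
    # Field names below are illustrative, not taken from this article.
    convert => {
      "[geoip][coordinates]" => "float"
      "response_code"        => "integer"
    }
  }
}
```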
I. Introduction to Logstash
Logstash is an open source data collection engine with real-time pipeline capabilities. Logstash can dynamically unify data from different data sources and standardize the data to the destination of your choice.
II. The Logstash processing flow
Logstash processes events in three stages:

input --> filter (optional) --> output

Each stage is handled by a number of plugins, such as file, elasticsearch, redis, and so on.
Each stage can also be configured in a variety of ways; for example, output can go to Elasticsearch, or to stdout for printing in the console.
Thanks to this plugin-based organization, Logstash is easy to scale and customize.
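The three stages above can be sketched as a minimal pipeline (the filter stage is omitted here, since it is optional):

```conf
# Read lines from standard input and print each event to the console.
input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}
```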
Create a new file named logstash.conf and paste in the following:
input {
  file {
    type => "nginx_access"
    path => "D:\nginx\logs\access.log"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.10.105:9200"]
    index => "access-%{+YYYY.MM.dd}"
  }
  stdout { codec => json_lines }
}
Go to the bin folder and execute:

Command 1: logstash.bat agent -f ../config/logstash.conf
Command 2: logstash.bat -f ../config/logstash.conf

Start Logstash; if there is an error, the logstash.b…
1. On the Logstash side
Stop rsyslog on the Logstash machine to free port 514:
[root@node1 config]# systemctl stop rsyslog
[root@node1 config]# systemctl status rsyslog
rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2018-04-26 14:32:34 CST; 1min 58s ago
  Process: 3915 ExecStart…
The previous chapter introduced basic use of Logstash; this article goes deeper and introduces the most commonly used input plugin: file.
This plugin reads from a specified directory or file and feeds the input into the pipeline for processing. It is also a core Logstash plugin that most usage scenarios rely on, so here we explain the meaning and use of each parameter in detail.
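As a preview of those parameters, here is a hedged sketch of a file input using several of its commonly used options; the paths and values are illustrative, not from this article:

```conf
input {
  file {
    path => ["/var/log/nginx/*.log"]   # files or glob patterns to watch (placeholder)
    exclude => "*.gz"                  # patterns to skip
    start_position => "beginning"      # where to begin reading a newly discovered file
    sincedb_path => "/var/lib/logstash/sincedb_nginx"  # where read positions are persisted (placeholder)
  }
}
```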
In addition to access logs, we also process runtime logs, which are mostly written by programs, for example via log4j. The most important difference between a runtime log and an access log is that runtime logs can span multiple lines; that is, several consecutive lines together express one meaning.

In the filter, add the following code:

filter {
  multiline {
  }
}

Once the multi-line events are assembled, it is easy to split them into fields.

Field properties:
For the multiline plugin, there are three important settings: negate, …
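The truncated list above likely refers to the multiline plugin's pattern/negate/what options. A hedged sketch, assuming log lines that start with an ISO-style timestamp (the regex is an assumption about your log format):

```conf
filter {
  multiline {
    # Lines that do NOT start with a timestamp are joined onto the previous event.
    pattern => "^\d{4}-\d{2}-\d{2}"
    negate  => true
    what    => "previous"
  }
}
```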
    # statement_filepath => "config-mysql/test02.sql"
    statement => "SELECT * from my_into_es"
    schedule => "* * * * * *"
    # index type
    type => "my_into_es_type"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    # index name
    index => "my_into_es_index"
    # There is an ID field in the database that needs to be associated; it becomes the ID of the corresponding index document
    document_id => "%{id}"
  }
  stdout { codec => json_lines }
}

Now, let's…
file, and write the following code:

discovery.zen.ping.multicast.enabled: false  # disable multicast; otherwise, if another machine on the LAN has port 9300 open, the service won't start
network.host: 192.168.1.91  # specify the host address; optional, but better to set it, otherwise an HTTP connection error is reported when integrating Kibana (the monitor shows :::9200 instead of 0.0.0.0:9200)
http.cors.allow-origin: "/.*/"
http.cors.enabled: true

This…
Logstash plugins:
Input plugins:
File: reads a stream of events from the specified file. It uses filewatch (a Ruby gem library) to listen for changes to the file; the .sincedb file records the inode, major number, minor number, and pos of each file being monitored.

Below is a simple example of collecting logs:

input {
  file {
    path => ["/var/log/messages"]
    type => "system"
    start_position => "beginning"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}