1. Logstash concepts and characteristics.
Concept: Logstash is a tool for collecting (input), processing (filter), and transmitting (output) log data.
Characteristics:
- Centralized processing of all types of data
- Normalization of data in different schemas and formats
- Quickly extended to support custom log formats
- Plugins for custom data sources are easy to add
2. Logstash installation and configuration.
①. Download and install
[root@node1 ~]# wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.4-1.noarch.rpm
[root@node1 ~]# rpm -ivh logstash-2.3.4-1.noarch.rpm
②. Quick test: start Logstash and type "hello,xkops".
[root@node1 ~]# /opt/logstash/bin/logstash -e 'input{ stdin{} } output{ stdout{} }'
* Hint: if "hello,xkops" is echoed back, Logstash started successfully.
③. Start as a service.
[root@node1 ~]# service logstash start
Summary: Logstash has three startup modes: -e starts from a configuration string, -f starts from a configuration file, and starting as a service.
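For reference, a minimal sketch of the -f form, which is used throughout the instances below (the configuration file path here is only an example):
[root@node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf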
3. Logstash configuration syntax in detail.
A Logstash configuration file contains three sections: input{}, filter{}, output{}.
Each {} defines a section in which one or more plugins can be declared; data is collected, processed, and output through these plugins.
Value types (a sketch using several of them follows this list):
Boolean: ssl_enable => true
Bytes: bytes => "1MiB"
String: name => "xkops"
Number: port => 22
Array: match => ["datetime", "UNIX"]
Hash: options => { key1 => "value1", key2 => "value2" }
Codec: codec => "json"
Path: file_path => "/tmp/filename"
Comments: #
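A minimal sketch showing how these value types look inside a plugin block; the tcp input is used only for illustration and all values are examples:
input{
  tcp{
    host => "0.0.0.0"                   # string
    port => 5000                        # number
    ssl_enable => false                 # boolean
    tags => ["test", "tcp"]             # array
    add_field => { "env" => "demo" }    # hash
    codec => "json"                     # codec
  }
}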
Conditionals (a sketch follows this list):
Equal: ==
Not equal: !=
Less than: <
Greater than: >
Less than or equal: <=
Greater than or equal: >=
Regex match: =~
Regex non-match: !~
Contains: in
Does not contain: not in
Logical and: and
Logical or: or
Logical nand: nand
Logical xor: xor
Grouped expression: ()
Negated expression: !()
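A minimal sketch of conditionals in an output section, assuming events carry a type field such as the ones set in the instances below (field names, values, and paths are examples only):
output{
  if [type] == "filelog" and [message] =~ /error/ {
    file{ path => "/tmp/error-events.log" }
  } else if [type] in ["tcplog", "udplog"] {
    stdout{ codec => "rubydebug" }
  } else {
    stdout{}
  }
}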
4. Commonly used Logstash plugins.
Types:
Input plugins: plugins defined in the input section.
Codec plugins: handle data formats such as plain, json, json_lines; they can be defined in both the input and output sections.
Filter plugins: plugins defined in the filter section.
Output plugins: plugins defined in the output section.
Plugin repositories: https://github.com/logstash-plugins
---------------------- Input plugins -------------------------------
Commonly used input plugins: file, tcp, udp, syslog, beats, and so on.
①. file plugin:
file plugin fields (a sketch using several of the optional fields follows this list):
codec => # Optional; defaults to "plain", other codecs can be set.
discover_interval => # Optional; how often Logstash checks for new files under path; default 15s.
exclude => # Optional; files under path that should not be watched.
sincedb_path => # Optional; data file recording the watched files and their read positions; defaults to ~/.sincedb_xxxx.
sincedb_write_interval => # Optional; how often Logstash writes the sincedb file; default 15s.
stat_interval => # Optional; how often Logstash checks the watched files for changes; default 1s.
start_position => # Optional; where Logstash starts reading a file; the default is "end" (tail). For an initial import of existing content, set it to "beginning".
path => # Required; path(s) of the files to read; multiple paths can be defined.
tags => # Optional; tags added to or removed from events by specific plugins during processing.
type => # Optional; custom event type, e.g. nginxlog.
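Before the instance, a sketch that exercises several of the optional fields above (the log path, sincedb location, and intervals are examples only):
input{
  file{
    path => ["/var/log/nginx/*.log"]                    # required: files to read
    exclude => ["*.gz"]                                 # skip rotated archives
    start_position => "beginning"                       # import existing content
    sincedb_path => "/var/lib/logstash/.sincedb_nginx"  # where read positions are stored
    stat_interval => 1                                  # check watched files every 1s
    discover_interval => 15                             # look for new files every 15s
    type => "nginxlog"
  }
}
output{ stdout{} }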
Instance:
[root@node1 conf.d]# cat /tmp/file{1,2}.log
this is test file-plugin in file1.log
this is test file-plugin in file2.log
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat file.conf
input{ file{ start_position => "beginning" path => ["/tmp/file1.log", "/tmp/file2.log"] type => "filelog" } }output{ stdout{} }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f file.conf -t
Configuration OK
[root@node1 conf.d]# /opt/logstash/bin/logstash -f file.conf
Settings: Default pipeline workers: 1
Pipeline main started
2016-07-16T02:50:07.410Z node1 this is test file-plugin in file1.log
2016-07-16T02:50:07.428Z node1 this is test file-plugin in file2.log
* Hint: if the file contents are printed as above, the file plugin is working properly.
②. tcp plugin:
tcp plugin fields:
add_field => # Optional; default {}.
codec => # Optional; default "plain".
data_timeout => # Optional; default -1.
host => # Optional; default "0.0.0.0".
mode => # Optional; one of ["server", "client"]; default "server".
port => # Required; port to listen on.
ssl_cacert => # Optional; path to the CA certificate.
ssl_cert => # Optional; path to the certificate.
ssl_enable => # Optional; default false.
ssl_key => # Optional; path to the key file.
ssl_key_passphrase => # Optional; default nil.
ssl_verify => # Optional; default false.
tags => # Optional.
type => # Optional.
Instance:
[root@node1 conf.d]# cat /tmp/tcp.log
this is test tcp-plugin in tcp.log
send tcplog data
output tcplog data
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat tcp.conf
input{ tcp{ host => "127.0.0.1" port => 8888 type => "tcplog" } }output{ stdout{} }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f tcp.conf
Now open another terminal and use nc to push the data to TCP port 8888.
[root@node1 conf.d]# nc 127.0.0.1 8888 < /tmp/tcp.log
* Hint: if the data appears on the previous terminal, the tcp plugin is working properly.
③. udp plugin:
udp plugin fields:
add_field => # Optional; default {}.
host => # Optional; default "0.0.0.0".
queue_size => # Default 2000.
tags => # Optional.
type => # Optional.
workers => # Default 2.
Instance:
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat udp.conf
input{ udp{ host => "127.0.0.1" port => 9999 } }output{ stdout{} }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f udp.conf
Open another terminal, run the following script, and enter the content "hello,udplog".
[root@node1 conf.d]# cat /tmp/udpclient.py
#!/usr/bin/env python
import socket
host = "127.0.0.1"
port = 9999
file_input = raw_input("Please input UDP log: ")
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(file_input, (host, port))
* Hint: if the previous terminal receives the log, the udp plugin is working correctly.
④. syslog plugin:
Instance:
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat syslog.conf
input{ syslog{ host => "127.0.0.1" port => 518 type => "syslog" } }output{ stdout{} }
[root@node1 conf.d]# echo '*.* @@127.0.0.1:518' >> /etc/rsyslog.conf
[root@node1 conf.d]# /etc/init.d/rsyslog restart
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]
[root@node1 conf.d]# /opt/logstash/bin/logstash -f syslog.conf
Write a log entry to syslog with the logger command:
[root@node1 conf.d]# logger
* Tip: type some content here and check whether the previous terminal prints it; if it does, the syslog plugin is working properly.
-------------------------- Codec plugins ------------------------------------
Commonly used codec plugins: plain, json, json_lines, rubydebug, multiline, and so on.
①. plain plugin:
Instance:
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat plain.conf
input{ stdin{ codec => "plain" } }output{ stdout{} }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f plain.conf
Enter some text and check the output.
②. json plugin:
Instance:
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat json.conf
input{ stdin{} }output{ stdout{ codec => "json" } }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f json.conf
Enter some text and check the output.
③. json_lines plugin (used when the JSON text is too long):
Instance:
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat jsonlines.conf
input{ tcp{ host => "127.0.0.1" port => 8888 codec => "json_lines" } }output{ stdout{} }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f jsonlines.conf
Start a new terminal and run the following commands.
[root@node1 conf.d]# cat /tmp/jsonlines.txt
You run a price alerting platform which allows price-savvy customers to specify a rule like "I am interested in buying a specific electronic gadget and I want to be notified if the price of the gadget falls below $X from any vendor within the next month". In this case you can scrape vendor prices, push them into Elasticsearch and use its reverse-search (Percolator) capability to match price movements against customer queries and eventually push the alerts out to the customer once matches are found.
[root@node1 conf.d]# nc 127.0.0.1 8888 < /tmp/jsonlines.txt
* Tip: watch the previous terminal; if the text is output normally, the json_lines plugin is working properly.
④. rubydebug plugin:
Instance:
[root@node1 conf.d]# cat rubydebug.conf
input{ stdin{ codec => "json" } }output{ stdout{ codec => "rubydebug" } }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f rubydebug.conf
Enter a JSON string and check the output.
JSON string: {"name": "xkops", "age": "25"}
⑤. multiline plugin (for handling multi-line error logs):
multiline plugin fields:
charset => # Character encoding; optional.
max_bytes => # bytes type; maximum number of bytes to accumulate; optional.
max_lines => # number type; maximum number of lines to accumulate, default 500; optional.
multiline_tag => # string type; tag added to multiline events, default "multiline"; optional.
pattern => # string type; regular expression to match; required.
patterns_dir => # array type; multiple pattern files can be set; optional.
negate => # boolean type; forward or inverted matching, default false; optional.
what => # Whether unmatched content is merged with the previous line or the next line; one of "previous" or "next"; required.
Error log:
[16-07-2016 22:54:01] PHP Warning: Unknown exception in /xxx/test/index.php:99
111111111111111111
222222222222222222
[16-07-2016 23:19:43] PHP Warning: Unknown exception in /xxx/test/index.php:93
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat codecmultilines.conf
input{ stdin{ codec => multiline{ pattern => "^\[" negate => true what => "previous" } } }output{ stdout{} }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f codecmultilines.conf
* Tip: paste the error log above and check the output.
--------------------- Filter plugins ---------------------------------------------
Commonly used filter plugins: json, grok, and so on.
①. json plugin:
add_field => # hash; optional; default {}.
add_tag => # array; optional; default [].
remove_field => # array; optional; default [].
remove_tag => # array; optional; default [].
source => # string; required.
target => # string; optional.
Instance:
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat filterjson.conf
input{ stdin{} }
filter{
  json{
    source => "message"
    #target => "content"
  }
}
output{ stdout{ codec => "rubydebug" } }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f filterjson.conf
Enter the following two strings and compare the output.
{"name": "xkops", "age": "25"}
name xkops
②. grok plugin: parses arbitrary unstructured log data.
Grok ships with a rich set of patterns, which can be viewed with:
cat /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/grok-patterns
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
Instance:
Sample input data:
20.3.1.3 GET /xkops/index.html 8838 1.323
[root@node1 conf.d]# pwd
/etc/logstash/conf.d
[root@node1 conf.d]# cat filtergrok.conf
input{ stdin{} }filter{ grok{ match => ["message", "%{IP:ip} %{WORD:method} %{URIPATH:uri} %{NUMBER:bytes} %{NUMBER:duration}"] } }output{ stdout{ codec => "rubydebug" } }
[root@node1 conf.d]# /opt/logstash/bin/logstash -f filtergrok.conf
* Hint: enter the sample data above and check the output.
* Tip: an online Grok debugger is available at https://grokdebug.herokuapp.com/
③. kv plugin: parses key-value pair data (a sketch follows below).
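The kv plugin has no instance above, so here is a minimal sketch, assuming input lines of the form key1=value1&key2=value2 (the separators are examples only):
filter{
  kv{
    field_split => "&"    # separator between key-value pairs
    value_split => "="    # separator between a key and its value
  }
}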
-------------------- Output plugins -----------------------------
Commonly used output plugins (a sketch of an elasticsearch output follows this list):
①. file plugin
②. tcp/udp plugins
③. redis/kafka plugins
④. elasticsearch plugin
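A minimal sketch of an elasticsearch output, assuming an Elasticsearch node is reachable at 127.0.0.1:9200 (the hosts and index values are examples only):
output{
  elasticsearch{
    hosts => ["127.0.0.1:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
  }
}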
Appendix:
Redis input configuration:
input{ redis{ host => "redis-server" port => 6379 data_type => "list" key => "lb" codec => "json" } }
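For the shipping side, a matching redis output sketch under the same assumptions (the host, key, and data_type mirror the input above):
output{
  redis{
    host => "redis-server"
    port => 6379
    data_type => "list"
    key => "lb"
  }
}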
Next: Logstash in practice.