Write in front: problems encountered while processing MySQL slow query logs with ELK (Logstash):
1. The test database had no slow log entries, so no log information arrived, which made the ip:9200/_plugin/head/ interface behave oddly (log data appeared suddenly, then disappeared after the index was deleted).
2. Problems with the log-processing script.
3. The current single-node configuration file is /usr/local/logstash-2.3.0/config/slowlog.conf.
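For orientation, a minimal sketch of what such a slowlog.conf might look like. The log path and the "# Time:" entry delimiter are assumptions, not taken from the original; adjust both to your MySQL setup.

```conf
input {
  file {
    path => "/var/log/mysql/mysql-slow.log"   # hypothetical path, adjust to your server
    start_position => "beginning"
    # MySQL slow-log entries span several lines; join every line that does
    # not start with "# Time:" onto the previous event
    codec => multiline {
      pattern => "^# Time:"
      negate => true
      what => "previous"
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]               # assumption: local single-node ES
    index => "mysql-slowlog-%{+YYYY.MM.dd}"
  }
}
```

With an empty slow log this pipeline emits nothing, which matches problem 1 above: no events means no index until data arrives.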
A template is a plan for how an index is stored internally; controlling storage and analysis sensibly by setting the mapping is an important part of cluster optimization and improving performance. You can view an index's mapping with curl -XGET 'http://localhost:9200/twitter/_mapping/tweet'.
There are several ways to set a template. The simplest is to POST it, in the same way you store a document. The longer-term approach is to write a JSON file under the configuration path /etc/
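A third route, not mentioned above but worth noting, is to let Logstash install the template itself: the elasticsearch output plugin has template, template_name, and template_overwrite options. A sketch, with an illustrative file path:

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # point at a local template file; the path and name here are made up
    template => "/usr/local/logstash-2.3.0/config/my-template.json"
    template_name => "logstash"
    template_overwrite => true
  }
}
```

This keeps the template under version control next to the pipeline config instead of relying on a manual POST.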
Node.js: run npm install to set up the environment.
Logstash: log analysis and graphical display; a small search engine with graphical display.
Tools developed in Ruby, packaged as a jar for the Java environment.
Logstash analysis: reads logs in real time, from newest to oldest.
Elasticsearch: storage.
Kibana: web page.
java -jar logstash-1.3.2-fla
Logstash is a member of the ELK stack, and the Redis plugin is a handy tool introduced in the Logstash book. My earlier deployments were smaller clusters that did not involve a Redis middle layer, so I was not very familiar with its configuration, and when I later used it I found it a bit of a pitfall. On the first attempt it simply would not connect and kept reporting "connection refused", but there was no problem with
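For reference, a minimal shipper/indexer pair over Redis might look like the sketch below. The host and key are made up; note that a common cause of the "connection refused" error described above is Redis binding only to 127.0.0.1, so check the bind setting in redis.conf on the Redis side.

```conf
# shipper side: push events into a Redis list
output {
  redis {
    host => "10.0.0.1"      # hypothetical Redis host; make sure redis.conf
    data_type => "list"     # binds to this address, not just 127.0.0.1
    key => "logstash"
  }
}

# indexer side: pop events off the same list
input {
  redis {
    host => "10.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
```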
the "from getting started to proficient" guide. For more information, see here.
ElasticSearch latest version 2.2.0 release and download
Full record of installation and deployment of ElasticSearch on Linux
Elasticsearch installation and usage tutorial
ElasticSearch configuration file Translation
"," Ignore_above ":" Doc_values ": true} nbsp NBsp NBSP,} } } }], NB Sp "Properties": { "@version": {"type": "string", "index": "Not_analy Zed "}, " GeoIP "NBSP;: { " type ":" Object ", N Bsp
"dynamic": true, "path": "Full", "Properties": { ' L
Ocation ": {" type ":" Geo_point "} } } } } }}
For example, if you have a field whose content is an IP address and you do not want it auto-detected as a string type, you can declare its type explicitly in the mapping.
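A sketch of such a declaration as a mapping fragment; the field name clientip is illustrative, not from the original:

```json
{
  "properties": {
    "clientip": { "type": "ip" }
  }
}
```

With this in place, Elasticsearch stores the field as a numeric IP value, which enables range queries like clientip:[10.0.0.0 TO 10.0.0.255] instead of plain string matching.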
Logstash analysis: httpd_log (httpd or nginx log format)
Logstash ships with two built-in grok patterns compatible with httpd: common and combined.
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
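To apply these patterns, a typical pipeline looks like the sketch below; the access-log path is an assumption:

```conf
input {
  file {
    path => "/var/log/httpd/access_log"   # hypothetical httpd access-log path
  }
}
filter {
  grok {
    # split each line into clientip, verb, request, response, bytes, etc.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  stdout { codec => rubydebug }
}
```

For nginx logs in the default format the same COMBINEDAPACHELOG pattern usually works, since nginx's stock log format mirrors Apache's combined format.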
Logstash has a simple plugin for generating events in bulk: generator. For details, see the official site: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-generator.html
How to use it: change the config file to
input {
  generator {
    lines => [ "line1", "line 2", "line3" ]
    count => 3
  }
}
# The output section below can be replaced with other output plugins, such as elasticsearch, redis, or mongo.
output { stdout { codec => d
I wanted to log from a log4j process through to Logstash, and have the logging stored in Elasticsearch. This can be done using the code at https://github.com/logstash/log4j-jsonevent-layout
To keep things easy for my test, I put the source code for net.logstash.log4j.JSONEventLayoutV1 and net.logstash.log4j.data.HostData into my source tree.
I then added json-smart-1.1.1.jar to the classpath (from https://code.goo
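With that layout on the classpath, the log4j side is typically wired up in log4j.properties roughly as below; the appender name and file path are illustrative assumptions, not taken from the original:

```properties
# send all INFO+ logging through a file appender that emits Logstash-style JSON
log4j.rootLogger=INFO, json
log4j.appender.json=org.apache.log4j.FileAppender
log4j.appender.json.File=/tmp/app.json.log
log4j.appender.json.layout=net.logstash.log4j.JSONEventLayoutV1
```

Logstash can then read /tmp/app.json.log with a file input and a json codec, so no grok parsing is needed.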
it to @timestamp with the date filter. Reference: https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match
# date {
#   match => ["LogTime", "dd/MMM/yyyy:HH:mm:ss Z"]
# }
} else if [type] in ["tbg_qas", "mbg_pre"] {   # if ... else if
} else {
  drop {}   # discard the event
}
}
output {
  stdout { codec => rubydebug }   # print straight to the console, handy for debugging
  # output to Redis
  redis {
    host => "10.120.20.208"
    data_type => "list"
    key =>
  }
  if [...] =~ /error/ {
    file { path => "/diskb/bi_error_log/bi_error.log" }
  }
  elasticsearch {
    hosts => ["10.130.2.53:9200", "10.130.2.46:9200", "10.130.2.54:9200"]
    flush_size => 50000
    workers => 5
    index => "logstash-bi-tomcat-log"
  }
}
By starting Logstash with this conf file you can import all the data into ES, where it can be displayed by Kibana (the specifics of the display will not be repeated here); at the same time, the error log is imported into a text file for th
information (memory, CPU, network, JVM, and so on). For this project I also went looking for a number of similar articles online, to learn which monitoring metrics are commonly used and how others do monitoring. My task was mainly to collect the information, save it into the InfluxDB used by the company's major projects, and finally display it with Grafana; afterwards the ops lead in my group showed me the monitoring dashboard, and the interface is really slick!
At that time, two blog pos
I. Installing Elasticsearch on Windows
The major version of the Elasticsearch client must match the major version of the server.
1. Install Java (omitted).
2. Download Elasticsearch from https://www.elastic.co/downloads/past-releases and pick the appropriate version; here we download the zip of elasticsearch 5.4.3.
3. Unzip it.
In ES, index fields are indexed using automatic detection: for example, IP and date auto-detection (on by default) and numeric auto-detection (off by default) let dynamic mapping index documents automatically. When a field needs a specific type, you may use a mapping to define the mappings when the index is created. In Logstash, the settings for the default index are template-based. First we need to specify a default mapping file; the contents of the file are as follows:
{