Today is November 6, 2015. When I got up this morning it was, to my surprise, snowing in Beijing; snow has been rare in recent years. It reminded me of the winters of my childhood, and those memories are still vivid.
To get to the point: the previous article introduced the basics of Logstash along with an introductory demo; this article covers several of the more commonly used commands and cases.
Through the previous introduction, we now have a general picture of the entire
Official website: https://www.elastic.co
Software versions: Logstash 2.2.0 (all plugins), Elasticsearch 2.2.0, Kibana 4.4.0
Note: the environment is CentOS 6.5 64-bit, tested on a single machine; the configuration is straightforward.
1. Logstash installation and configuration
Unzip to /usr/local/logstash-2.2.0/. Logstash confi
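As a quick smoke test after unpacking, a minimal pipeline can be run from the command line (a sketch; the install path follows the layout above and the pipeline itself is just an illustration):
# Run a minimal stdin -> stdout pipeline to verify the installation
cd /usr/local/logstash-2.2.0/
bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
# Type a line and press Enter; Logstash should echo it back as a structured event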
Some logs, such as Apache and Nginx access logs, are not in JSON format; they can be parsed with the Grok plugin, which uses regular expressions to match and split each line. The predefined patterns are defined in /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns; the Apache patterns are in the grok-patterns file. See the official documentation: https://www.elastic.co/guide/en/logstash/current/plugins-filte
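For illustration, a minimal Grok filter for the Apache combined log format might look like the following (a sketch using the predefined COMBINEDAPACHELOG pattern; the resulting field names are whatever that pattern emits):
filter {
  grok {
    # Match each line of the Apache access log against the predefined pattern
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}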
Elk Cloner was the first computer virus known to have spread widely. Richie Skrenta, a 15-year-old high school student, wrote the virus for the Apple II operating system, and it was stored on a floppy disk. When a computer booted from a floppy disk infected with Elk Cloner, the virus started running and then copied itself to any uninfected floppy disk that was accessed. Because computers at that time
Node.js
npm install to set up the environment
Logstash for log analysis and graphical display
A small search engine with graphical display
Tools developed in Ruby are packaged into a jar and run in the Java environment.
Logstash analysis
Reads logs in real time, tailing them from the end of the file
Elasticsearch storage
Kibana web page
java -jar logstash-1.3.2-fla
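For reference, the old flat-jar distribution of Logstash 1.3.x was typically started along these lines (a sketch; the full jar name and the config file name complete the truncated command above and are assumptions):
# Start the Logstash agent from the flat jar with a config file (assumed names)
java -jar logstash-1.3.2-flatjar.jar agent -f logstash.conf
# Optionally also start the built-in web UI:
# java -jar logstash-1.3.2-flatjar.jar agent -f logstash.conf -- web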
Comparing Flume with Logstash, my personal experience is as follows:
Logstash puts more emphasis on preprocessing fields, while Flume emphasizes data transport;
Logstash has dozens of plug-ins and flexible configuration; Flume instead emphasizes custom development by the user (it also has ten or twenty kinds of sources and sinks, but the channel is relatively s
When we use Logstash to collect logs, we usually rely on the dynamic index template that ships with Logstash. It lets us push log data into the Elasticsearch cluster without any customization, but at query time we find that the default index template often leaves fields analyzed (tokenized) that should not be, so that important aggregation statistics become inaccurate. For example, if there are
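As an illustration of the usual fix, a custom index template can declare such fields as not_analyzed before the data is pushed (a sketch for Elasticsearch 2.x; the template name, index pattern, and field name are assumptions):
curl -XPUT 'localhost:9200/_template/logstash_custom' -d '{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "request_url": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'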
Starting with version 5.0, Logstash provides an API that exposes metrics and status monitoring for its own process.
Official documentation: https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html#monitoring
Node Info API: https://www.elastic.co/guide/en/logstash/current/node-info-api.html
Pipeline: gets pipeline-specific information and settings.
OS: gets node-level info
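A quick way to try these endpoints is with curl against the monitoring API, which listens on port 9600 by default (a sketch; host and port are the defaults and may differ in your setup):
# Node info: pipeline settings and OS-level information
curl -XGET 'localhost:9600/_node/pipeline?pretty'
curl -XGET 'localhost:9600/_node/os?pretty'
# Node stats: runtime metrics for the Logstash process
curl -XGET 'localhost:9600/_node/stats?pretty'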
1. List Logstash plugins
bin/logstash-plugin list
******
logstash-output-kafka
logstash-output-nagios
logstash-output-null
logstash-output-pagerduty
logstash-output-pipe
logstash-output-rabbitmq
logstash-output-redis
******
2. Install the MongoDB output plugin
bin/logstash-plugin install logstash-output-mongodb
3. Configure the out
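A configuration for that output stage might then look something like this (a sketch; the MongoDB URI, database, and collection names are assumptions):
output {
  mongodb {
    # Write each event into a MongoDB collection (connection details assumed)
    uri => "mongodb://localhost:27017"
    database => "logdb"
    collection => "apache_logs"
  }
}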
-flume-1.5.2-bin/tracklog-kafka/checkpoint
agent.channels.m1.dataDirs=/opt/modules/apache-flume-1.5.2-bin/tracklog-kafka/datadir
agent.channels.m1.transactionCapacity=1000000
agent.channels.m1.capacity=1000000
agent.channels.m1.checkpointInterval=30000
Second, getting the data into Kafka. The topic used for collection above needs to be created in Kafka in advance; the other steps for getting data into Kafka are already configured in the collect stage. For reference, the statement to create a topic:
%{kafka_home}/bin/kafka-topics.sh-
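A complete create-topic invocation, for a Kafka installation that still uses ZooKeeper, might look like this (a sketch; the ZooKeeper address, partition count, replication factor, and topic name are assumptions):
${KAFKA_HOME}/bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 \
  --partitions 3 \
  --topic tracklog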
Data acquisition with Kafka and Logstash
Getting Kafka to work with Logstash still requires attention to many details; the most important is to understand how Kafka works.
Logstash working principle: since Kafka uses a decoupled design, it is not the original publish/subscribe model; the producer is responsible for generating the
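To make this concrete, a Logstash 2.x consumer reading from Kafka might be configured roughly like this (a sketch; the option names follow the older kafka input plugin that connected through ZooKeeper, and the addresses, topic, and group are assumptions):
input {
  kafka {
    # Connect to the Kafka cluster via ZooKeeper and consume the topic
    zk_connect => "localhost:2181"
    topic_id => "tracklog"
    group_id => "logstash"
  }
}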
The Nginx access log we collect through Logstash already contains the client IP (remote_addr), but the IP alone is not enough: for Kibana to display the geographic source of requests, a GeoIP database is needed. GeoIP is the most common free IP geolocation library, and a paid version is also available. The GeoIP library can return the corresponding geographical information for an IP addres
Logstash cannot read redis data
A problem occurred today when building a logstash + redis + elasticsearch setup. After nearly an hour of troubleshooting, the problem was finally solved; I am recording it here.
The environment is as follows: a client sends data to Redis on the server, and Logstash on the server reads the data from Redis and stores it in Elasticsearch.
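For context, the server-side Logstash in such a setup is typically configured along these lines (a sketch; the Redis key, the list data type, and the Elasticsearch address are assumptions):
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"      # consume events pushed onto a Redis list
    key => "logstash"        # the list key the client writes to
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}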
The initial problem is that on the server side, the log sent from
Configure GeoIP in Logstash to parse geographic information.
The GeoIP database configured in Logstash parses IP addresses. Here an open source IP data source is used to resolve the client's IP address; the official website is MaxMind.
Download the GeoLite City database
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar -zxvf GeoLite2-City.tar.gz
cp GeoLite2
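After copying the database into place, the geoip filter can reference it and enrich events with location fields (a sketch; the database path and the source field name "clientip" are assumptions, and pointing at a GeoLite2 .mmdb file requires a Logstash version whose geoip filter supports that format):
filter {
  geoip {
    source => "clientip"                             # field holding the client IP
    database => "/etc/logstash/GeoLite2-City.mmdb"   # path where the database was copied (assumed)
  }
}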
    - \elasticsearch\logs\*
    #exclude_lines: ["^dbg"]
    #include_lines: ["^err", "^warn"]
Multiple paths can be configured here, and lines can be filtered with regular expressions during log extraction.
3. Output log path: Filebeat can send its output to multiple destinations, such as Elasticsearch and Logstash.
Elasticsearch:
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
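When shipping to Logstash instead of Elasticsearch, the corresponding output section is equally short (a sketch; the Logstash host and port 5044, the conventional beats port, are assumptions):
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash host running the beats input plugin
  hosts: ["localhost:5044"]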
Hello. A while ago I installed Logstash from an rpm package. After installation I wanted to start Logstash the same way Apache is started, so I ran service logstash start, but it complained "no such file or directory".
Frustrated, for a while I simply started it from the command line; then yesterday, installing on CentOS 7, I found I could use sy
Fields are indexed in ES using automatic detection, such as IP and date auto-detection (on by default) and numeric auto-detection (off by default); dynamic mapping indexes documents automatically. When specific field types need to be enforced, a mapping can be defined when the index is created.
The settings for Logstash's default index are template based, with Logstash acting in the indexer role. First we need t
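For reference, the elasticsearch output can be pointed at a custom template instead of the built-in one (a sketch; the template file path and template name are assumptions):
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    manage_template => true
    template => "/etc/logstash/templates/logstash_custom.json"  # assumed path
    template_name => "logstash_custom"
  }
}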
Background: the business currently has about 300 million rows of data in a database. Querying the database directly means waiting more than 15 minutes; users who frequently want to view the data can only run SQL directly against the database and, even after a few cups of tea, the results still have not come back. The users saw the ES cluster used in our project and want to synchronize the data from the database into the ES cluster. Software version: logstash-
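One common way to do such a sync is the logstash-input-jdbc plugin; a configuration might look roughly like this (a sketch; the MySQL connection details, driver jar path, table name, index name, and schedule are all assumptions):
input {
  jdbc {
    jdbc_driver_library => "/opt/drivers/mysql-connector-java-5.1.38.jar"  # assumed driver path
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/business"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT * FROM orders"      # assumed table
    schedule => "* * * * *"                  # run every minute
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "orders"
  }
}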