http://nkcoder.github.io/blog/20141106/elkr-log-platform-deploy-ha/
1. Architecture for highly available scenarios
The previous article, Using Elasticsearch + Logstash + Kibana + Redis to build a log management service, described the overall framework of the log service and the deployment of each component. This article focuses on deploying that framework for high availability.
A while back I installed Logstash from the RPM package. After installing, I wanted to start it as a system service, but running service logstash start failed with a "No such file or directory" error. After puzzling over it for a while, I simply started Logstash from the command line instead, and the process ran.
Drop: discard certain events without further processing, for example debug events.
Clone: copy an event; fields can be added or removed in the process.
Geoip: add geographic information to an event (for graphical display in Kibana).
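The three filters above can be sketched in a Logstash filter block. The field names and the drop condition here are illustrative assumptions, not from the original article:

```
filter {
  # discard debug events entirely (the loglevel field is an assumed example)
  if [loglevel] == "debug" {
    drop { }
  }
  # copy the event; clones are tagged and can have fields removed
  clone {
    clones       => ["cloned-event"]
    remove_field => ["temp_field"]
  }
  # enrich with geographic data for Kibana maps, based on a client IP field
  geoip {
    source => "clientip"
  }
}
```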
Outputs
Outputs are the final stage of the Logstash processing pipeline. An event can be emitted through multiple outputs during processing, but once all outputs have executed, the event's lifecycle is complete. Some common outputs:
Windows: 1. Install Logstash. 1.1 Download the zip package from the official site: https://artifacts.elastic.co/downloads/logstash/logstash-6.3.2.zip (version 6.3.2). To download the latest or another version, visit the download page: https://www.elastic.co/products/logstash
1. Concept and characteristics of Logstash. Concept: Logstash is a tool for data acquisition, processing, and transmission (output). Characteristics: centralized processing of all types of data; normalization of data across different schemas and formats; rapid extension to custom log formats; easy addition of plugins for custom data sources. 2. Logstash installation and configuration. ①. Download and install
Elasticsearch: if you plan to save data efficiently and query it easily and simply, Elasticsearch is a good choice (yes, a bit of advertising here, hehe). File: saves event data to a file. Graphite: sends event data to Graphite, a popular open-source component for storing and graphing metrics: http://graphite.wikidot.com/. Statsd: a statistics service for counters and timings; it communicates over UDP and aggregates data on the server side.
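As a sketch, the outputs described above look like this in a Logstash config. The host names, paths, and metric name are illustrative, and the elasticsearch host option follows the old Logstash 1.x syntax used elsewhere in this article:

```
output {
  elasticsearch { host => "192.168.1.140" }                     # index into Elasticsearch
  file          { path => "/tmp/logstash-%{+YYYY-MM-dd}.log" }  # save events to a file
  graphite      { host => "graphite.example.com" }              # send metrics to Graphite
  statsd        { increment => "events.processed" }             # count events via statsd over UDP
}
```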
hosts: ["kibana.aniu.co:5044"]    # modified: connect to Logstash on the ELK server
ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]    # new
The Filebeat configuration file is in YAML format; pay attention to indentation. Start Filebeat:
sudo systemctl start filebeat
sudo systemctl enable filebeat
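Putting the pieces together, a minimal filebeat.yml along these lines would match the settings above. The log path is illustrative, and the section layout assumes a Filebeat 5.x-style configuration:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log          # illustrative: which files to ship
output.logstash:
  hosts: ["kibana.aniu.co:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```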
Note: this assumes the Elasticsearch side has already been fully configured; the client only collects log files and forwards them.
The operating principle is as follows:
1. Test environment plan
Operating system: CentOS 6.5 x86_64
ELK server: 192.168.3.17
To avoid interference, turn off the firewall and SELinux:
service iptables stop
setenforce 0
All three machines need their hosts file modified:
cat /etc/hosts
192.168.3.17 elk.chinasoft.com
192.168.3.18 rsyslog.chinasoft.com
192.168.3.13 nginx.chinasoft.com
Then modify the hostname of each machine to match.
the logs together into the full-text search service Elasticsearch, and Kibana can then be combined with Elasticsearch's custom search for page presentation. 4. Service distribution: Host A, 192.168.0.100: Elasticsearch + logstash-server + Kibana + Redis. Host B, 192.168.0.101: logstash-agent. II. Start of deployment
you to collect, analyze, and store your logs for later use (e.g., search).
Kibana is also an open-source, free tool. It provides a friendly web interface for Logstash and Elasticsearch that helps you summarize, analyze, and search important log data.
The ELK workflow is as follows:
Deploy Logstash on all services whose logs need to be collected, where it runs as the Logstash agent (shipper).
Summary: When writing Logstash configuration files, if we read many files and have many match rules, the configuration can grow to hundreds or thousands of lines, which makes it hard to read and modify. In that case we can put the input, filter, and output sections into separate configuration files, or even split the inputs, filters, and outputs themselves across different files. Then, when something later needs to be added, removed, or changed, it is much easier to find.
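A minimal sketch of this split, with illustrative paths and trivial stage contents. Logstash concatenates every file in a directory passed to -f, so each stage can live in its own numbered file:

```shell
# create a conf.d-style directory with one file per pipeline stage
mkdir -p /tmp/logstash-conf.d

cat > /tmp/logstash-conf.d/01-input.conf <<'EOF'
input { stdin {} }
EOF

cat > /tmp/logstash-conf.d/10-filter.conf <<'EOF'
filter { }
EOF

cat > /tmp/logstash-conf.d/90-output.conf <<'EOF'
output { stdout {} }
EOF

# then start Logstash against the whole directory, e.g.:
#   bin/logstash -f /tmp/logstash-conf.d/
ls /tmp/logstash-conf.d
```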
A single Logstash process can read, parse, and output data by itself. In a production environment, however, running a Logstash process on every application server and sending data directly to Elasticsearch is not the first choice: first, too many client connections put extra pressure on Elasticsearch; second, network jitter can affect Logstash's delivery to Elasticsearch.
Original address: http://www.cnblogs.com/saintaxl/p/3946667.html. In short, the workflow is: the Logstash agent monitors and filters logs, sending the filtered log content to Redis (Redis here only acts as a queue and does not store data); the Logstash index(er) then collects the logs into the full-text search service Elasticsearch, on top of which customized searches can be built.
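A sketch of the agent (shipper) side of this workflow. The log file path and the Redis key are illustrative assumptions; the Redis host matches Host A from the service distribution above:

```
# shipper: watch a local log file and push events onto a Redis list
input {
  file { path => "/var/log/nginx/access.log" }
}
output {
  redis {
    host      => "192.168.0.100"   # the Redis broker (Host A above)
    data_type => "list"
    key       => "logstash"        # assumed queue name
  }
}
```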
Logstash-forwarder (formerly known as Lumberjack) is a log shipper written in Go, intended mainly for machines with limited performance (or for performance-OCD sufferers). Main function: through a configured trust relationship, logs from the monitored machine are encrypted and sent to Logstash, reducing the performance cost on the machine whose logs are collected, in effect offloading the computation.
If no error occurs, the service starts normally. Test Logstash interacting with Elasticsearch:
/app/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { host => "192.168.1.140" } }'
Type anything you like, then check:
curl 'http://192.168.1.140:9200/_search?pretty'    # output with no errors indicates the server interaction succeeded
The Redis server is the officially recommended broker choice for Logstash. The broker role means that both an input and an output plugin exist for it. Here we will first look at the input plugin.
LogStash::Inputs::Redis supports three values of data_type (actually redis_type); each data type leads to different Redis command operations:
list => BLPOP
channel => SUBSCRIBE
pattern_channel => PSUBSCRIBE
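As an illustration, the three variants map onto the redis input plugin like this. The host and key values are made up:

```
input {
  # list: the indexer pops events off a Redis list with BLPOP
  redis { host => "127.0.0.1" data_type => "list" key => "logstash" }

  # channel: SUBSCRIBE to a single pub/sub channel
  # redis { host => "127.0.0.1" data_type => "channel" key => "logstash-channel" }

  # pattern_channel: PSUBSCRIBE to every channel matching a pattern
  # redis { host => "127.0.0.1" data_type => "pattern_channel" key => "logstash-*" }
}
```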
Decompression is straightforward. Next, let's see the effect: first start the ES service by switching to the elasticsearch directory and running elasticsearch under bin.
cd /search/elasticsearch/elasticsearch-0.90.5/bin
./elasticsearch
Access the default port 9200
curl -X GET http://localhost:9200
3. Start the service
# elasticsearch-1.1.1/bin/elasticsearch
Types in Logstash
Array
Boolean
Bytes
Codec
Hash
Number
Password
Path
String
Array
An array can contain a single string value or multiple values. If you specify the same setting multiple times, the values are appended to the array. Example:
path => [ "/var/log/messages", "/var/log/*.log" ]
path => "/data/mysql/mysql.log"
Boolean
A boolean must be either true or false. Example: ssl_enable => true
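The remaining value types from the list above can be illustrated with settings of this shape. All option names here are illustrative examples, not settings of any specific plugin:

```
my_bytes => "10MiB"                     # bytes: a string with an optional size unit
codec    => "json"                      # codec: named encoder/decoder for stream data
match    => { "field" => "pattern" }    # hash: a collection of key/value pairs
port     => 33                          # number: an integer or a float
my_pass  => "password"                  # password: a string that is not logged in plain text
my_path  => "/tmp/logstash"             # path: a string representing a valid OS path
name     => "Hello world"               # string
```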