1. Overview
This article is a follow-up to "Elasticsearch in Action: Log Monitoring Platform", which introduced the architecture of a log monitoring platform. Today I will share how to build and deploy that platform, as an introductory walkthrough. Here is today's outline:
- Building and deploying the Elastic stack
- Running the cluster
- Preview
Let's get started.
2. Building and deploying the Elastic stack
Building the Elastic stack is simple. Let's start with the deployment-related components; first, we prepare the necessary environment.
2.1 Basic Software
You can download the corresponding installation packages from the Elastic official website at the following address:
[]
In addition, the ES cluster depends on the JDK, so a working JDK is a basic requirement. The download address is as follows:
[]
2.2 Logstash Deployment
Here we deploy the Logstash service on the central node, with the following core configuration file:
```
input {
  redis {
    host => "10.211.55.18"
    port => 6379
    type => "redis-input"
    data_type => "list"
    key => "key_count"
  }
}
filter {
  grok {
    match => ["message", "%{IPORHOST:client} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)\" %{NUMBER:response} %{NUMBER:bytes} \"(%{QS:referrer}|-)\" \"(%{QS:agent}|-)\""]
  }
  kv {
    source => "request"
    field_split => "&?"
    value_split => "="
  }
  urldecode {
    all_fields => true
  }
}
output {
  elasticsearch {
    cluster => "elasticsearch"
    codec => "json"
    protocol => "http"
  }
}
```
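To make the grok filter above concrete, here is a rough Python approximation of what it extracts from one access-log line. The sample line and the simplified regex are my own illustration, not from the original post; the real grok patterns (%{IPORHOST}, %{HTTPDATE}, etc.) are stricter than this sketch.

```python
import re

# Simplified stand-in for the grok pattern in the filter block above:
# captures client, ident, auth, timestamp, verb, request, http_version,
# response and bytes from a combined-style access log line.
LOG_RE = re.compile(
    r'(?P<client>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<http_version>[\d.]+)" '
    r'(?P<response>\d+) (?P<bytes>\d+)'
)

# Hypothetical sample line for illustration only.
sample = ('10.211.55.1 - - [01/Jan/2016:12:00:00 +0800] '
          '"GET /portal?uid=42&src=app HTTP/1.1" 200 512')
event = LOG_RE.match(sample).groupdict()

# The kv filter then splits the request's query string, mimicking
# field_split => "&?" and value_split => "=".
path, _, query = event['request'].partition('?')
kv = dict(pair.split('=', 1) for pair in query.split('&') if '=' in pair)

print(event['client'], event['verb'], event['response'], kv)
```

Running this shows how a raw access-log line becomes a structured event with per-field access, which is exactly what makes the later Kibana filtering possible.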
The shipper configuration, deployed on the log-producing (agent) nodes, is as follows:
```
input {
  file {
    type => "type_count"
    path => ["/home/hadoop/dir/portal/t_customer_access.log"]
    exclude => ["*.gz", "access.log"]
  }
}
output {
  stdout {}
  redis {
    host => "10.211.55.18"
    port => 6379
    data_type => "list"
    key => "key_count"
  }
}
```
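The shipper and the central node hand events off through a Redis list: the shipper pushes serialized events onto the `key_count` list, and the central indexer pops them off. Here is a minimal Python sketch of that handoff, with an in-memory deque standing in for the Redis server (the real setup uses the redis input/output plugins shown in the configs above):

```python
import json
from collections import deque

queue = deque()  # stands in for the Redis list "key_count"

def ship(line):
    """Shipper side: wrap a raw log line as an event and push it."""
    event = {"type": "type_count", "message": line}
    queue.append(json.dumps(event))

def index_one():
    """Central side: pop one event for the filter/output stage."""
    return json.loads(queue.popleft())

ship('10.211.55.1 - - [01/Jan/2016:12:00:00 +0800] "GET / HTTP/1.1" 200 5')
event = index_one()
print(event["type"])
```

The Redis list acts as a buffer, so the shippers and the central indexer can run at different speeds without losing events.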
2.3 Elasticsearch Deployment
Next, we deploy the ES cluster, whose configuration is simpler:
" Node1 "
Here I only configured the node name; the cluster name uses its default value. If you need additional settings, configure them yourself. Note that when you use the scp command to distribute this file to the other nodes, you must modify the node.name property so that each node's node.name value is different.
In addition, you can install plugins for the ES cluster with the following commands:
```
sudo elasticsearch/bin/plugin -install mobz/elasticsearch-head
sudo elasticsearch/bin/plugin -install lukas-vlcek/bigdesk
```
The corresponding Web UI looks like this:
Other ES cluster plugins can be installed selectively according to actual business needs; I won't elaborate further here.
2.4 Kibana Deployment
Next we need a tool to visualize the data in the ES cluster; here we choose Kibana. Its installation is relatively simple: just point it at the cluster in its core configuration file, as follows:

```
elasticsearch_url: "http://10.211.55.18:9200"
```

This visualizes the data in the ES cluster via node1.
3. Running the cluster
Next, we start the whole system. The startup steps are as follows:
- Start Redis (on the central node)

```
redis-server &
```
- Start the agent nodes (start the shipper on each agent node)
```
bin/logstash agent --verbose --config conf/shipper.conf --log logs/stdout.log &
```
- Start the central node

```
bin/logstash agent --verbose --config conf/central.conf --log logs/stdout.log &
```
- Start the ES cluster (start on each ES node)
```
bin/elasticsearch start
```
- Start Kibana (on the Kibana node)

```
bin/kibana
```
4. Preview
At this point, we can preview the collected logs. I have extracted only a few of the log entries, as follows:
We can also use the filter feature to select the data we want to observe; here we filter on the IP and AppName fields, as shown below:
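Under the hood, this kind of Kibana filter boils down to an Elasticsearch query that ANDs exact-match term filters on those fields. A sketch of the query body as a Python dict, using the `filtered` query of the ES 1.x era this post targets (the field names `ip` and `appname` mirror the filter above; the exact names depend on your index mapping):

```python
import json

def term_filter_query(filters):
    """Build an ES 1.x query body that ANDs exact-match term filters."""
    return {
        "query": {
            "filtered": {
                "query": {"match_all": {}},
                "filter": {
                    "bool": {
                        "must": [{"term": {field: value}}
                                 for field, value in sorted(filters.items())]
                    }
                }
            }
        }
    }

body = term_filter_query({"ip": "10.211.55.1", "appname": "portal"})
print(json.dumps(body))
```

You could POST such a body to the cluster's `_search` endpoint yourself to reproduce what the Kibana filter panel shows.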
5. Summary
It is important to note that if you start the Kibana service for the first time and no log data has been collected yet, the Create button under the Settings module is grayed out when you try to create an index pattern, so the index cannot be created. You need to make sure logs have already been collected and stored in the ES cluster. For example, since I have already collected and stored logs in the ES cluster, the button is shown in green and can be clicked to create the index, as shown below:
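Before clicking Create, you can confirm that documents have actually reached the cluster with the ES count API (e.g. `curl http://10.211.55.18:9200/logstash-*/_count`). A small sketch of checking its JSON response; the two response bodies below are illustrative examples, not captured output:

```python
import json

def has_documents(count_response_body):
    """Given the JSON body returned by GET /<index>/_count, decide
    whether there is any data for Kibana to build an index pattern on."""
    return json.loads(count_response_body).get("count", 0) > 0

# Illustrative _count responses for an empty and a populated index.
empty = '{"count": 0, "_shards": {"total": 5, "successful": 5, "failed": 0}}'
full = '{"count": 1024, "_shards": {"total": 5, "successful": 5, "failed": 0}}'
print(has_documents(empty), has_documents(full))
```

If the count is zero, fix the shipper/central pipeline first; the Kibana button will stay gray until documents arrive.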
6. Concluding remarks
That's all I have to share in this blog post. If you run into any problems while studying, you can join the discussion group or send me an e-mail, and I will do my best to answer your questions. Let's encourage each other!