Elasticsearch in Practice: Getting Started

Source: Internet
Author: User
Tags: scp, kibana, logstash

1. Overview

This article is a follow-up to "Elasticsearch in Practice: Log Monitoring Platform", which introduced the architecture of a log monitoring platform. Today I will walk through how to build and deploy that platform, as an introduction for everyone. Here is today's agenda:

    • Building and deploying the Elastic suite
    • Running the cluster
    • Preview

Let's start today's content sharing.

2. Building and Deploying the Elastic Suite

Building the Elastic suite is straightforward. Let's start with the deployment-related components; first, we prepare the necessary base environment.

2.1 Basic Software

You can download the corresponding installation packages from the official Elastic website at https://www.elastic.co/downloads.
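
As a minimal sketch (the version numbers below are placeholders, not the versions from the original article), the downloaded tar.gz packages can be unpacked like this:

    • Unpack the packages
tar -zxvf elasticsearch-<version>.tar.gz
tar -zxvf logstash-<version>.tar.gz
tar -zxvf kibana-<version>.tar.gz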

In addition, the ES cluster depends on the JDK, so a working JDK installation is part of the base environment; you can download it from Oracle's website.
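
Once the JDK is installed, a quick sanity check (assuming java is already on your PATH) confirms the version the cluster will run on:

    • Verify the JDK
java -version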

2.2 Logstash Deployment

Here we deploy the Logstash service on the central node, with the following core configuration file:

    • Central.conf
input {
  redis {
    host => "10.211.55.18"
    port => 6379
    type => "redis-input"
    data_type => "list"
    key => "key_count"
  }
}
filter {
  grok {
    match => ["message", "%{IPORHOST:client} (%{USER:ident}|-) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version})?|-)\" %{NUMBER:response} %{NUMBER:bytes} \"(%{QS:referrer}|-)\" \"(%{QS:agent}|-)\""]
  }
  kv {
    source => "request"
    field_split => "&?"
    value_split => "="
  }
  urldecode {
    all_fields => true
  }
}
output {
  elasticsearch {
    cluster => "elasticsearch"
    codec => "json"
    protocol => "http"
  }
}
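
Before starting the service, it is worth validating the configuration syntax. A hedged sketch, assuming a Logstash 1.x-style agent command that supports the --configtest flag:

    • Validate the configuration
bin/logstash agent --configtest --config conf/central.conf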

Its agent nodes, deployed on the log-producing machines, are configured as follows:

    • Shipper.conf
input {
  file {
    type => "type_count"
    path => ["/home/hadoop/dir/portal/t_customer_access.log"]
    exclude => ["*.gz", "access.log"]
  }
}
output {
  stdout {}
  redis {
    host => "10.211.55.18"
    port => 6379
    data_type => "list"
    key => "key_count"
  }
}
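
Once a shipper is running, you can verify that events are actually queuing up in Redis. A quick sketch using redis-cli, where key_count is the list key from the configuration above:

    • Inspect the Redis queue
redis-cli -h 10.211.55.18 -p 6379 LLEN key_count
redis-cli -h 10.211.55.18 -p 6379 LRANGE key_count 0 0
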
2.3 Elasticsearch Deployment

Next, we deploy the ES cluster, which is simpler to configure:

    • Elasticsearch.yml
" Node1 "

Here I have only configured the node name; the cluster name uses the default. If you need additional settings, you can configure them yourself. Note that when you use the scp command to distribute this file to the other nodes, you must modify the node.name property so that each node has a distinct value.
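
As a minimal sketch of that distribution step (the host name and path here are placeholders), copy the configuration to each node and then edit node.name on that node:

    • Distribute the configuration
scp elasticsearch/config/elasticsearch.yml hadoop@node2:~/elasticsearch/config/
# then, on node2, change node.name to "node2"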

In addition, you can install plug-ins for the ES cluster with the following commands:

    • Head Plugin
sudo elasticsearch/bin/plugin -install mobz/elasticsearch-head
    • Bigdesk Plug-in
sudo elasticsearch/bin/plugin -install lukas-vlcek/bigdesk
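
After installation, site plug-ins of this era are served by ES itself under the _plugin path, so the two UIs should be reachable at addresses like these:

    • Plug-in URLs
http://10.211.55.18:9200/_plugin/head/
http://10.211.55.18:9200/_plugin/bigdesk/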

The corresponding Web UI interfaces look like this:

    • Head Plug-in interface

    • Bigdesk's interface

Other ES cluster plug-ins can be installed selectively according to your actual business needs; I will not repeat them here.

2.4 Kibana Deployment

Here we need a tool to visualize the data in the ES cluster; we choose Kibana. Its installation is relatively simple: we only need to configure the corresponding core file, as follows:

    • Kibana.yml

" http://10.211.55.18:9200 "

This lets Kibana visualize the data in the ES cluster through the node1 node.
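
Before starting Kibana, it is worth confirming that this address actually answers; an ES node replies to a plain HTTP GET with basic cluster information as JSON:

    • Check the ES endpoint
curl http://10.211.55.18:9200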

3. Running the cluster

Next, we start the whole system; the startup steps are as follows (a quick health check is sketched after the list):

    • Start Redis
redis-server &
    • Start the agent node (start shipper on its agent node, respectively)
bin/logstash agent --verbose --config conf/shipper.conf --log logs/stdout.log &
    • Launch Center Service
bin/logstash agent --verbose --config conf/central.conf --log logs/stdout.log &
    • Start ES cluster (Start on ES node, respectively)
bin/elasticsearch start
    • Start the Kibana service
bin/kibana
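
Once everything is up, a quick way to confirm the cluster is healthy is the cluster health API:

    • Check cluster health
curl -XGET 'http://10.211.55.18:9200/_cluster/health?pretty'
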
4. Preview

At this point we can preview the collected logs. I have extracted only a few log entries, as follows:

We can also use the filter feature to select the data we want to observe; here we filter on the IP and AppName properties, as shown below:

5. Summary

It is important to note that if we start the Kibana service for the first time before any logs have been collected, the Create button under the Settings module is grayed out when we try to create an index, so the index cannot be created. You need to make sure that logs have already been collected and stored in the ES cluster. For example, since I had already collected and stored logs in the ES cluster, the button is shown in green and can be clicked to create the index, as shown below:
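
A simple way to confirm that logs have actually reached the cluster before creating the index in Kibana is to list the indices via the _cat API (available in ES 1.0 and later):

    • List indices
curl 'http://10.211.55.18:9200/_cat/indices?v'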

6. Concluding remarks

That is all I wanted to share in this blog post. If you run into any problems while studying, you can join the discussion group or send me an e-mail, and I will do my best to answer your questions. Let us encourage each other!

