1. Without a Log Analysis System
1.1 Operations and Maintenance Pain Points
1. Operations staff constantly have to check many different logs.
2. By the time anyone looks at the logs, the fault has already occurred (a timeliness problem).
3. There are many nodes and the logs are scattered, so collecting them becomes a problem in itself.
4. Runtime logs, error logs, and other logs follow no standard directory layout, which makes collection difficult.
1.2 Environmental Pain Points
1. Developers cannot log in to the production servers to view detailed logs.
2. Each system keeps its own logs, so the log data is scattered and hard to search.
3. The volume of log data is large, queries are slow, and the data is not real-time.
1.3 Resolving the Pain Points
1. Collection (Logstash)
2. Storage (Elasticsearch, Redis, Kafka)
3. Search + statistics + display (Kibana)
4. Alerting and data analysis (Zabbix)
2. ELK Stack Introduction
For logs, the most common needs are collection, storage, search, and display, and the open source community has a project for each: Logstash (collection), Elasticsearch (storage + search), and Kibana (display). The combination of these three is called the ELK Stack, i.e. the Elasticsearch, Logstash, and Kibana technology stack. A common architecture looks like this:
[Figure: ELK architecture diagram (http://cdn.xuliangwei.com/elk-01.png)]
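To make the collect -> store -> display flow concrete, here is a minimal Logstash pipeline sketch; it is not part of the original article, and the log path, index name, and Elasticsearch address are placeholder assumptions:

input {
  file {
    path => "/var/log/messages"            # example log file to collect
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]            # placeholder Elasticsearch address
    index => "system-log-%{+YYYY.MM.dd}"   # daily index that Kibana can search and display
  }
}

Kibana then points at the same Elasticsearch instance and visualizes these indices.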
3. ELK Stack Environment
1. node1 and node2 form the Elasticsearch cluster (no Logstash deployed).
2. node3 is the collection target, with Nginx, Java, TCP, syslog, and other logs.
3. node4 runs Logstash writing logs into Redis, which reduces the programs' direct dependence on Elasticsearch, decouples them, and makes the architecture easier to extend (a sketch follows the host table below).
4. Hosts that collect logs need Logstash deployed.
Host name | IP             | JVM memory | System memory | Services
Node1.com | 192.168.90.201 | 32G        | 64G           | Elasticsearch, Kibana
Node2.com | 192.168.90.202 | 32G        | 64G           | Elasticsearch, Kibana
Node3.com | 192.168.90.203 | 32G        | 64G           | Logstash, service and application logs
Node4.com | 192.168.90.204 | 32G        | 64G           | Logstash, Redis (message queue)
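As referenced in item 3 above, this is a rough sketch of how a collection host could ship events into the Redis queue on node4 instead of writing to Elasticsearch directly; the log path and Redis key are assumptions for illustration, not values from the original:

input {
  file {
    path => "/var/log/nginx/access.log"   # example log collected on a shipper node
  }
}
output {
  redis {
    host      => "192.168.90.204"         # node4 runs Redis per the table above
    port      => 6379
    data_type => "list"                   # push each event onto a Redis list
    key       => "logstash"               # list name an indexer Logstash would read from
  }
}

A second Logstash on node4 would then use a matching redis input and an elasticsearch output, so the shippers never talk to Elasticsearch directly.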
4. ELK Stack Deployment
Elasticsearch requires a Java environment, so install Java directly with yum.
1. Installing Java
[root@linux-node1 ~]# yum install java
[root@linux-node1 ~]# java -version
openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
2. Download and install the GPG keys
[root@linux-node1 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@linux-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
3. Add the Elasticsearch, Logstash, and Kibana yum repositories
# Add the Elasticsearch yum repository
[root@linux-node1 ~]# cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
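A quick sanity check (not in the original) to confirm yum can see the new repositories:

[root@linux-node1 ~]# yum repolist | grep -iE 'elasticsearch|logstash'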
4. Install Elasticsearch, Logstash, and Kibana
[root@linux-node1 ~]# yum install -y elasticsearch
[root@linux-node1 ~]# yum install -y logstash
[root@linux-node1 ~]# yum install -y kibana
5. The yum installation requires configuring memlock limits
[root@linux-node1 ~]# vim /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
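One way to spot-check that the memlock limit applies to the elasticsearch user (an added verification step, not from the original; a systemd-managed service may need its own limit settings in addition to limits.conf):

[root@linux-node1 ~]# su -s /bin/bash elasticsearch -c 'ulimit -l'

It should print "unlimited" once the limits.conf entries above are in place.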
4.1 Configuring Elasticsearch
[root@linux-node1 ~]# mkdir -p /data/es-data                                # create the ES data directory
[root@linux-node1 ~]# chown -R elasticsearch.elasticsearch /data/es-data/   # grant ownership to the elasticsearch user
[root@linux-node1 /]# grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-cluter                 # cluster name
node.name: linux-node1                   # node name
path.data: /data/es-data                 # data storage path
path.logs: /var/log/elasticsearch/       # log storage path
bootstrap.mlockall: true                 # do not use the swap partition; lock memory
network.host: 192.168.90.201             # IP allowed to access
http.port: 9200                          # Elasticsearch access port
4.2 Running Elasticsearch
1. Start Elasticsearch
[root@linux-node1 ~]# systemctl start elasticsearch
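Optionally (not in the original), enable the service at boot and confirm it is listening on the HTTP (9200) and transport (9300) ports:

[root@linux-node1 ~]# systemctl enable elasticsearch
[root@linux-node1 ~]# systemctl status elasticsearch
[root@linux-node1 ~]# ss -lntp | grep -E '9200|9300'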
2. Access Elasticsearch via its HTTP port, e.g. http://192.168.90.201:9200; the response looks like this:
{"Name": "Linux-node1", "cluster_name": "Elk-cluter", "version": {"number": "2.3.5", "Build_hash": "90f439ff60a3c0f497f91663701e64ccd01edbb4", "Build_timestamp": "2016-07-27t10:36:52z", "Build_snapshot": false, "lucene_version": "5.5.0"}, "tagline": "You Know, for Search"}
4.3 Elasticsearch Plugins
1. Installing the Elasticsearch Cluster Management plug-in
[root@linux-node1 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
To access the Head cluster plugin: http://ES_IP:9200/_plugin/head/
[Figure: es_head plugin (http://cdn.xuliangwei.com/es-02.png)]
2. Installing the Elasticsearch Monitor plugin
[root@linux-node1 plugins]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
To access the kopf monitoring plugin: http://ES_IP:9200/_plugin/kopf
[Figure: kopf monitoring plugin (http://cdn.xuliangwei.com/es-03.png)]
4.4 Elasticsearch Cluster
1. Configure linux-node2 as an identical node. Nodes discover each other in the cluster through multicast by default; if multicast discovery does not work, change it to unicast.
[root@linux-node2 ~]# grep "^[a-z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-cluter
node.name: linux-node2
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.90.201", "192.168.90.202"]   # unicast (configuring one node is enough; production can use multicast)
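After starting Elasticsearch on both nodes, a quick way (added here, not in the original) to confirm they formed a single cluster:

[root@linux-node2 ~]# curl -s http://192.168.90.201:9200/_cat/nodes?v

Both linux-node1 and linux-node2 should be listed; the head and kopf plugins from section 4.3 show the same information graphically.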
ELK Stack Chapter (1): Elasticsearch