Centos7 installation and configuration ELK


1. Server installation

# Installing elasticsearch

yum install java-1.8.0-openjdk ruby

yum install elasticsearch-2.1.0.rpm

systemctl start elasticsearch

rpm -qc elasticsearch

curl -X GET http://localhost:9200/
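If the node is up, the `curl` above returns a small JSON document. The snippet below is a sketch of such a response (the values are illustrative examples, not captured from a real node) and shows one way to pull the cluster name out of it as a sanity check:

```shell
# Illustrative sample of the JSON a 2.x node returns for GET /
# (example values, not captured from a real node)
response='{
  "name" : "Franz Kafka",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "2.1.0" },
  "tagline" : "You Know, for Search"
}'

# Pull the cluster name out of the response as a quick sanity check
cluster=$(printf '%s' "$response" | grep '"cluster_name"' | sed 's/.*: *"\([^"]*\)".*/\1/')
echo "cluster_name=$cluster"
```

Against a live node you would pipe `curl -s http://localhost:9200/` into the same filter instead of the sample string.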

# Install redis

yum install redis

mkdir -p /opt/redis

cd /opt/redis

mkdir db log etc

redis-server /etc/redis.conf &   # start redis with the default configuration file
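The db/log/etc directories created above suggest a custom config is intended rather than the default /etc/redis.conf. A minimal sketch of such a file, written to /tmp here so it can be tried safely (all values are assumed examples, not from the article):

```shell
# Sketch: a minimal redis.conf matching the /opt/redis layout created
# above; written under /tmp here, and every value is an assumed example
mkdir -p /tmp/opt-redis/db /tmp/opt-redis/log /tmp/opt-redis/etc

cat > /tmp/opt-redis/etc/redis.conf <<'EOF'
daemonize yes
port 6379
dir /opt/redis/db
logfile /opt/redis/log/redis.log
EOF

# The server would then be started with:
#   redis-server /opt/redis/etc/redis.conf
grep '^dir' /tmp/opt-redis/etc/redis.conf
```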

# Install kibana

tar -zxvf kibana-5.0.0-snapshot-linux-x64.tar.gz

mv kibana-5.0.0-snapshot-linux-x64 kibana

cp -R kibana /opt/

vi /etc/systemd/system/kibana.service

[Service]

ExecStart=/opt/kibana/bin/kibana

[Install]

WantedBy=multi-user.target
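The unit file above is the bare minimum. A slightly fuller version might look like the following; the [Unit] section and the Restart= line are conventional additions, not part of the original article:

```shell
# A fuller kibana.service than the fragment above; the [Unit] section
# and Restart= line are conventional additions, not from the article
cat > /tmp/kibana.service <<'EOF'
[Unit]
Description=Kibana
After=network.target

[Service]
ExecStart=/opt/kibana/bin/kibana
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

grep 'WantedBy' /tmp/kibana.service
```

On a real system the file would go to /etc/systemd/system/kibana.service, followed by `systemctl daemon-reload`.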

systemctl start kibana

http://IP:5601

# Install logstash

yum install logstash-2.1.0-1.noarch.rpm

cd /etc/pki/tls

openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
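To try the certificate generation without touching /etc/pki, the same command can be run against /tmp, using `-subj` in place of `-config` (the CN value here is an assumption for illustration):

```shell
# Sketch: generate a self-signed pair like the article's command, but
# into /tmp and with -subj instead of -config (the CN is an assumption)
mkdir -p /tmp/tls/certs /tmp/tls/private
openssl req -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -subj '/CN=logstash-forwarder' \
  -keyout /tmp/tls/private/logstash-forwarder.key \
  -out /tmp/tls/certs/logstash-forwarder.crt 2>/dev/null

# Confirm the certificate was written and carries the expected subject
openssl x509 -in /tmp/tls/certs/logstash-forwarder.crt -noout -subject
```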

vi /etc/logstash/conf.d/logstash-hello.conf

input {

  lumberjack {

    # The port to listen on

    port => 5000

    # The paths to your ssl cert and key

    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"

    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"

    type => "this forwarder's files have no type!"

  }

}

output {

  elasticsearch { hosts => ["localhost"] }

  stdout { codec => rubydebug }

}

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-hello.conf
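Before wiring up the lumberjack input, the pipeline can be smoke-tested with a stdin input. The config below is a throwaway sketch (the /tmp filename is an assumption, not part of the original setup):

```shell
# Sketch: a throwaway config (assumed filename) to smoke-test the
# logstash pipeline from stdin before using the lumberjack input
cat > /tmp/logstash-stdin.conf <<'EOF'
input  { stdin { } }
output { stdout { codec => rubydebug } }
EOF

# It would be exercised with:
#   echo hello | /opt/logstash/bin/logstash -f /tmp/logstash-stdin.conf
grep 'stdin' /tmp/logstash-stdin.conf
```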

# Install the LNMP environment

yum install nginx php

vi /etc/nginx/conf.d/logstash.conf

server {

    listen 80;

    server_name IP;

    location / {

        proxy_pass http://localhost:5601;

        proxy_redirect off;

        proxy_set_header Host $host;

        proxy_set_header X-Real-IP $remote_addr;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        client_max_body_size 10m;

        client_body_buffer_size 128k;

        proxy_connect_timeout 90;

        proxy_send_timeout 90;

        proxy_read_timeout 90;

        proxy_buffer_size 4k;

        proxy_buffers 4 32k;

        proxy_busy_buffers_size 64k;

        proxy_temp_file_write_size 64k;

    }

}
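The `$proxy_add_x_forwarded_for` variable used above appends the client address to any X-Forwarded-For header already on the request, so Kibana's access logs can see the original client behind the proxy. Its behavior can be sketched in shell (both addresses are made-up examples):

```shell
# Sketch of what $proxy_add_x_forwarded_for evaluates to: the client
# address ($remote_addr) appended to any X-Forwarded-For header already
# on the request; both addresses below are made-up examples
existing="203.0.113.7"
remote_addr="192.168.0.50"

if [ -n "$existing" ]; then
  xff="$existing, $remote_addr"
else
  xff="$remote_addr"
fi
echo "X-Forwarded-For: $xff"
```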

systemctl start nginx

2. Client-side installation

3. Elasticsearch configuration file details

logging.yml under /etc/elasticsearch keeps its default configuration; elasticsearch.yml is configured as follows:

cluster.name: elasticsearch

Sets the Elasticsearch cluster name. Elasticsearch automatically discovers other instances on the same network segment; if several clusters share that segment, this property tells them apart.

node.name: "Franz Kafka"

Sets the node name. If unset, it is picked at random from a list of names shipped inside the es jar, to which the authors added many interesting entries.

node.master: true

Specifies whether the node is eligible to be elected as master. The default is true. By default the first machine started in the cluster becomes the master; if the master fails, a new one is elected.

node.data: true

Specifies whether the node stores index data. The default is true.

index.number_of_shards: 5

Sets the default number of shards per index. The default is 5.

index.number_of_replicas: 1

Sets the default number of replicas per index. The default is 1.
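Together these two settings determine how many shard copies each index carries: shards times one primary plus the replicas. With the defaults above that is 5 × (1 + 1) = 10 shard copies per index, as the small computation below shows:

```shell
# Shard copies per index = number_of_shards * (1 + number_of_replicas);
# with the defaults above, 5 * (1 + 1) = 10 shard copies per index
shards=5
replicas=1
echo "total shard copies: $(( shards * (1 + replicas) ))"
```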

path.conf: /path/to/conf

Sets the storage path of the configuration files; by default, the config folder under the es root directory.

path.data: /path/to/data

Sets the storage path of the index data; by default, the data folder under the es root directory. Multiple paths can be given, separated by commas. For example:

path.data: /path/to/data1,/path/to/data2

path.work: /path/to/work

Sets the storage path of temporary files; by default, the work folder under the es root directory.

path.logs: /path/to/logs

Sets the log file storage path; by default, the logs folder under the es root directory.

path.plugins: /path/to/plugins

Sets the plugin storage path; by default, the plugins folder under the es root directory.

bootstrap.mlockall: true

Set to true to lock the process memory. Once the JVM starts swapping, es efficiency drops sharply, so to keep it from swapping you can set the ES_MIN_MEM and ES_MAX_MEM environment variables to the same value and make sure the machine has enough memory to give es. The elasticsearch process must also be allowed to lock memory; on Linux, run `ulimit -l unlimited`.
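Putting that advice together, the environment before starting es might look like the following sketch (2g is an example value, not a recommendation from the article; size it to the machine):

```shell
# Sketch: give es a fixed heap so the JVM never swaps; 2g is an example
# value, size it to the machine and keep both variables equal
export ES_MIN_MEM=2g
export ES_MAX_MEM=2g
echo "heap: min=$ES_MIN_MEM max=$ES_MAX_MEM"

# As root, also allow the process to lock memory:
#   ulimit -l unlimited
```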

network.bind_host: 192.168.0.1

Sets the bound IP address, IPv4 or IPv6. The default is 0.0.0.0.

network.publish_host: 192.168.0.1

Sets the IP address this node publishes for other nodes to contact it. If unset, it is determined automatically; the value must be a real IP address.

network.host: 192.168.0.1

Sets both bind_host and publish_host at once.

transport.tcp.port: 9300

Sets the TCP port for inter-node communication. The default is 9300.

transport.tcp.compress: true

Sets whether to compress data during TCP transport. The default is false.

http.port: 9200

Sets the HTTP port for external services. The default is 9200.

http.max_content_length: 100mb

Sets the maximum size of an HTTP request body. The default is 100mb.

http.enabled: false

Whether to serve external requests over HTTP. The default is true.

gateway.type: local

Gateway type. The default is local, i.e. the local file system; it can also be set to a distributed file system, Hadoop HDFS, or Amazon S3. Other file system settings will be covered next time.

gateway.recover_after_nodes: 1

Starts data recovery once N nodes in the cluster are up. The default is 1.

gateway.recover_after_time: 5m

Sets the timeout of the initial data recovery process. The default is 5 minutes.

gateway.expected_nodes: 2

Sets the number of nodes expected in the cluster. The default is 2. Once that many nodes have started, data recovery begins immediately.

cluster.routing.allocation.node_initial_primaries_recoveries: 4

Number of concurrent recovery threads during initial data recovery. The default is 4.

cluster.routing.allocation.node_concurrent_recoveries: 2

Number of concurrent recovery threads when nodes are added or removed or shards are rebalanced. The default is 2.

indices.recovery.max_size_per_sec: 0

Limits the bandwidth used during data recovery, e.g. 100mb. The default is 0, meaning unlimited.

indices.recovery.concurrent_streams: 5

Limits the maximum number of concurrent streams opened when recovering data from other shards. The default is 5.

discovery.zen.minimum_master_nodes: 1

Sets the number of master-eligible nodes a node must see for the cluster to operate. The default is 1. For large clusters, a larger value (2-4) is advisable.

discovery.zen.ping.timeout: 3s

Sets the ping timeout used when discovering other nodes in the cluster. The default is 3 seconds. On a poor network, raise it to prevent discovery errors.

discovery.zen.ping.multicast.enabled: false

Sets whether node discovery uses multicast. The default is true.

discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]

Sets the initial list of master nodes in the cluster; new nodes use this list to discover and join the cluster.
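For minimum_master_nodes above, a common rule of thumb (a convention for avoiding split-brain, not stated in the article) is a quorum of master-eligible nodes, (N / 2) + 1:

```shell
# Quorum rule of thumb for minimum_master_nodes: (N / 2) + 1
# (a convention for avoiding split-brain; not from the article)
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 1 2 3 5; do
  echo "master-eligible=$n  minimum_master_nodes=$(quorum $n)"
done
```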

Below are the slow-log settings for queries:

index.search.slowlog.level: TRACE

index.search.slowlog.threshold.query.warn: 10s

index.search.slowlog.threshold.query.info: 5s

index.search.slowlog.threshold.query.debug: 2s

index.search.slowlog.threshold.query.trace: 500ms

index.search.slowlog.threshold.fetch.warn: 1s

index.search.slowlog.threshold.fetch.info: 800ms

index.search.slowlog.threshold.fetch.debug: 500ms

index.search.slowlog.threshold.fetch.trace: 200ms
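The query thresholds work as a ladder: a query is logged at the highest level whose threshold it exceeds. A sketch of that mapping, using the query thresholds above (durations in milliseconds):

```shell
# Sketch: how the query thresholds above map a query duration (in ms)
# to a slow-log level
classify() {
  if   [ "$1" -ge 10000 ]; then echo warn
  elif [ "$1" -ge 5000  ]; then echo info
  elif [ "$1" -ge 2000  ]; then echo debug
  elif [ "$1" -ge 500   ]; then echo trace
  else echo "not logged"
  fi
}

echo "6000ms -> $(classify 6000)"   # between 5s and 10s => info
echo "300ms  -> $(classify 300)"    # below 500ms => not logged
```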
