ELK Log Server for fast setup and collection of Nginx logs


Today we set up ELK, the open source real-time log analysis platform. ELK consists of three open source tools: Elasticsearch, Logstash, and Kibana. Official website: https://www.elastic.co

The three components are:

Elasticsearch is an open source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replication, a RESTful interface, multiple data sources, automatic search load balancing, and more.


Logstash is a fully open source tool that collects, parses, and stores your logs for later use (for example, searching).


Kibana is also an open source and free tool. It provides a friendly web interface for the logs gathered by Logstash and stored in Elasticsearch, helping you summarize, analyze, and search important log data.

System        Software / Host              IP                Description
CentOS 6.4    Elasticsearch (test5)        192.168.48.133    Stores and searches logs
CentOS 6.4    Elasticsearch (test4)        192.168.48.131    Stores and searches logs
CentOS 6.4    Logstash, Nginx (test1)      192.168.48.129    Collects logs and sends them to the nodes above
CentOS 6.4    Kibana, Nginx (test2)        192.168.48.130    Display front end

Schematic diagram of the architecture: Logstash on test1 collects the Nginx access logs and ships them to the Elasticsearch nodes test4 and test5; Kibana on test2 reads from Elasticsearch and provides the web front end.

Install elasticsearch-2.3.3.rpm on test5 and test4, along with Java 1.8. The steps are as follows:

yum remove java-1.7.0-openjdk
rpm -ivh jdk-8u91-linux-x64.rpm
yum localinstall elasticsearch-2.3.3.rpm
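Before moving on, it is worth confirming that the JDK and the Elasticsearch package actually landed (a quick hedged check; the exact version strings depend on your packages):

java -version                    # should report Java 1.8.0_91
rpm -qa | grep elasticsearch     # should list elasticsearch-2.3.3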



Elasticsearch is configured under the /etc/elasticsearch directory, which contains: elasticsearch.yml, elasticsearch.yml.bak, logging.yml, scripts

Edit elasticsearch.yml

Modify the following configuration

cluster.name: myelk            # Name of the cluster; every node in the cluster must use the same name.

node.name: test5               # Name of this node; each node must have a different name.

path.data: /path/to/data       # Where the data is stored; in production this should sit on its own large partition.

path.logs: /path/to/logs       # Log directory.

bootstrap.mlockall: true       # Allocate enough memory at startup and lock it; performance is much better. Not enabled in this test.

network.host: 0.0.0.0          # IP address to listen on; 0.0.0.0 means all addresses.

http.port: 9200                # Port to listen on.

discovery.zen.ping.unicast.hosts: ["192.168.48.133", "192.168.48.131"]   # IPs of the cluster members; without this, each node works on its own instead of joining the cluster.


Create the data and log directories and give them to the elasticsearch user:

mkdir -pv /path/to/{data,logs}
chown -R elasticsearch:elasticsearch /path/to

Start the service with service elasticsearch start and check that the listening ports are up.


Access port 9200 to check the service.
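A quick way to do this from the shell (a hedged sketch; the exact JSON returned depends on the build):

curl http://192.168.48.133:9200/                          # basic node info: name, cluster_name, version
curl http://192.168.48.133:9200/_cluster/health?pretty    # status should turn "green" once both nodes have joined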



The other node is configured the same way as above; just note that its IP and node.name must be different.

After installing the head and kopf plug-ins, they can be reached at ip:9200/_plugin/head and ip:9200/_plugin/kopf.

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head



Install the Nginx service on test1; its logs are what we will collect.

yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel
./configure --prefix=/usr/local/nginx --with-pcre --with-openssl= --with-zlib=
make && make install

The log is at /usr/local/nginx/logs/access.log
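To have something to collect, start Nginx and generate a few requests; a hedged sketch, assuming the install prefix used above:

/usr/local/nginx/sbin/nginx                    # start nginx
curl -s http://192.168.48.129/ > /dev/null     # generate an access-log entry
tail -n 3 /usr/local/nginx/logs/access.log     # confirm entries are being written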

Then install logstash-2.3.3-1.noarch.rpm on test1.

yum remove java-1.7.0-openjdk
rpm -ivh jdk-8u91-linux-x64.rpm
rpm -ivh logstash-2.3.3-1.noarch.rpm
/etc/init.d/logstash start      # start the service
/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout { codec => "rubydebug" } }'

The last command checks the environment: run it to verify that Logstash works; a prompt for input appears once startup is complete.
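Typing a line at that prompt should echo it back as a structured event, roughly like the following (a hedged sketch; the timestamp and host values will differ on your machine):

hello world
{
       "message" => "hello world",
      "@version" => "1",
    "@timestamp" => "2016-07-03T12:00:00.000Z",
          "host" => "test1"
}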



Then run:

/opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.48.131:9200"] index => "test" } }'

Whatever you type is sent to Elasticsearch on 192.168.48.131, and an index directory named test is created under /path/to/data/myelk/nodes/0/indices. Type a few lines, then look in that directory on 192.168.48.131; if documents are there, everything is working.
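You can also confirm it through the REST API instead of the data directory (a hedged sketch):

curl 'http://192.168.48.131:9200/_cat/indices?v'          # the "test" index should be listed
curl 'http://192.168.48.131:9200/test/_search?pretty'     # the lines you typed should show up as documents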


Next, create a configuration file ending in .conf under /etc/logstash/conf.d. Mine, for collecting Nginx logs, is called nginx.conf; its content is as follows:

###########################################################################################

input {
    file {
        type => "accesslog"
        path => "/usr/local/nginx/logs/access.log"    # location of the log
        start_position => "beginning"                 # where to start reading the file; the default is "end"
    }
}

output {
    if [type] == "accesslog" {
        elasticsearch {
            hosts => ["192.168.0.87"]                 # address of Elasticsearch
            index => "nginx-access-%{+YYYY.MM.dd}"    # index to create; like the earlier "test" index, but with a date suffix from the variable
        }
    }
}

##########################################################################################

Then run /etc/init.d/logstash configtest to check that the configuration is valid.
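The same check can be run directly against the file with the Logstash binary (a hedged sketch):

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --configtest    # should report that the configuration is OK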


Check whether the process has started.
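A hedged sketch of that check:

/etc/init.d/logstash status    # init-script status
ps aux | grep logstash         # a Logstash java process should be listed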


Then check in Elasticsearch whether the index has been created; send a few more requests to the Nginx service to generate log entries.

If no index appears, modify the init script:

vi /etc/init.d/logstash

######################################################################################################

LS_USER=root                  # change this to root, or give the access log read permission so logstash can read it; restart the service and the index will be generated
LS_GROUP=root
LS_HOME=/var/lib/logstash
LS_HEAP_SIZE="1g"
LS_LOG_DIR=/var/log/logstash
LS_LOG_FILE="${LS_LOG_DIR}/$name.log"
LS_CONF_DIR=/etc/logstash/conf.d
LS_OPEN_FILES=16384
LS_NICE=19
KILL_ON_STOP_TIMEOUT=${KILL_ON_STOP_TIMEOUT-0}   # default value is zero, but can be updated by user request
LS_OPTS=""

#######################################################################################################
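As an alternative to running Logstash as root, the access log itself can be made readable by the logstash user (a hedged sketch; adjust the paths to your install):

chmod o+rx /usr/local/nginx/logs               # let other users enter the log directory
chmod o+r /usr/local/nginx/logs/access.log     # let other users read the access log
/etc/init.d/logstash restart                   # restart so the change takes effect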


Check Logstash's log (under /var/log/logstash); a clean startup message there means collection is working.


With the above in place, install Kibana on test2.

rpm -ivh kibana-4.5.1-1.x86_64.rpm

Edit the configuration file /opt/kibana/config/kibana.yml; only the following items need to be modified.

#######################################################################################################

server.port: 5601                                   # port
server.host: "0.0.0.0"                              # address to listen on
elasticsearch.url: "http://192.168.48.131:9200"     # Elasticsearch address

######################################################################################################



Start the service with /etc/init.d/kibana start.

Visit Kibana at http://ip:5601
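To confirm Kibana is up before opening a browser (a hedged sketch):

netstat -ntlp | grep 5601              # Kibana should be listening on port 5601
curl -I http://192.168.48.130:5601/    # should return an HTTP response from Kibana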



Add an index pattern for display; it is the nginx-access-2016.07.03 index defined above.



Direct access to Kibana is not very secure, so we put Nginx in front of it as a reverse proxy and require a username and password.


First install Nginx on the Kibana server (the installation itself is not covered again here).

Nginx configuration:

#################################################################################

server
{
    listen 80;
    server_name localhost;
    auth_basic "Restricted Access";
    auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;    # user and password file

    location / {
        proxy_pass http://localhost:5601;       # proxy Kibana's port 5601 so it can be reached directly on port 80
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

####################################################################################

Create the user and password file htpasswd.users.

The htpasswd command comes from the httpd-tools package; install it first.

htpasswd -bc /usr/local/nginx/conf/htpasswd.users admin paswdadmin    # the first argument is the user, the second is the password
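After reloading Nginx you can verify the protection from the command line (a hedged sketch, using the user and password created above):

/usr/local/nginx/sbin/nginx -t                         # check the nginx configuration
/usr/local/nginx/sbin/nginx -s reload                  # reload to apply it
curl -I http://192.168.48.130/                         # should return 401 Unauthorized without credentials
curl -I -u admin:paswdadmin http://192.168.48.130/     # should return 200 once authenticated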


##################################################################################




After that, access requires the username and password, and goes through port 80.





That's it; thank you for reading.

