First, system and software versions
System version: CentOS 6.5 64-bit
Software versions: jdk-8u60-linux-x64.tar.gz, elasticsearch-2.4.2.tar.gz, logstash-2.4.1.tar.gz, kibana-4.6.3-linux-x86_64.tar.gz
Second, install the Java environment
1) Extract the JDK package:
tar -zxvf jdk-8u60-linux-x64.tar.gz
2) At the end of the /etc/profile file, add the following lines to set the environment variables:
export JAVA_HOME=/data/elk/jdk1.8.0_60
export JAVA_BIN=/data/elk/jdk1.8.0_60/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
3) Then reload the environment variables and check whether the Java environment was installed successfully:
# source /etc/profile
# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
Third, install Elasticsearch
1) Download the appropriate version of Elasticsearch from the official website (here elasticsearch-2.4.2.tar.gz), then extract it after the download finishes:
wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.2/elasticsearch-2.4.2.tar.gz
tar -zxvf elasticsearch-2.4.2.tar.gz
2) Since version 2.x, Elasticsearch must be started as an ordinary user rather than root. Here an elasticsearch user has already been created for this purpose, so change the owner and group of the elasticsearch directory to elasticsearch:
chown -R elasticsearch.elasticsearch elasticsearch
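If the elasticsearch user does not exist yet, it can be created first. A minimal sketch (the user and group names follow the convention used above; adjust to taste):

```shell
# Create a dedicated group and user for running Elasticsearch
groupadd elasticsearch
useradd -g elasticsearch elasticsearch
```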
3) Two lines of the elasticsearch/config/elasticsearch.yml configuration file need to be modified (lines 55 and 58; both are commented out with # by default):
grep -v "#" elasticsearch/config/elasticsearch.yml | grep -v "^$"
network.host: 0.0.0.0
http.port: 9200
4) Install two commonly used Elasticsearch plugins: bigdesk and head.
bigdesk: a cluster monitoring tool for Elasticsearch that shows the various states of the ES cluster, such as CPU and memory usage, index data, search activity, number of HTTP connections, and so on.
head: elasticsearch-head is a web front end for operating and managing the cluster, allowing simple point-and-click administration. It can be integrated into ES via the plugin mechanism (the preferred way), or installed as a standalone web app. To install plugins, go to the Elasticsearch bin directory and use the plugin command.
cd /data/elk/elasticsearch
./bin/plugin install mobz/elasticsearch-head        # install the elasticsearch-head plugin
./bin/plugin install lukas-vlcek/bigdesk/2.4.0      # install the bigdesk plugin
5) Start Elasticsearch.
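A minimal start sequence, assuming the install path used above (the -d flag of the 2.x startup script daemonizes the process):

```shell
# Switch to the non-root user created earlier, then start Elasticsearch in the background
su - elasticsearch
cd /data/elk/elasticsearch
./bin/elasticsearch -d
```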
$ ps -ef | grep elasticsearch
playcrab 30161     1  6 Jan09 ?  01:20:03 /data/elk/jdk1.8.0_60/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/data/elk/elasticsearch -cp /data/elk/elasticsearch/lib/elasticsearch-2.4.2.jar:/data/elk/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start
6) Test whether it can be accessed normally. If the browser (or curl) returns output similar to the following, Elasticsearch started successfully:
curl 127.0.0.1:9200
{
  "name" : "Bast",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "V-DPTV6PQO-WBIAUWH80SW",
  "version" : {
    "number" : "2.4.2",
    "build_hash" : "161c65a337d4b422ac0c805f284565cf2014bb84",
    "build_timestamp" : "2016-11-17T11:51:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
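Beyond the banner response, overall cluster state can be checked with the standard _cluster/health endpoint (the exact values returned depend on your setup):

```shell
# A green or yellow "status" field means the node is serving requests
curl '127.0.0.1:9200/_cluster/health?pretty'
```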
Fourth, install Logstash.
1). Download the logstash-2.4.1.tar.gz from the official website and unzip it.
wget https://download.elastic.co/logstash/logstash/logstash-2.4.1.tar.gz
tar -zxvf logstash-2.4.1.tar.gz
2). Configure the Logstash file. Nginx logs arrive on UDP port 514 and PHP logs on UDP port 515; both are forwarded into the local Elasticsearch database.
cd /data/elk/logstash
mkdir conf
cd conf
cat test.conf
input {
    udp {
        port => 514
        type => "nginx"
    }
    udp {
        port => 515
        type => "php"
    }
}
output {
    if [type] == "nginx" {
        elasticsearch {
            hosts => "localhost:9200"
            index => "%{+YYYY.MM.dd}_nginx_log"
        }
    }
    if [type] == "php" {
        elasticsearch {
            hosts => "localhost:9200"
            index => "%{+YYYY.MM.dd}_php_error_log"
        }
    }
    stdout { codec => rubydebug }
}
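Before running Logstash against this file, its syntax can be validated first. A sketch, assuming the install path used above:

```shell
cd /data/elk/logstash
# -t (--configtest) checks the configuration syntax without starting the pipeline
./bin/logstash -f conf/test.conf -t
# then run it in the background
nohup ./bin/logstash -f conf/test.conf &
```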
Fifth, install Kibana to provide the web interface.
1). Download the kibana-4.6.3-linux-x86_64.tar.gz from the official website and unzip it.
wget https://download.elastic.co/kibana/kibana/kibana-4.6.3-linux-x86_64.tar.gz
tar -zxvf kibana-4.6.3-linux-x86_64.tar.gz
2). Modify the Kibana configuration file kibana.yml.
# grep -v "#" kibana.yml | grep -v "^$"
server.port: 8000
server.host: "0.0.0.0"
elasticsearch.url: "http://127.0.0.1:9200"
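Kibana can first be started by hand to confirm the configuration works, before setting up anything more permanent. A sketch (the log path here is an assumption):

```shell
cd /data/elk/kibana
# run in the background, redirecting output to a log file
nohup ./bin/kibana > /var/log/kibana.log 2>&1 &
```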
3). Because the Kibana process often dies for no apparent reason, I wrote a shell init script to start and stop it.
touch /etc/init.d/kibana
chmod +x /etc/init.d/kibana
# cat /etc/init.d/kibana
#!/bin/bash
### BEGIN INIT INFO
# Provides:          kibana
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs kibana daemon
# Description:       Runs the kibana daemon as a non-root user
### END INIT INFO

# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"

# Source function library
. /etc/rc.d/init.d/functions

# Configure location of kibana bin
KIBANA_BIN=/data/home/user00/playcrab/elk/kibana/bin

# PID info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME

# Configure user to run daemon process
DAEMON_USER=root

# Configure logging location
KIBANA_LOG=/var/log/kibana.log

# Begin script
RETVAL=0

if [ `id -u` -ne 0 ]; then
    echo "You need root privileges to run this script"
    exit 1
fi

start() {
    echo -n "Starting $DESC : "
    pid=`pidofproc -p $PID_FILE kibana`
    if [ -n "$pid" ]; then
        echo "Already running."
        exit 0
    else
        # Start daemon
        if [ ! -d "$PID_FOLDER" ]; then
            mkdir $PID_FOLDER
        fi
        daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
        sleep 2
        pidofproc node > $PID_FILE
        RETVAL=$?
        [ $RETVAL -eq 0 ] && success || failure
        echo
        [ $RETVAL = 0 ] && touch $LOCK_FILE
        return $RETVAL
    fi
}

reload() {
    echo "Reload command is not implemented for this service."
    return $RETVAL
}

stop() {
    echo -n "Stopping $DESC : "
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status -p $PID_FILE $DAEMON
    RETVAL=$?
    ;;
  restart)
    stop
    start
    ;;
  reload)
    reload
    ;;
  *)
    # Invalid arguments, print the following message.
    echo "Usage: $0 {start|stop|status|restart}" >&2
    exit 2
    ;;
esac
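With the init script in place, Kibana can be managed like any other SysV-style service on CentOS 6:

```shell
# Start, check, and stop Kibana through the init script
/etc/init.d/kibana start
/etc/init.d/kibana status
/etc/init.d/kibana stop
```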
The ELK server-side environment is now fully installed, but rsyslog still needs to be configured on the log servers: rsyslog ships each log server's logs to ELK, Logstash filters them and stores them in Elasticsearch, and finally Kibana displays them.
{rsyslog} → {Logstash → Elasticsearch → Kibana}
Configuration of the log server:
# cd /etc/rsyslog.d/
# cat test.com.conf
# "im" stands for input module; imfile reads log files
$ModLoad imfile
# the log file to read
$InputFileName /data/usr/logs/nginx/test.com.error.log
# the tag must be unique; different applications on the same host must use different tags, otherwise a newly defined tag will not take effect
$InputFileTag test_nginx_error:
# state file recording the read offset; must be unique, rsyslog uses it to track upload progress, otherwise things get confused
$InputFileStateFile test_nginx_error
$InputRunFileMonitor

$InputFileName /data/usr/logs/nginx/test.com.access.log
$InputFileTag test_nginx_access:
$InputFileStateFile test_nginx_access
$InputRunFileMonitor
# poll and send every ten seconds
$InputFilePollInterval 10

# forward the logs to the ELK server; @ means UDP transport, @@ means TCP
if $programname == 'test_nginx_error' then @10.23.0.24:514
if $programname == 'test_nginx_error' then ~
if $programname == 'test_nginx_access' then @10.23.0.24:514
if $programname == 'test_nginx_access' then ~
# cat php.error.log.conf
$ModLoad imfile
$InputFileName /data/usr/logs/php-fpm/php-fpm.log
$InputFileTag php-fpm_log:
$InputFileStateFile state-php-fpm_log
$InputRunFileMonitor
$InputFilePollInterval 10
if $programname == 'php-fpm_log' then @10.23.0.24:515
if $programname == 'php-fpm_log' then ~
After the configuration is complete, you need to restart Rsyslog.
/etc/init.d/rsyslog restart
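To confirm the pipeline works end to end, a test line can be appended to a monitored file on the log server, and the matching index checked on the ELK side after the poll interval has elapsed (the index name pattern follows the Logstash config above):

```shell
# On the log server: append a line to a file rsyslog is watching
echo "test error $(date)" >> /data/usr/logs/nginx/test.com.error.log

# On the ELK server: list indices and look for the nginx log index
curl 'localhost:9200/_cat/indices?v' | grep nginx_log
```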
In addition to restarting the rsyslog service, you also need to check the firewall on the ELK server. If you use a cloud host with a security group, open UDP ports 514 and 515 so the log servers can reach the ELK server on those ports.
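On CentOS 6 with iptables (rather than a cloud security group), the two UDP ports can be opened like this, a sketch assuming the default iptables service is in use:

```shell
# Allow inbound syslog traffic on UDP 514 and 515
iptables -I INPUT -p udp --dport 514 -j ACCEPT
iptables -I INPUT -p udp --dport 515 -j ACCEPT
# persist the rules across reboots
service iptables save
```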
This article is from the "Years in the passage, shining still in" blog. Reprinting declined!
Build Elk Server to display Nginx and PHP logs via Rsyslog