Build a simple ELK log collection application from scratch

Many blogs already explain ELK theory and architecture diagrams in detail, so this article mainly records a simple ELK setup and its application.

Preparations before installation

1. Environment Description:

IP (OS)                 Host Name    Deployed Services
10.0.0.101 (CentOS 7)   test101      JDK, elasticsearch, logstash, kibana, filebeat (filebeat is used to test collecting the messages logs of the test101 server itself)
10.0.0.102 (CentOS 7)   test102      nginx, filebeat (filebeat is used to test collecting the nginx logs of the test102 server)

2. Installation Package preparation:
jdk-8u151-linux-x64.tar.gz
elasticsearch-6.4.2.tar.gz
kibana-6.4.2-linux-x86_64.tar.gz
logstash-6.4.2.tar.gz
ELK official website: https://www.elastic.co/cn/downloads

Deploy the ELK server

Deploy JDK, elasticsearch, logstash, and kibana on the test101 host, which serves as the ELK server. Upload the four installation packages above to the /root directory of the test101 server.

1. Deploy JDK
# tar xf jdk-8u151-linux-x64.tar.gz -C /usr/local/
# echo -e "export JAVA_HOME=/usr/local/jdk1.8.0_151\nexport JRE_HOME=\${JAVA_HOME}/jre\nexport CLASSPATH=.:\${JAVA_HOME}/lib:\${JRE_HOME}/lib\nexport PATH=\${JAVA_HOME}/bin:\$PATH" >> /etc/profile
# source /etc/profile
# java -version    # or run the jps command to verify

Note: If you accidentally break /etc/profile, you can refer to the blog post "The /etc/profile file has been changed and all commands cannot be executed. What should I do?"

2. Create a dedicated elk user

The elk user is used to start elasticsearch, and it is also the account configured in the filebeat configuration file when collecting logs.

# useradd elk; echo 12345678 | passwd elk --stdin    # create the elk user and set its password to 12345678
3. Deploy elasticsearch

3.1 unzip the installation package:

# tar xf elasticsearch-6.4.2.tar.gz -C /usr/local/

3.2 modify the configuration file /usr/local/elasticsearch-6.4.2/config/elasticsearch.yml as follows:

[root@test101 config]# egrep -v "^#|^$" /usr/local/elasticsearch-6.4.2/config/elasticsearch.yml
cluster.name: elk
node.name: node-1
path.data: /opt/elk/es_data
path.logs: /opt/elk/es_logs
network.host: 10.0.0.101
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.0.0.101:9300"]
discovery.zen.minimum_master_nodes: 1
[root@test101 config]#

3.3 modify the configuration files /etc/security/limits.conf and /etc/sysctl.conf as follows:

# echo -e "* soft nofile 65536\n* hard nofile 131072\n* soft nproc 2048\n* hard nproc 4096\n" >> /etc/security/limits.conf
# echo "vm.max_map_count=655360" >> /etc/sysctl.conf
# sysctl -p
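
These limits are easy to get wrong, so it is worth verifying them before starting elasticsearch (a quick sanity check, not part of the original steps; limits.conf only applies to new sessions, so check from a fresh login):

# sysctl vm.max_map_count             # should print vm.max_map_count = 655360
# su - elk -c "ulimit -n"             # should print 65536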

3.4 create data and log directories and change their owner to the elk user:

# mkdir /opt/elk/{es_data,es_logs} -p
# chown elk:elk -R /opt/elk/
# chown elk:elk -R /usr/local/elasticsearch-6.4.2/

3.5 start elasticsearch:

# cd /usr/local/elasticsearch-6.4.2/bin/
# su elk
$ nohup /usr/local/elasticsearch-6.4.2/bin/elasticsearch >/dev/null 2>&1 &

3.6 check processes and ports:

[root@test101 ~]# ss -ntlup | grep -E "9200|9300"
tcp    LISTEN     0      128       ::ffff:10.0.0.101:9200                 :::*                   users:(("java",pid=6001,fd=193))
tcp    LISTEN     0      128       ::ffff:10.0.0.101:9300                 :::*                   users:(("java",pid=6001,fd=186))
[root@test101 ~]#

Note:
If the elasticsearch service fails to start, check the permissions on the elasticsearch directories and the server's available memory.
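
You can also confirm the node is healthy through the standard cluster health API (an extra check beyond the original steps; for this single-node setup the status should be green or yellow):

# curl http://10.0.0.101:9200/_cluster/health?pretty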

4. Deploy logstash

4.1 unzip the installation package:

# tar xf logstash-6.4.2.tar.gz -C /usr/local/

4.2 modify the configuration file /usr/local/logstash-6.4.2/config/logstash.yml as follows:

[root@test101 logstash-6.4.2]# egrep -v "^#|^$" /usr/local/logstash-6.4.2/config/logstash.yml
path.data: /opt/elk/logstash_data
http.host: "10.0.0.101"
path.logs: /opt/elk/logstash_logs
path.config: /usr/local/logstash-6.4.2/conf.d    # this line is not in the default file; just add it at the end yourself
[root@test101 logstash-6.4.2]#

4.3 create the conf.d directory and add the log processing file syslog.conf:

[root@test101 conf.d]# mkdir /usr/local/logstash-6.4.2/conf.d
[root@test101 conf.d]# cat /usr/local/logstash-6.4.2/conf.d/syslog.conf
input {
    # receive events from the filebeat client
    beats {
        port => 5044
    }
}
# filter section (left empty here)
filter {
}
output {
    # standard output for debugging
    stdout {
        codec => rubydebug {}
    }
    # output to elasticsearch
    elasticsearch {
        hosts => ["http://10.0.0.101:9200"]
        index => "%{type}-%{+YYYY.MM.dd}"
    }
}
[root@test101 conf.d]#

4.4 create data and log directories and grant them to Elk users:

# mkdir /opt/elk/{logstash_data,logstash_logs} -p# chown -R elk:elk /opt/elk/# chown -R elk:elk /usr/local/logstash-6.4.2/

4.5 test the configuration before starting the service:

[root@test101 conf.d]# /usr/local/logstash-6.4.2/bin/logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf --config.test_and_exit    # this step may take a while to respond
Sending Logstash logs to /opt/elk/logstash_logs which is now configured via log4j2.properties
[2018-11-01T09:49:14,299][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/elk/logstash_data/queue"}
[2018-11-01T09:49:14,352][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/elk/logstash_data/dead_letter_queue"}
[2018-11-01T09:49:16,547][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-11-01T09:49:26,510][INFO ][logstash.runner] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@test101 conf.d]#

4.6 officially launch the service:

# nohup /usr/local/logstash-6.4.2/bin/logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf >/dev/null 2>&1 &    # start in the background

4.7 view processes and ports:

[root@test101 local]# ps -ef | grep logstash
root       6325    926 17 10:08 pts/0    00:01:55 /usr/local/jdk1.8.0_151/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/local/logstash-6.4.2/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/guava-22.0.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/janino-3.0.8.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/logstash-core.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.core.filesystem-1.3.100.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/local/logstash-6.4.2/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf
root       6430    926  0 10:19 pts/0    00:00:00 grep --color=auto logstash
[root@test101 local]# netstat -tlunp | grep 6325
tcp6       0      0 :::5044                 :::*                    LISTEN      6325/java
tcp6       0      0 10.0.0.101:9600         :::*                    LISTEN      6325/java
[root@test101 local]#
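
As an extra check (not part of the original steps), logstash's monitoring API on port 9600 can confirm the node is responding:

# curl http://10.0.0.101:9600/?pretty
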
5. Deploy kibana

5.1 unzip the installation package:

# tar xf kibana-6.4.2-linux-x86_64.tar.gz  -C /usr/local/

5.2 modify the configuration file /usr/local/kibana-6.4.2-linux-x86_64/config/kibana.yml as follows:

[root@test101 ~]# egrep -v "^#|^$" /usr/local/kibana-6.4.2-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "10.0.0.101"
elasticsearch.url: "http://10.0.0.101:9200"
kibana.index: ".kibana"
[root@test101 ~]#

5.3 change the owner of the kibana directory to the elk user:

# chown elk:elk -R /usr/local/kibana-6.4.2-linux-x86_64/

5.4 start kibana:

# nohup  /usr/local/kibana-6.4.2-linux-x86_64/bin/kibana >/dev/null 2>&1 &

5.5 view processes and ports:

[root@test101 local]# ps -ef | grep kibana
root       6381    926 28 10:16 pts/0    00:00:53 /usr/local/kibana-6.4.2-linux-x86_64/bin/../node/bin/node --no-warnings /usr/local/kibana-6.4.2-linux-x86_64/bin/../src/cli
root       6432    926  0 10:19 pts/0    00:00:00 grep --color=auto kibana
[root@test101 local]# netstat -tlunp | grep 6381
tcp        0      0 10.0.0.101:5601         0.0.0.0:*               LISTEN      6381/node
[root@test101 local]#
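
Kibana also exposes a status endpoint that can be checked from the command line before opening a browser (an extra verification, not in the original steps; it returns a JSON document describing the service state):

# curl http://10.0.0.101:5601/api/status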

5.6 visit http://10.0.0.101:5601 to access the kibana interface:

So far, the entire ELK server side has been set up.

ELK log collection application

With the server side deployed, the next step is to configure log collection; this is where filebeat comes in.

Application 1: Collect the messages and secure logs of the ELK machine itself (test101)

1. On the kibana homepage, click "Add log data":

2. Select system log:

3. Select RPM. Kibana then displays the steps for adding logs (but there is a pitfall in those steps; refer to the configuration steps below instead):
3.1 install the plug-in under es on the test101 Server:

# cd /usr/local/elasticsearch-6.4.2/bin/
# ./elasticsearch-plugin install ingest-geoip
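
You can confirm the installation with the standard list subcommand (my addition; note that ingest plug-ins are only loaded after elasticsearch is restarted, so restart the service if it was already running):

# ./elasticsearch-plugin list    # should list ingest-geoip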

3.2 download and install filebeat on test101 Server:

# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-x86_64.rpm
# rpm -vi filebeat-6.4.2-x86_64.rpm

3.3 configure filebeat on the test101 server by modifying the following items in /etc/filebeat/filebeat.yml:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  # Note: the default here is false. The kibana instructions do not mention
  # changing it, but if it is not set to true, no log content will show up
  # in the kibana interface.
  enabled: true
  # Paths of the logs to collect; here the messages and secure logs:
  paths:
    - /var/log/messages*
    - /var/log/secure*

#============================== Kibana ===================================
setup.kibana:
  host: "10.0.0.101:5601"

#-------------------------- Elasticsearch output -------------------------
output.elasticsearch:
  hosts: ["10.0.0.101:9200"]
  username: "elk"
  password: "12345678"
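
Before starting the service, the configuration can be validated with filebeat's built-in test subcommands (a sanity check I would add here; both subcommands exist in filebeat 6.x):

# filebeat test config     # checks the filebeat.yml syntax
# filebeat test output     # verifies the connection to elasticsearch at 10.0.0.101:9200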

3.4 run the following command on the test101 server to enable /etc/filebeat/modules.d/system.yml:

# filebeat modules enable system
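
To confirm the module is now active, list the modules (standard filebeat subcommand; "system" should appear under Enabled):

# filebeat modules list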

3.5 start filebeat on the test101 Server:

# filebeat setup
# service filebeat start

3.6 then return to kibana's Discover interface and search for the keywords "messages" and "secure" to see the relevant logs:
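
If nothing shows up in Discover, it can help to confirm on the server that filebeat's index was actually created in elasticsearch (a quick check via the standard _cat API):

# curl "http://10.0.0.101:9200/_cat/indices?v" | grep filebeat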

Application 2: Collect nginx logs of the 10.0.0.102 (test102) Server

In Application 1, we collected logs from the ELK server itself. Now we collect logs from test102.

1. Install nginx on test102:

# yum -y install nginx
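
To have something to collect, start nginx and generate a little traffic against it (my addition, not in the original steps; each request appends a line to the access log):

# systemctl start nginx
# curl http://10.0.0.102/    # writes an entry to /var/log/nginx/access.log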

2. As in Application 1, on the kibana homepage click "Add log data", select nginx logs, and find the installation procedure:

3. Select RPM. Kibana shows the steps for adding logs (refer to the configuration steps below):
3.1 install the plug-in under es on the test101 Server:

# cd /usr/local/elasticsearch-6.4.2/bin/
# ./elasticsearch-plugin install ingest-geoip      # already installed in Application 1, so this can be skipped
# ./elasticsearch-plugin install ingest-user-agent

======= The following are all performed on the 10.0.0.102 (test102) server ========
3.2 download and install filebeat on test102 Server:

# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-x86_64.rpm
# rpm -vi filebeat-6.4.2-x86_64.rpm

3.3 configure filebeat on the test102 server by modifying the following items in /etc/filebeat/filebeat.yml:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  # Note: as in Application 1, the default is false; if it is not set to
  # true, no log content will show up in the kibana interface.
  enabled: true
  # Paths of the logs to collect; here all log files under /var/log/nginx/,
  # which includes access.log and error.log:
  paths:
    - /var/log/nginx/*

#============================== Kibana ===================================
setup.kibana:
  host: "10.0.0.101:5601"

#-------------------------- Elasticsearch output -------------------------
output.elasticsearch:
  hosts: ["10.0.0.101:9200"]
  username: "elk"
  password: "12345678"

3.4 run the following command on the test102 server to enable /etc/filebeat/modules.d/nginx.yml:

# filebeat modules enable nginx

After the command runs, the file contains the following:

[root@test102 ~]# cat /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Error logs
  error:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
[root@test102 ~]#

3.5 start filebeat on the test102 Server:

# filebeat setup
# service filebeat start
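
Once filebeat has been running for a moment, you can verify from any host that nginx log events actually reached elasticsearch (a quick query against the standard _search API; the filebeat-* pattern matches the index filebeat creates):

# curl "http://10.0.0.101:9200/filebeat-*/_search?q=nginx&size=1&pretty"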

3.6 then return to kibana's Discover interface to view the relevant logs:

Note:
Some articles also install the elasticsearch-head plug-in.
