Linux: Building an ELK Log Collection System with Filebeat + Redis + Logstash + Elasticsearch


Deploying an ELK Log Collection System on CentOS 7

I. ELK Overview

ELK is an abbreviation for a collection of open-source software comprising Elasticsearch, Logstash, and Kibana. ELK has developed rapidly in recent years and has become one of the most popular centralized logging solutions.

    • Elasticsearch: provides near real-time storage, search, and analysis of large volumes of data. In this project, all collected logs are stored in Elasticsearch.

    • Logstash: a data collection engine that supports dynamically acquiring data from a variety of sources, filtering, parsing, enriching, and normalizing it, and then shipping it to a destination of the user's choice.

    • Kibana: a data analysis and visualization platform that visually analyzes the data stored in Elasticsearch and presents it in charts and tables.

    • Filebeat: a lightweight, open-source log file data collector. Install Filebeat on the client whose data needs to be collected and point it at the log directory and log format; it can then collect the data quickly and send it to Logstash for parsing, or directly to Elasticsearch for storage.

    • Redis: a NoSQL key-value database that also serves as a lightweight message queue; it can buffer bursts of high-concurrency log traffic and decouple the components of the architecture.

The classic traditional ELK architecture

In the single-node architecture, Logstash acts as the log collector: it gathers data from the data source, filters and formats it, and then hands it to Elasticsearch for storage, while Kibana visualizes the logs.

The new ELK architecture

Filebeat is a lightweight log collector that consumes very few system resources; since its appearance it has quickly updated the original ELK architecture. Filebeat sends the collected data to Logstash for parsing and filtering; while the data is in transit between Filebeat and Logstash, SSL certificates can be used to strengthen security. The data is then sent on to Elasticsearch for storage, and Kibana visualizes and analyzes it.

II. Detailed Process for Building the New ELK Stack

Lab Environment:

Host    IP               Deployed components
1       192.168.3.206    Filebeat
2       192.168.3.205    Redis, Logstash, Elasticsearch, Kibana

Here are the installation packages required for the setup process:
https://pan.baidu.com/s/1w02WtUAqh9yX4TChyMLa5Q (password: G0P9)

    1. Deploy Filebeat on the client:

      yum -y install filebeat
      # Find where the package put its configuration files
      rpm -qc filebeat
    2. Modify the configuration file so that Filebeat delivers the logs into Redis:
      Note: here we collect the Eureka log of a Spring Cloud deployment; logs from other programs can be collected the same way.
vim /etc/filebeat/filebeat.yml
# Fields to modify:
#   enabled: true
#   paths:          path(s) to the program's log files
#   output.redis:   where to send the logs
#     hosts:        IP of the server running Redis
#     port:         Redis port
#     key:          key to write to in Redis
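
As a concrete illustration of those fields, here is a minimal sketch of the relevant filebeat.yml sections. This is an assumption-laden example, not the original author's file: the syntax follows the Filebeat 6.x form (older versions differ slightly), and the log path is a hypothetical stand-in for wherever the Eureka log actually lives.

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/eureka/eureka.log       # hypothetical path to the Eureka log
output.redis:
  hosts: ["192.168.3.205:6379"]        # IP and port of the Redis server
  key: "eureka-log"                    # key that Logstash will read from later
  datatype: "list"                     # push entries onto a Redis list (the default)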

    3. Install Redis from source:

Unzip the Redis package:

tar zxf redis-3.2.9.tar.gz -C /usr/local/src

To compile Redis:

cd /usr/local/src/redis-3.2.9
make && make install
ln -s /usr/local/src/redis-3.2.9 /usr/local/redis

Note: the Redis build can fail on systems missing build tools or locale settings, sometimes with odd errors; copying the error message into a search engine usually turns up an easy fix, so we won't go into detail here.

To modify a Redis configuration file:

vim /usr/local/redis/redis.conf
# Changes:
daemonize yes        # run in the background
timeout 120          # client idle timeout (seconds)
bind 0.0.0.0         # allow logins to Redis from any IP address
protected-mode no    # disable Redis protected mode; otherwise remote logins fail when no password is configured

Note: this is a demo; for a production ELK deployment it is recommended to enable the persistence mechanism so that data is not lost.

    4. Log in and test whether Redis accepts writes normally (see the sketch below):
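
The original screenshot is omitted; a minimal sketch of the test, assuming Redis was started with the configuration above (the test key is arbitrary):

/usr/local/bin/redis-server /usr/local/redis/redis.conf    # start Redis with our config; make install put the binary in /usr/local/bin
redis-cli -h 192.168.3.205                                 # connect to the server
192.168.3.205:6379> set test-key "hello"
OK
192.168.3.205:6379> get test-key
"hello"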

    5. Start Filebeat and check whether Redis receives data:

Start Filebeat:

systemctl start filebeat
    6. Check in Redis whether the data has arrived:

      # In redis-cli, run:
      keys *                   # list all keys; this is a slow query, so do not run it if Redis is serving heavy production traffic
      lrange eureka-log 0 -1   # dump every entry under the key; avoid this if Filebeat has been running for a long time

    7. Install JDK 1.8:

Unzip the JDK installation package and create a symlink:

tar zxf /usr/local/src/jdk-8u131-linux-x64.tar.gz -C /usr/local/
ln -s /usr/local/jdk1.8.0_131/ /usr/local/jdk

To configure environment variables:

vim /etc/profile
# Append:
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=/usr/local/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Reload the environment variables:

source /etc/profile

Check whether the JDK installed successfully:

java -version
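
For JDK 8u131 the output should resemble the following (the version and build strings shown here are illustrative):

java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)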

    8. Install Elasticsearch:

Unzip the installation package and rename it:

unzip elasticsearch-5.6.3.zip -d /usr/local/
mv /usr/local/elasticsearch-5.6.3 /usr/local/elasticsearch

To modify the ES configuration file:

vim /usr/local/elasticsearch/config/elasticsearch.yml
# The cluster name; change it to suit. With self-discovery enabled, ES discovers cluster members by this name
cluster.name: my-application
node.name: node-1
# These directories must be created manually
path.data: /opt/elk/data
path.logs: /opt/elk/logs
# ES listen address; any IP may connect
network.host: 0.0.0.0
http.port: 9200
# For a cluster, add the other hosts inside the brackets, separated by commas
discovery.zen.ping.unicast.hosts: ["192.168.3.205"]
# Enable CORS so that _site plugins (such as elasticsearch-head) can access ES
http.cors.enabled: true                 # added manually
http.cors.allow-origin: "*"             # added manually
# CentOS 6 does not support SecComp, and ES 5.2.0+ defaults bootstrap.system_call_filter to true,
# so the check fails and ES cannot start; disable it
bootstrap.memory_lock: false            # added manually
bootstrap.system_call_filter: false     # added manually

Note: ES consumes a large amount of resources at startup, so several system parameters must be modified; without these changes ES will exit abnormally when started.

    9. Modify system parameters:
vim /etc/sysctl.conf
# Add this parameter:
vm.max_map_count=655360

Reload configuration:

sysctl -p
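
To confirm the new value took effect, query it directly (a quick check that is not in the original):

sysctl vm.max_map_count
# expected output: vm.max_map_count = 655360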
    10. Modify resource limits:
vim /etc/security/limits.conf
# Modify:
*   soft    nofile  65536
*   hard    nofile  131072
*   soft    nproc   65536
*   hard    nproc   131072   # this value was truncated in the source; 131072 is a typical choice


    11. Set per-user resource parameters:
vim /etc/security/limits.d/20-nproc.conf
# Append:
elk     soft    nproc       65536
    12. Create a user and grant permissions:

      groupadd elk
      useradd elk -g elk
    13. Create the data and log directories and change their ownership:
mkdir -pv /opt/elk/{data,logs}
chown -R elk:elk /opt/elk
chown -R elk:elk /usr/local/elasticsearch
    14. Switch to the elk user and start ES in the background (the resource limits above were raised for the elk user; if you do not switch to the elk user first, ES will fail to start):
su elk
nohup /usr/local/elasticsearch/bin/elasticsearch >> /dev/null 2>&1 &
    15. Check the ES status:
      Method one, from the command line:
      curl 'http://[ES IP]:9200/_search?pretty'

Method two, from a web browser:

http://[ES IP]:9200/_search?pretty
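
A further quick check, not shown in the original, is the cluster health API; a "status" of "green" or "yellow" means the node is up ("yellow" is normal for a single-node setup, since replica shards cannot be assigned):

curl 'http://192.168.3.205:9200/_cluster/health?pretty'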

    16. Install Logstash:

Unpack it and create a symlink:

tar zxf /usr/local/src/logstash-5.3.1.tar.gz -C /usr/local/
ln -s /usr/local/logstash-5.3.1 /usr/local/logstash

To test whether Logstash is available:

/usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Create a main configuration file for testing:

vim /usr/local/logstash/config/logstash-simple.conf
# Contents:
input { stdin { } }
output {
    stdout { codec => rubydebug }
}

Use the -f option so that Logstash reads the configuration file for the test:

/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-simple.conf
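
Type a line at the prompt and Logstash echoes it back as a structured event. The exact fields depend on the Logstash version; the rubydebug output looks roughly like this (the timestamp and hostname are illustrative):

hello world
{
    "@timestamp" => 2018-01-01T00:00:00.000Z,
      "@version" => "1",
          "host" => "elk-server",
       "message" => "hello world"
}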

At this point Logstash itself is working correctly and is ready to collect logs.

    17. Create a configuration file that pulls the log data out of Redis:

The configuration file is as follows:

vim /usr/local/logstash/config/redis-spring.conf
# Contents:
input {
  redis {
    host => "192.168.3.205"
    port => "6379"
    data_type => "list"
    type => "log"
    key => "eureka-log"
  }
}
output {
  elasticsearch {
    hosts => "192.168.3.205:9200"
    index => "logstash1-%{+YYYY.MM.dd}"
  }
}

Start the service with this configuration file to see the effect:

/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/redis-spring.conf

(Screenshot of the startup output omitted.)

Now go back and check the key in Redis (it holds no data at this point, because Logstash has already consumed it); a sketch of the check is below.
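
A sketch of that check, using the key from the Filebeat configuration; once Logstash has drained the list, redis-cli reports it as empty:

redis-cli -h 192.168.3.205
192.168.3.205:6379> lrange eureka-log 0 -1
(empty list or set)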

    18. Use curl to check whether ES has received the data:
curl http://192.168.3.205:9200/_search?pretty

(Screenshot of the JSON response omitted.)

At this point Logstash is taking the data from Redis and pushing it into ES without any problems!
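
Besides _search, the _cat/indices API (not used in the original) gives a compact listing in which the new index should now appear; the date suffix below is illustrative:

curl 'http://192.168.3.205:9200/_cat/indices?v'
# health status index                ...
# yellow open   logstash1-2018.01.01 ...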

    19. Install the elasticsearch-head plugin for ES:

Note: installing head needs to pull resources from sites abroad, and slow downloads can make the installation fail (retry a few times if necessary). There are several ways to install it:

Method 1: use the node-v8.2.1.tar.gz and phantomjs-2.1.1-linux-x86_64.tar.bz2 packages.

Install Node.js:
tar zxvf node-v8.2.1.tar.gz
cd node-v8.2.1/
./configure && make && make install

Install PhantomJS:
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd phantomjs-2.1.1-linux-x86_64/bin/
cp phantomjs /usr/local/bin/

Import the es-head package and unpack it:
unzip master.zip -d /usr/local/
cd /usr/local/elasticsearch-head/
npm install
npm run start &

Check the port status (the default port is 9100):
netstat -anpt | grep 9100

Method 2: clone and build from GitHub.
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
netstat -anpt | grep 9100

Method 3: use Docker.
Pull the image:
docker pull mobz/elasticsearch-head:5
Start a container:
docker run -p 9100:9100 mobz/elasticsearch-head:5
Test via web access:
http://IP:9100
    20. After a successful elasticsearch-head installation, web access shows the cluster overview. (Screenshot omitted.)

Here you can view the data that was just pushed from Logstash into ES. (Screenshot omitted.)

    21. Install Kibana:

Unzip and install Kibana:

tar -zxvf /usr/local/src/kibana-5.3.1-linux-x86_64.tar.gz -C /usr/local/

To modify the Kibana configuration file:

vim /usr/local/kibana-5.3.1-linux-x86_64/config/kibana.yml
# Changes:
server.port: 5601                                   # default Kibana port
server.host: "192.168.3.205"                        # IP of the Kibana site
elasticsearch.url: "http://192.168.3.205:9200"      # points at the IP and port of the ES service
kibana.index: ".kibana"

Start Kibana in the background:

nohup /usr/local/kibana-5.3.1-linux-x86_64/bin/kibana >> /dev/null 2>&1 &

Check that the port is listening:

netstat -anot | grep 5601

A listener on port 5601 indicates that Kibana started successfully. (Original screenshot omitted.)
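
An illustrative line of that output (the exact columns depend on the netstat version):

tcp        0      0 192.168.3.205:5601      0.0.0.0:*               LISTEN      off (0.00/0/0)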

Access Kibana over the web:

http://[Kibana IP]:5601

On first access no index pattern has been created yet, so no data is visible. (Screenshot omitted.)

The index pattern is set according to the Logstash configuration file. First check the index name used in the Logstash output (logstash1-%{+YYYY.MM.dd} in redis-spring.conf above).

Then create the matching index pattern in Kibana, following steps 1 through 4 of the original screenshots (omitted here): enter logstash1-* as the pattern and create it.

Returning to Discover, the log data is now visible. (Screenshot omitted.)

Our ELK stack is now fully installed and working!
