Unified Log Retrieval Deployment (ES, Logstash, Kafka, Flume)


Flume: collects logs and ships them to Kafka

Kafka: acts as a buffer, storing the logs coming from Flume

ES: serves as the storage backend for the logs

Logstash: performs the actual filtering and parsing of the logs
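
Putting the components together, a log line travels through the pipeline like this:

/root/test.log --(Flume exec source, tail -f)--> Kafka topic "kafkatest" --(Logstash kafka input + filters)--> Elasticsearch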

Flume deployment

Download the installation package and extract it

wget http://10.80.7.177/install_package/apache-flume-1.7.0-bin.tar.gz && tar zxf apache-flume-1.7.0-bin.tar.gz -C /usr/local/

Modify the flume-env.sh script to set the startup parameters

cd /usr/local/apache-flume-1.7.0-bin
vim conf/flume-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_121
export JAVA_OPTS="-Xms1000m -Xmx2000m -Dcom.sun.management.jmxremote"   # set the JVM memory size used at startup
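
As a quick sanity check that the environment is picked up, you can print the Flume version (path assumes the extraction location used above):

/usr/local/apache-flume-1.7.0-bin/bin/flume-ng version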

Edit the configuration file

vim conf/flume_kfk.conf    (Note: the configuration file name is arbitrary)

agent.sources = s1
agent.channels = c1
agent.sinks = k1

agent.sources.s1.type = exec
# Absolute path of the log file to collect
agent.sources.s1.command = tail -f /root/test.log
agent.sources.s1.channels = c1

agent.channels.c1.type = memory
agent.channels.c1.capacity = 10000
agent.channels.c1.transactionCapacity = 100

# Configure the Kafka sink
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
# Kafka broker addresses and ports
agent.sinks.k1.brokerList = 10.90.11.19:19092,10.90.11.32:19092,10.90.11.45:19092,10.90.11.47:19092,10.90.11.48:19092
# Kafka topic to write to
agent.sinks.k1.topic = kafkatest
# Serialization class
agent.sinks.k1.serializer.class = kafka.serializer.StringEncoder
agent.sinks.k1.channel = c1

Create a Kafka topic

cd /data1/kafka/kafka_2.11-0.10.1.0/ && ./bin/kafka-topics.sh --create --zookeeper 10.90.11.19:12181 --replication-factor 3 --partitions 3 --topic kafkatest
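
To confirm the topic exists and see its partition/replica assignment, describe it against the same ZooKeeper address:

./bin/kafka-topics.sh --describe --zookeeper 10.90.11.19:12181 --topic kafkatest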

Start Flume

/usr/local/apache-flume-1.7.0-bin/bin/flume-ng agent -n agent -c conf -f /usr/local/apache-flume-1.7.0-bin/conf/flume_kfk.conf -Dflume.monitoring.type=http -Dflume.monitoring.port=9876 -Dflume.root.logger=ERROR,console -Dorg.apache.flume.log.printconfig=true
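
With HTTP monitoring enabled on port 9876 as above, the agent serves its channel and sink counters as JSON, which is handy for checking that events are flowing:

curl http://localhost:9876/metrics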

Test

Test: append a line to /root/test.log, then log in to Kafka Manager and check whether the kafkatest topic is receiving messages. If it is, the pipeline is working; if not, troubleshoot the steps above.
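
A minimal end-to-end check from the shell, assuming the Kafka scripts are available on a broker host (the console consumer ships with Kafka):

echo "flume test $(date)" >> /root/test.log
# on a broker host, watch the topic directly:
./bin/kafka-console-consumer.sh --bootstrap-server 10.90.11.19:19092 --topic kafkatest --from-beginning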
Deploying Supervisor to Monitor Flume

Supervisor deployment is not covered again here; see: https://www.cnblogs.com/sailq21/p/9227592.html

Edit /etc/supervisord.conf

[unix_http_server]
file=/data/ifengsite/flume/supervisor.sock   ; the path to the socket file

[inet_http_server]                 ; inet (TCP) server disabled by default
port=9001                          ; ip_address:port specifier, *:port for all ifaces

[supervisord]
logfile=/data/logs/supervisord.log ; main log file; default $CWD/supervisord.log
logfile_maxbytes=50MB              ; max main logfile bytes before rotation; default 50MB
logfile_backups=10                 ; # of main logfile backups; 0 means none, default 10
loglevel=info                      ; log level; default info; others: debug,warn,trace
pidfile=/tmp/supervisord.pid       ; supervisord pidfile; default supervisord.pid
nodaemon=false                     ; start in foreground if true; default false
minfds=1024                        ; min. avail startup file descriptors; default 1024
minprocs=200                       ; min. avail process descriptors; default 200

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///data/ifengsite/flume/supervisor.sock   ; use a unix:// URL for a unix socket

[include]
files = /etc/supervisord.d/*.conf

Edit the Flume program file for Supervisor

[program:flume-push]
directory = /usr/local/apache-flume-1.7.0-bin/
command = /usr/local/apache-flume-1.7.0-bin/bin/flume-ng agent -n agent -c conf -f /usr/local/apache-flume-1.7.0-bin/conf/push.conf -Dflume.monitoring.type=http -Dflume.monitoring.port=9876 -Dflume.root.logger=ERROR,console -Dorg.apache.flume.log.printconfig=true
autostart = true
startsecs = 5
autorestart = true
startretries = 3
user = root
redirect_stderr = true
stdout_logfile_maxbytes = 20MB
stdout_logfile_backups = 10
stdout_logfile = /data/ifengsite/flume/logs/flume-supervisor.log

Create the log directory and start Supervisor

mkdir -p /data/ifengsite/flume/logs/
supervisord -c /etc/supervisord.conf
# to restart Supervisor after config changes: supervisorctl reload
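
Once supervisord is running, the Flume process can also be checked and restarted from the command line, using the program name defined above:

supervisorctl status flume-push
supervisorctl restart flume-push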

Test: browse to http://<server-ip>:9001 to view the Supervisor web UI

Once the Flume-to-Kafka path works, move on to configuring Logstash

Edit the Logstash pipeline config flume_kfk.conf

vim /etc/logstash/conf.d/flume_kfk.conf

input {
    kafka {
        bootstrap_servers => "10.90.11.19:19092,10.90.11.32:19092,10.90.11.45:19092,10.90.11.47:19092,10.90.11.48:19092"
        client_id => "test"
        group_id => "test"
        consumer_threads => 5
        decorate_events => true
        topics => ["kafkatest"]
        type => "testqh"
    }
}

filter {
    mutate {
        gsub => ["message", "\\x", "\\\x"]
    }
    json {
        source => "message"
        remove_field => ["message", "beat", "tags", "source", "kafka"]
    }
    date {
        match => ["timestamp", "ISO8601"]
        timezone => "Asia/Shanghai"
        target => "@timestamp"
    }
}

# Output to stdout for debugging; when output to ES is needed, configure an elasticsearch output instead.
output {
    stdout {
        codec => rubydebug
    }
}
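
When the rubydebug output looks correct, swap the stdout output for an elasticsearch output. A minimal sketch, assuming a hypothetical ES node at 10.90.11.19:9200 and a daily index name (adjust both to your cluster):

output {
    elasticsearch {
        # hypothetical ES address and index pattern; replace with your own
        hosts => ["10.90.11.19:9200"]
        index => "kafkatest-%{+YYYY.MM.dd}"
    }
}

The file can be validated before starting Logstash with: bin/logstash -f /etc/logstash/conf.d/flume_kfk.conf --config.test_and_exit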

