Operations and maintenance summary: Haproxy---rsyslog---Kafka---collector---ES---Kibana

Source: Internet
Author: User
Tags: kibana, logstash, haproxy, rsyslog

This is the entire process I went through to analyze HAProxy logs at my company.


Up to now we had only been maintaining the ES cluster configuration and had never built the whole pipeline, including the collection-end code, ourselves, so this time I did it all on my own. For online log collection we generally use Logstash, but many people in the industry say Logstash is not great in terms of either performance or stability; its advantage is simple configuration. This time I chose rsyslog instead.

Today I'll walk everyone through the whole process for the HAProxy log, even if only to give a general understanding.

The specific process is as follows

Haproxy ---- local2 facility ---- rsyslog ---- Kafka ---- collector (consumes Kafka, writes to ES) ---- ES ---- Kibana display

1. By default, the HAProxy log has to be handled in combination with rsyslog.

The rsyslog configuration looks like this:

local2.* /data1/logs/haproxy/haproxy.log
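For the log to reach rsyslog at all, HAProxy itself must also be told to log to the local2 facility. A minimal sketch of the relevant haproxy.cfg lines (the syslog address here is an assumption; adjust it to your setup):

global
    log 127.0.0.1 local2

defaults
    log     global
    option  httplog

If HAProxy sends over UDP like this, rsyslog also needs its UDP input enabled, e.g. module(load="imudp") and input(type="imudp" port="514").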

If we didn't use rsyslog and went with Logstash instead, we would have this situation: we would still have to write the path /data1/logs/haproxy/haproxy.log into the Logstash input and tail the file, which I think has a big impact on performance.

But with rsyslog we can split the log directly off the local2 facility. I'm only using this as an example; the difference from Logstash is that rsyslog needs a plugin to reach Kafka. Connecting rsyslog to Kafka requires a v8 build with the omkafka module, so you can't just yum install rsyslog; you have to compile it and enable the Kafka module at build time.


Before we start, we need to enable rsyslog to send data to Kafka. The steps to make rsyslog support pushing to Kafka are as follows:

/opt/test.sh

## Install rsyslog with omkafka.
## omkafka enables rsyslog to push logs to Kafka, a distributed message system.
## See http://www.rsyslog.com/doc/master/configuration/modules/omkafka.html
## This installation uses yum to manage packages.

## Add rsyslog repo
work_dir=$(pwd)
cd /etc/yum.repos.d
wget http://rpms.adiscon.com/v8-stable/rsyslog.repo -O rsyslog.repo
cd "$work_dir"

mkdir rsyslog-install
cd rsyslog-install

# Check the currently installed rsyslog version
# rsyslog supports Kafka from v8.7.0
old_rsyslog_ver=$(rsyslogd -v | head -n 1 | awk '{print $2}')

## Install rsyslog dependency: libestr
yum install -y libestr-devel

## Install rsyslog dependency: libee
yum install -y libee-devel

## Install rsyslog dependency: json-c
yum install -y json-c-devel

## Install rsyslog dependency: uuid
yum install -y libuuid-devel

## Install rsyslog dependency: liblogging-stdlog
yum install -y liblogging-devel

## Install rsyslog dependency: rst2man
yum install -y python-docutils

## Install librdkafka for omkafka
wget https://github.com/edenhill/librdkafka/archive/0.8.5.tar.gz -O librdkafka-0.8.5.tar.gz
tar zxvf librdkafka-0.8.5.tar.gz
cd librdkafka-0.8.5
./configure
make
make install
cd ..

## Install rsyslog
wget http://www.rsyslog.com/files/download/rsyslog/rsyslog-8.8.0.tar.gz -O rsyslog-8.8.0.tar.gz
tar zxvf rsyslog-8.8.0.tar.gz
export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/lib64/pkgconfig/
old_executable_path=$(which rsyslogd)
executable_dir=$(dirname "$old_executable_path")
cd rsyslog-8.8.0
./configure --prefix=/usr/local/rsyslog --sbindir=$executable_dir --libdir=/usr/lib64 --enable-omkafka
make
make install

## Show installation result:
new_rsyslog_ver=$(rsyslogd -v | head -n 1 | awk '{print $2}')
echo "Old rsyslogd version: $old_rsyslog_ver"
echo "New rsyslogd version: $new_rsyslog_ver"
echo "Executable: $(which rsyslogd)"

## References:
## http://www.rsyslog.com/doc/master/installation/install_from_source.html
## http://bigbo.github.io/pages/2015/01/21/syslog_kafka/
## http://blog.oldzee.com/?tag=rsyslog
## http://www.rsyslog.com/newbie-guide-to-rsyslog/
## http://www.rsyslog.com/doc/master/configuration/modules/omkafka.html

2. cp ./rsyslog-install/librdkafka-0.8.5/src/librdkafka.so.1 /lib64/

chmod 755 /lib64/librdkafka.so.1

3. cp ./rsyslog-8.8.0/plugins/omkafka/.libs/omkafka.so /lib64/rsyslog/

chmod 755 /lib64/rsyslog/omkafka.so

4. rsyslogd -N1 tests whether the rsyslog configuration file is correct.

5. /lib64/rsyslog/ is where the modules that rsyslog can load live.
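To confirm the Kafka output module is in place, for example, ls /lib64/rsyslog/ | grep omkafka should list omkafka.so.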

OK, now rsyslog is able to push to Kafka.

We define in advance the fields the data will have in the ES index, and then process the logs according to those fields.
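As a purely illustrative sketch (the actual field list is not reproduced here, so these field names, the index and type names, and the ES 1.x-era mapping syntax are my assumptions), predefining the fields could look like this:

# hypothetical mapping; adjust fields to your own log format
curl -XPUT 'http://localhost:9200/eagleye_log_haproxy' -d '{
  "mappings": {
    "log": {
      "properties": {
        "timestamp":        { "type": "date" },
        "client_ip":        { "type": "string", "index": "not_analyzed" },
        "backend":          { "type": "string", "index": "not_analyzed" },
        "status_code":      { "type": "integer" },
        "response_time_ms": { "type": "integer" }
      }
    }
  }
}'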

Below is the rsyslog processing for the HAProxy log. Of course, the method I post here is not the best one: rsyslog also has its own plugin similar to Logstash's grok filter, and using the mmnormalize plugin is more efficient.

[Screenshot: rsyslog configuration that splits the HAProxy log into fields]
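Since the screenshot doesn't reproduce here, below is a minimal sketch of what such a configuration can look like in rsyslog v8 syntax; the fields are my illustrative assumptions rather than the original config. Note that the omkafka action shown later refers to a template named json_lines:

module(load="omkafka")

# hypothetical template: emit every message as one JSON line
template(name="json_lines" type="list") {
  constant(value="{\"timestamp\":\"")
  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"host\":\"")
  property(name="hostname")
  constant(value="\",\"message\":\"")
  property(name="msg" format="json")
  constant(value="\"}")
}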

We've now split the log correctly into the fields we want to send to ES.

Next, we push the data into Kafka.

local2.* action(type="omkafka"
    broker="[172.16.10.130:9092,172.16.10.131:9092,172.16.10.132:9092,172.16.10.139:9092,172.16.10.140:9092]"
    topic="eagleye_log_haproxy_channel"
    partitions.number="…"
    confParam=["compression.codec=snappy", "socket.keepalive.enable=true"]
    queue.saveOnShutdown="on"
    queue.size="10000000"
    queue.type="LinkedList"
    queue.highWatermark="600000"
    queue.lowWatermark="20000"
    queue.discardMark="800000"
    queue.maxFileSize="1g"
    queue.maxDiskSpace="10g"
    action.resumeInterval="10"
    action.resumeRetryCount="-1"
    action.reportSuspension="on"
    action.reportSuspensionContinuation="on"
    template="json_lines")

The specific parameters can be viewed on the official website.

Let's check whether data is arriving on Kafka's topic eagleye_log_haproxy_channel.
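For example, with the console consumer that ships with Kafka (the ZooKeeper address is an assumption for this 0.8-era cluster):

bin/kafka-console-consumer.sh --zookeeper 172.16.10.130:2181 --topic eagleye_log_haproxy_channel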

[Screenshot: messages arriving on the eagleye_log_haproxy_channel topic]

You can see that the data has been written into Kafka.

Now we're going to write the code for the collection end.

The collection end is really just a Kafka consumer that inserts into ES. I'm not going to post the code itself.

The approximate logic: start 2 separate threads; one thread is responsible for consuming Kafka, and the other calls the ES API (the _bulk method of the RESTful interface) to bulk-insert the JSON data.
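Since the original code isn't posted, here is only a minimal Python sketch of that logic under my own assumptions: the broker list, topic, index name, batch size, and the kafka-python / elasticsearch client libraries are all illustrative choices, not the original implementation.

import json
import threading
from queue import Queue

from kafka import KafkaConsumer                    # pip install kafka-python
from elasticsearch import Elasticsearch, helpers   # pip install elasticsearch

BROKERS = ["172.16.10.130:9092", "172.16.10.131:9092"]   # assumed broker subset
TOPIC = "eagleye_log_haproxy_channel"
ES_HOSTS = ["http://localhost:9200"]                     # assumed ES address
INDEX = "eagleye_log_haproxy"                            # hypothetical index name
BATCH_SIZE = 500                                         # assumed batch size

buf = Queue(maxsize=100000)

def consume():
    # Thread 1: consume JSON lines from Kafka and hand them to the writer thread.
    consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKERS)
    for msg in consumer:
        buf.put(msg.value)

def bulk_write():
    # Thread 2: batch documents and insert them via the _bulk API.
    es = Elasticsearch(ES_HOSTS)
    batch = []
    while True:
        batch.append({"_index": INDEX, "_source": json.loads(buf.get())})
        if len(batch) >= BATCH_SIZE:
            helpers.bulk(es, batch)   # wraps the RESTful _bulk endpoint
            batch = []

if __name__ == "__main__":
    threading.Thread(target=consume, daemon=True).start()
    bulk_write()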

Let's see if ES already has data.
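A quick way to check outside of Kibana (assuming ES on localhost:9200), for example:

curl 'http://localhost:9200/_cat/indices?v'

which lists each index along with its document count.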

[Screenshot: documents indexed into ES]

OK now there's data in the ES cluster.

Now let's go to Kibana and add the chart.

I won't explain how to add them here; let me just show a few pictures.

I'm not showing the full Kibana screens because they contain fairly sensitive information.



From the charts you can see the total number of requests, the average response time, which IPs connect to HAProxy, which IPs HAProxy calls, and which requests are abnormal. There are some more requirements I will add to Kibana in the future.

So that's the whole process from end to end. At the moment it's mostly others who hand us requirements and we just go configure things, but we can also set requirements for ourselves and analyze the log information we think is useful. I offer this HAProxy log analysis as an example for everyone's reference. Thank you.


This article is from the "Expect volume synchronization data" blog. Reprinting declined!

