Maintaining the KLE log collection system with Fabric

Tags: kibana, logstash, elasticsearch

Recently I worked on the integrated deployment of a Logstash + Kafka + Elasticsearch + Kibana log collection system. The deployment followed the reference "Logstash + Elasticsearch + Kibana 3 + Kafka Log Management System Deployment 02".

Some points in the rollout process are still worth attention, such as:

1. Application operations staff and developers should agree on the definition of the log format;

2. How the shipper-side Logstash collects logs and how the consumer-side Logstash consumes and filters them must be efficient, so that the service itself does not put too much pressure on the system; if you process billions of log lines a day and neglect performance, the consequences are hard to imagine;

3. There are things to watch out for when configuring, monitoring, and restarting the Kafka and ES clusters;

4. KLE is currently strongest at real-time display; for historical data, the old indices in ES need to be cleaned up periodically, and the indices need optimizing to improve Kibana's retrieval speed (see the cleanup sketch after this list).
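For point 4, the sketch below shows one way to script the index cleanup, assuming the default logstash-YYYY.MM.DD index naming and an ES HTTP endpoint on localhost:9200 (both are my assumptions, not from the original post; Elasticsearch Curator is the purpose-built tool for this job):

#!/usr/bin/env python
# coding:utf8
# Hypothetical sketch: delete ES indices that have aged past the retention
# window. Assumes the default logstash-YYYY.MM.DD naming and that ES is
# reachable on localhost:9200; run it daily from cron.
import datetime
import urllib2

ES_URL = 'http://localhost:9200'   # assumption: adjust to your cluster
KEEP_DAYS = 30                     # retention window, in days

def clean_old_indices():
    # sweep a week of days just past the cutoff, in case a daily run was missed
    for age in range(KEEP_DAYS, KEEP_DAYS + 7):
        day = datetime.date.today() - datetime.timedelta(days=age)
        index = 'logstash-%s' % day.strftime('%Y.%m.%d')
        req = urllib2.Request('%s/%s' % (ES_URL, index))
        req.get_method = lambda: 'DELETE'   # urllib2 has no native DELETE
        try:
            print 'deleted %s: %s' % (index, urllib2.urlopen(req).read())
        except urllib2.HTTPError, e:
            print 'skipped %s (HTTP %s)' % (index, e.code)

if __name__ == '__main__':
    clean_old_indices()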

During maintenance I also ran into some chores, such as batch-deploying the shipper-side Logstash, pushing configuration files, scripting configuration updates, and patrolling whether the service is healthy, so I used Fabric to put together a simple management script, which turned out to be quite fun. The source code has been uploaded here; the following mainly describes the main functions and source, so that when I look at it in a few months it will still remind me how it works.

The project directory is as follows:

├── bin
│   ├── __init__.py
│   ├── logstash_dev.py
│   ├── test.txt
│   └── update_config.sh
├── branches
├── conf
│   ├── config.conf
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── setting.py
│   └── setting.pyc
├── config
├── description
├── file
│   ├── config
│   │   ├── appapi
│   │   │   ├── logstash_shipper_production.conf
│   │   │   └── logstash_shipper_production.conf.bak
│   │   ├── consume
│   │   │   ├── consume_fiter.txt
│   │   │   ├── consume_input.sh
│   │   │   ├── consume_input.txt
│   │   │   ├── consume_output.txt
│   │   │   ├── get_typelist.sh
│   │   │   ├── get_typelist.txt
│   │   │   ├── logstash_indexer_consume.conf
│   │   │   └── logstash_indexer_consume.conf.template
│   │   ├── g1web
│   │   │   ├── logstash_shipper_production.conf
│   │   │   └── logstash_shipper_production.conf.template
│   │   ├── houtai
│   │   │   └── logstash_shipper_production.conf
│   │   └── wapapi
│   │       └── logstash_shipper_production.conf
│   └── logstash
│       ├── install_logstash.sh
│       └── logstashd

Description: the bin directory holds the executable scripts; the conf directory holds the business server information (config.conf) and the settings module the scripts load (setting.py); the file directory holds the configuration files and install packages that need to be pushed to the target hosts.
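For illustration, config.conf keeps the host groups in an INI [hostgroup] section, which setting.py (shown below) parses into env.roledefs; a minimal sketch with invented addresses:

[hostgroup]
g1web = 10.10.1.1,10.10.1.2
appapi = 10.10.1.3,10.10.1.4
houtai = 10.10.1.5
wapapi = 10.10.1.6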

The application logic of logstash_dev is as follows:

1. logstash_dev description

./update_config.sh test.txt   # generate and update the business-group and host entries in config.conf that env.roledefs needs

fab -f logstash_dev.py --shortlist   # list the currently defined tasks, for example:

[email protected]:/var/www/fabric_project/logstash_dev/bin# fab -f logstash_dev.py --shortlist
/var/www/fabric_project/logstash_dev
G1WEB
JDK_DEP
check_load
check_localhost
jdk_check
logstash_check
logstash_production
logstash_production_config_update
logstash_service
logstashd_update

To run a task:

fab -f logstash_dev.py G1WEB

As long as you know a little Python and understand how Fabric works, logstash_dev.py is easy to follow, because it mainly just calls shell commands.
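The post does not show logstash_dev.py itself, so here is a minimal hypothetical sketch of what one of its Fabric 1.x tasks might look like (the task name logstash_check appears in the shortlist above; the command body is my assumption):

#!/usr/bin/env python
# coding:utf8
# Hypothetical sketch of a Fabric 1.x task in the style of logstash_dev.py:
# the task body just runs a shell command on every host in a role.
from fabric.api import env, roles, run

# normally loaded from conf/config.conf and conf/.ippwd.txt via setting.py
env.roledefs = {'g1web': ['10.10.1.1', '10.10.1.2']}

@roles('g1web')
def logstash_check():
    # report whether the shipper-side Logstash process is alive;
    # [l]ogstash keeps grep from matching its own process line
    run("ps -ef | grep [l]ogstash || echo 'logstash is NOT running'")

Run it with fab -f logstash_dev.py logstash_check, and Fabric executes the command on every host in the g1web group.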

2. setting.py: what is in this file is what I forget most easily.
#!/usr/bin/env python
# coding:utf8
# author: [email protected]
from fabric.colors import *
from fabric.api import *
import re
import sys, os
import ConfigParser
# sys.path.append('/var/www/python_program/fabric_project/ops_manager')

nowdir = os.getcwd()
BASEDIR = '/'.join(nowdir.split('/')[:-1])
configfile = "%s/conf/config.conf" % BASEDIR
pwdfile = "%s/conf/.ippwd.txt" % BASEDIR
print BASEDIR

env.user = 'root'
env.roledefs = {}
env.passwords = {}

# Load config.conf and parse the [hostgroup] section into env.roledefs
def handle_conf_role():
    conf = ConfigParser.ConfigParser()
    conf.read(configfile)
    g = conf.items('hostgroup')
    for gh in g:
        env.roledefs[gh[0]] = []
        for h in gh[1].split(','):
            env.roledefs[gh[0]].append(h)
    return env.roledefs

# Parse the host password file into the env.passwords dictionary
def handle_host_pass():
    with open(pwdfile, 'r') as f:
        for line in f.readlines():
            for g, hlist in handle_conf_role().items():
                for h in hlist:
                    # match a whole "host password" line for this host
                    b = re.match(r'^%s\s+.*' % h, line, re.M | re.I)
                    if b:
                        c = b.group()
                        k = c.split()[0]
                        v = c.split()[1]
                        nc = '%s@%s:22' % (env.user, k)
                        env.passwords[nc] = v
    return env.passwords

if __name__ == '__main__':
    print handle_conf_role()
    print handle_host_pass()
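From the parsing above, .ippwd.txt is expected to contain one whitespace-separated "host password" pair per line, for example (values invented):

10.10.1.1   secret-one
10.10.1.2   secret-two

handle_host_pass() turns each matching line into an env.passwords entry keyed as root@host:22, which lets Fabric log in to every host without prompting.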
3. Follow-up idea: use the same approach to manage the configuration files of Logstash, Kafka, ZooKeeper, ES, and other applications.
