The article "Small and medium-sized team quickly build SQL Automatic Audit system" We completed the automatic audit and implementation of SQL, not only improve the efficiency is also by the colleague's affirmation, the heart flattered. But the collection and processing of slow queries also cost us too much time and effort, how can we improve efficiency in this piece? and see how this article explains how to use elk to do slow log collection
ELK Introduction
ELK is the combination of Elasticsearch (search), Logstash (log collection and processing), and Kibana (visualization).
Tags: elk. This article describes how to collect MySQL slow query logs with Filebeat, parse and push them to Elasticsearch with Logstash, create a custom index, and finally display them through the Kibana web interface. Environment: operating system CentOS Linux release 7.3.1611 (Core) 64-bit; MySQL 5.6.28; Logstash 5.3.0; Elasticsearch 5.3.0; Kibana 5.3.0.
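As a sketch of the collection step just described (the log path, document type, and Logstash address are illustrative assumptions, not values from the article), a Filebeat configuration for shipping the MySQL slow log might look like this:

filebeat.prospectors:
  - input_type: log
    paths:
      - /var/lib/mysql/slow.log        # assumed slow-log location; match your my.cnf
    # a slow-query entry spans multiple lines; join lines that do not start a new "# Time:" header
    multiline.pattern: '^# Time:'
    multiline.negate: true
    multiline.match: after
    document_type: mysql-slowlog       # lets Logstash route and parse these events

output.logstash:
  hosts: ["logstash.example.com:5044"] # hypothetical Logstash endpoint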
The three cover log collection, indexing and search, and visual display, respectively.
- logstash
As the architecture diagram shows, Logstash sits at both the collect and index stages. At runtime it is fed a .conf file whose configuration is divided into three sections: input, filter, and output.
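A minimal .conf sketch of those three sections (the Beats port, grok pattern, and index name are placeholder assumptions, not taken from the article):

input {
  beats {
    port => 5044                                  # receive events from Filebeat
  }
}
filter {
  grok {
    # assumed pattern: pull the query time out of a MySQL slow-log header line
    match => { "message" => "^# Query_time: %{NUMBER:query_time:float}" }
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "mysql-slowlog-%{+YYYY.MM.dd}"       # the custom index mentioned above
  }
}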
- redis
Redis serves as a broker that decouples log collection from indexing.
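To sketch that decoupling (the host and list name are assumptions): the shipper instances push events into a Redis list, and a separate indexer instance pops from it, so a slow Elasticsearch never blocks collection.

# shipper: collect locally, push to Redis
output {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "logstash-queue"    # hypothetical list name
  }
}
# indexer: pop from Redis, write to Elasticsearch
input {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "logstash-queue"
  }
}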
- elasticsearch
The core component, used for searching. Main features: real-time, distributed, highly available, document-oriented.
1. Overview
In a distributed cluster, each node's logs tend to be stored on that node itself, which causes many problems. We need a unified log processing center to collect and centrally store logs, and to view and analyze them there. The Twelve-Factor App methodology also offers recommendations on log handling.
The corresponding tooling is now very mature; the usual choice is the Elasticsearch + Logstash + Kibana (ELK) technology stack.
Original address: http://blog.chinaunix.net/uid-11065483-id-3654882.html
Because the company needs to monitor QQ records on the wire, we originally used a light + Panabit + Splunk setup for recording. Panabit was quite comfortable to use, but once the daily Splunk log volume exceeded 500 MB, the free version of Splunk could no longer be used, which was very depressing...
...true/false judgments based on the captured host names.
Run the following bash command to obtain the 100 files prefixed with _rdns:
for file in *; do python rdnslookup.py $file; done
In each file we can then see the resolved PTR (pointer) records and the true/false judgments.
WHOIS query
Before performing a WHOIS query, we need the data obtained during the host lookup. In this section we want to capture the description field of the WHOIS record. After the WHOIS and reverse DNS queries, we are able to match IPs...
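The article does not show rdnslookup.py itself; a minimal reconstruction, assuming each input file holds one IP per line and results go to an _rdns-prefixed output file, might be:

#!/usr/bin/env python
# rdnslookup.py (hypothetical reconstruction): reverse-resolve each IP in a file,
# recording the PTR name and a True/False flag for whether resolution succeeded.
import socket
import sys

def reverse_lookup(ip):
    try:
        return socket.gethostbyaddr(ip)[0], True   # PTR record found
    except (socket.herror, socket.gaierror):
        return "", False                           # no reverse record

if __name__ == "__main__":
    infile = sys.argv[1]
    with open(infile) as src, open("_rdns_" + infile, "w") as dst:
        for line in src:
            ip = line.strip()
            if not ip:
                continue
            name, ok = reverse_lookup(ip)
            dst.write("%s\t%s\t%s\n" % (ip, name, ok))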
We rewrote Yii's CFileLog record format. Yii's default log record is a plain string, which is difficult to index and classify in log analysis systems such as Splunk: the date, level, category, and message are all mixed into one line, making the main message hard to analyze. Splunk is JSON-friendly and will parse a JSON document into separate fields, so we chose to output each log entry as JSON instead.
A category pattern may end with an asterisk, in which case a message category matches the pattern whenever it shares that prefix.
Message format
If you use log targets of the yii\log\FileTarget class, your messages are formatted like the following:
2014-10-04 18:10:15 [::1][][-][trace][yii\base\Module::getModule] Loading module: debug
By default, log messages are formatted by yii\log\Target::formatMessage() in the following layout:
Timestamp [IP address][User ID][Session ID][Severity Level][Category] Message Text
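The article's own override is not reproduced here; as a sketch of the idea for Yii 2 (the class name and JSON fields are assumptions), one can subclass yii\log\FileTarget and override formatMessage() so that each entry becomes one JSON line, which Splunk then indexes field by field:

<?php
namespace app\components;

use yii\log\FileTarget;
use yii\log\Logger;

// Hypothetical JSON log target: emits date/level/category/message as
// separate JSON fields instead of one mixed string per line.
class JsonFileTarget extends FileTarget
{
    public function formatMessage($message)
    {
        list($text, $level, $category, $timestamp) = $message;
        return json_encode([
            'date'     => date('Y-m-d H:i:s', (int) $timestamp),
            'level'    => Logger::getLevelName($level),
            'category' => $category,
            'message'  => is_string($text) ? $text : var_export($text, true),
        ]);
    }
}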
Building a Docker environment for the distributed log platform from scratch
In the previous article (Spring MVC + ELK: building a log platform from scratch), we shared how to build a distributed log platform based on Spring MVC + Redis + Logback + Logstash + Elasticsearch + Kibana, operated on the Windows platform. This article moves all of these software environments to Linux + Docker.
Our goal is to run the whole stack as Docker containers on Linux.
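As a sketch of that goal (image tags, paths, and ports are assumptions based on the official Docker Hub images, not values from the article), the three services could be started like so:

# Elasticsearch, exposing the REST port
docker run -d --name elasticsearch -p 9200:9200 elasticsearch:2.4

# Logstash, with a pipeline config mounted from the host (hypothetical path)
docker run -d --name logstash --link elasticsearch \
    -v /opt/logstash/logstash.conf:/config/logstash.conf \
    logstash:2.4 logstash -f /config/logstash.conf

# Kibana, linked to the Elasticsearch container it queries by default
docker run -d --name kibana --link elasticsearch:elasticsearch -p 5601:5601 kibana:4.6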
ELK is the combination of Elasticsearch, Logstash, and Kibana. Below is a quick guide to installing them on a CentOS 6.x system; a follow-up will cover how to use them. The installation uses the Yum method recommended on the official website.
1. Elasticsearch
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticse...
Configuration
vim _site/app.js
# replace localhost with the host's IP address:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.2.151.203:9200";
7) Start grunt
grunt server
# If it starts successfully you can run it in the background so the command line stays usable (to quit, you have to kill the process yourself):
nohup grunt server &
# If startup reports a missing module:
> Local Npm module "grunt-contrib-jasmine" not found. Is it installed?
npm install grunt-contrib-jasmine # install the missing module
1. Without a log analysis system
1.1 Operations pain points
1. Operations staff are constantly digging through assorted logs.
2. By the time the logs are checked, the fault has already happened (a timing problem).
3. There are many nodes and the logs are scattered, so even collecting them is a problem.
4. Run logs, error logs, and other logs have no standard directory layout, which makes collection difficult.
1.2 Environment pain points
1. Developers cannot log on to the production servers to view detailed logs.
2. Every system has its own logs, and the scattered log data is difficult to find...
Because the company's Elasticsearch cluster uses only two servers, losing the data on either one would cost Elasticsearch half of its data, so backup and recovery matter a great deal. Elasticsearch's snapshot and restore module can snapshot a single index, or the whole cluster, to a remote repository for backup and recovery. The following walks through a backup and restore, using the Kibana index as an example.
Data backup and recovery
1. Modify
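A sketch of the snapshot-and-restore flow via the REST API (the repository name and location are assumptions; the location must be whitelisted via path.repo in elasticsearch.yml):

# register a shared-filesystem repository (hypothetical path /data/es_backup)
curl -XPUT 'http://127.0.0.1:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/data/es_backup" }
}'

# snapshot only the .kibana index
curl -XPUT 'http://127.0.0.1:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true' -d '{
  "indices": ".kibana"
}'

# restore it later (close or delete the live index first)
curl -XPOST 'http://127.0.0.1:9200/_snapshot/my_backup/snapshot_1/_restore'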
ELK real-time log platform web user manual
Recently the company launched a new product line. By deploying Elasticsearch + Logstash + Kibana, logs can be viewed in real time and query access opened to development staff, which frees operations from tedious log-query work. The biggest highlight of the ELK platform is that you can use keywords to locate the problematic physical server and time window, which is quite practical in a cluster...
Step by step
1. Download the software
Elasticsearch: https://download.elasticsearch.org/...p/elasticsearch/2.0.0/elasticsearch-2.0.0.zip
Logstash: https://download.elastic.co/logstash/logstash/logstash-2.0.0.zip
Kibana: https://download.elastic.co/kibana/kibana/kibana-4.2.0-windows.zip
2. Unzip each download separately: Elasticsearch, Logstash, ...
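After unzipping, each service starts from its bin directory; a sketch for the Windows zips listed above (logstash.conf here is a pipeline file you write yourself):

cd elasticsearch-2.0.0
bin\elasticsearch.bat

cd logstash-2.0.0
bin\logstash.bat -f logstash.conf

cd kibana-4.2.0-windows
bin\kibana.bat        # then browse to http://localhost:5601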
If Docker needs to pull images from a non-SSL source, you must set the insecure-registry parameter in the Docker configuration file, usually found at one of the following locations:
/etc/sysconfig/docker
/etc/init.d/docker
By default the INSECURE_REGISTRY parameter is commented out. It can be pointed at a non-SSL Docker Registry as needed, for example:
INSECURE_REGISTRY='--insecure-registry 10.XX.XX.xx:5000'
If you need to explicitly label multiple...
the new record. The other approach is to manually raise timestamp precision, up to microseconds; in theory this allows 86,400,000,000 non-repeating log entries per day, which largely avoids timestamp collisions. The configuration is as follows:
Format the business log's output timestamp to microsecond precision (a Go time layout): 2006-01-02T15:04:05.999999Z07:00
Logstash then converts the timestamp in a filter:
filter {
  ruby {
    code => "event.set('time', (Time.parse(event.get('time')).to_f * 1000000).to_i)"
  }
}
6. Data display
Grafana