that you need to devote a lot of effort to the configuration to achieve a good presentation.
1 Basic Introduction
2 Installation Process
2.1 Preparation
2.2 Installing Java
2.3 Elasticsearch
2.4 Kibana
Basic Introduction
Elasticsearch is currently at version 1.7.1, Logstash at 1.5.3, and Kibana at 4.1.1. Logstash forward
Type settings: the Redis input plugin in Logstash offers three ways to read messages from Redis:
list => BLPOP (equivalent to a queue)
channel => SUBSCRIBE (publish/subscribe on a specific channel)
pattern_channel => PSUBSCRIBE (publish/subscribe on a group of channels matching a pattern)
In other words, list behaves like a queue, channel subscribes to one named channel, and pattern_channel subscribes to every channel matching a pattern, as the sketch below shows.
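A minimal sketch of the list mode, assuming a local Redis instance (the key name is hypothetical):

    input {
      redis {
        host      => "127.0.0.1"       # assumed local Redis
        port      => 6379
        data_type => "list"            # queue semantics: consumed via BLPOP
        key       => "logstash-queue"  # hypothetical list key
      }
    }

Switching data_type to "channel" or "pattern_channel" selects the SUBSCRIBE or PSUBSCRIBE behavior instead.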
A while ago I wrote a post, "log4net.NoSQL + Elasticsearch implements logging". For project reasons the logs needed to be integrated with the Java platform, where colleagues use a Logstash + Kibana + Elasticsearch + Redis architecture for log statistics and analysis, so a component that outputs log4net logs to Redis was required. I did not find a ready-made one, so I wrote it myself, referencing the log4net.NoSQL code. The Redis C# client used is ServiceStack.
    input {
      stdin {}
      jdbc {
        # MySQL JDBC connection string to our backup database
        jdbc_connection_string => "jdbc:mysql://localhost:3306/userdb?useUnicode=true&characterEncoding=utf-8&useSSL=false"
        # The user we wish to execute our statement as
        jdbc_user => "user"
        jdbc_password => "pass"
        # The path to our downloaded JDBC driver
        jdbc_driver_library => "mysql-connector-java-5.1.40-bin.jar"
        # The name of the driver class for MySQL
        jdbc_driver_class => "com.mysql.jdbc.Driver"
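The snippet breaks off before the SQL itself; a jdbc input also needs a statement and, typically, a schedule. A hedged sketch of the missing tail (query and schedule are hypothetical):

    input {
      jdbc {
        # ... connection settings as above ...
        statement => "SELECT * FROM users"  # hypothetical query against userdb
        schedule  => "* * * * *"            # cron-style: poll every minute
      }
    }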
When ELK is deployed, an error is reported when Logstash is started:

    Sending logstash logs to /var/log/logstash.log.
    Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
        at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/T
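This exception means the elasticsearch output could not discover the cluster master within 30 seconds; with the 1.x node/transport protocols this is usually a cluster-name mismatch or blocked multicast discovery. One common workaround (my suggestion, not from the source; the address is hypothetical) is to bypass discovery with the http protocol:

    output {
      elasticsearch {
        host     => "10.0.0.5"  # hypothetical ES node
        protocol => "http"      # talk to ES over HTTP instead of joining the cluster
      }
    }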
grok-patterns contains log-parsing rules built from regular expressions, with many underlying variables, including Apache log parsing (which can also be used for nginx log parsing). An nginx-based log analysis configuration: 1. Configure the nginx log format as follows:

    log_format main '$remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$request_time"';
    access_log /var/log/nginx/access.log main;

This screens the nginx log and removes unused fields. At this time, for the
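A grok filter matching the log_format above could look like the following sketch (the field names are illustrative, and the pattern assumes the exact quoting shown in the format string):

    filter {
      grok {
        match => { "message" => "%{IPORHOST:remote_addr} \[%{HTTPDATE:time_local}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:status} %{NUMBER:body_bytes_sent} \"%{DATA:http_referer}\" \"%{NUMBER:request_time}\"" }
      }
    }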
Recently I have been using Logstash for log collection and filtering in a project, and Logstash is still very powerful. The configuration below stitches multi-line entries together: with negate => true and what => "previous", any line that does not match the pattern is appended to the previous event.

    input {
      file {
        path => "/xxx/syslog.txt"
        start_position => "beginning"
        codec => multiline {
          patterns_dir => ["/xx/logstash-1.5.3/patterns"]
          pattern => "^%{message}"
          negate => true
          what => "previous"
        }
      }
    }
    filter {
      mutate {
        split => ["message", "|"]
        add_field => { "tmp" =>
Sometimes we need to analyze server logs and raise alarms on error logs. Here we use Logstash to collect these logs, and send the error-log data using our own in-house mail delivery system. For example, we have several files that need to be monitored (BI logs); we can collect these file logs by configuring Logstash:

    input {
      file {
        path => "/diskb/bidir/smartbi_prd_*/apache-tomcat-5.5.25_prd_*/logs/catalina.o
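The source sends mail through its own system; purely as an illustration, the alarm side could also be sketched with Logstash's stock email output (the level field, addresses, and subject are all hypothetical):

    filter {
      grok {
        match => { "message" => "%{LOGLEVEL:level}" }  # pull a log level out of the line if present
      }
    }
    output {
      if [level] == "ERROR" {
        email {
          to      => "ops@example.com"       # hypothetical recipient
          from    => "logstash@example.com"  # hypothetical sender
          subject => "BI log error on %{host}"
          body    => "%{message}"
        }
      }
    }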
Mutate: http://www.logstash.net/docs/1.4.2/filters/mutate
Use Logstash to extract ORA errors from the alert log of Oracle. The log format is as follows:

    ALTER DATABASE open
    Errors in file d:\oracle\diag\rdbms\hxw168\hxw168\trace\hxw168_ora_6148.trc:
    ORA-01589: To open a database you must use the RESETLOGS or NORESETLOGS option
    ORA-1589 signalled during: alter database open...

The Logstash configuration:

    input {
      file {
        codec => plain {
          charset => "CP936"  # the encoding on Windows is cp936
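To actually pull the ORA code out, a grok filter along these lines could be used (a sketch; the field names are illustrative):

    filter {
      grok {
        # capture the numeric ORA error code and the trailing message text
        match => { "message" => "ORA-%{INT:ora_code}: %{GREEDYDATA:ora_message}" }
      }
    }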
Types in Logstash
Array
Boolean
Bytes
Codec
Hash
Number
Password
Path
String
Array
An array can be a single string value or multiple values. If you specify the same setting multiple times, it appends to the array.
Example: "/var/log/messages", "/var/log/*.log", "/data/mysql/mysql.log"
Boolean
A boolean must be either true or false.
Example: true
Bytes
A bytes field is a string field that represents a valid unit of bytes. It is a convenient-t
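Several of these types can be seen together in one configuration; a sketch with illustrative values:

    input {
      file {
        path => [ "/var/log/messages", "/var/log/*.log" ]  # array
        start_position => "beginning"                      # string
        sincedb_write_interval => 15                       # number
        codec => "plain"                                   # codec
      }
    }
    filter {
      mutate {
        add_field => { "env" => "prod" }  # hash (hypothetical field)
      }
    }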
Fields are indexed in ES using automatic detection; for example, IP and date auto-detection are on by default while numeric auto-detection is off by default, so dynamic mapping can index documents automatically. When specific field types need to be specified, you can use a mapping to define them when the index is created. The default index settings in Logstash are template-based. First we need to specify a default mapping file; the contents of the file are as follows:
{
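The template file is then wired in on the elasticsearch output; a sketch of that wiring (the host and path are hypothetical):

    output {
      elasticsearch {
        host               => "127.0.0.1"                    # hypothetical ES node
        template           => "/etc/logstash/template.json"  # hypothetical path to the mapping file above
        template_name      => "logstash"
        template_overwrite => true                           # replace any existing template of the same name
      }
    }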
1. Configure log4j.properties:

    log4j.rootLogger=INFO,debug,logstash
    log4j.appender.logstash=org.apache.log4j.net.SocketAppender
    log4j.appender.logstash.Port=4560
    log4j.appender.logstash.RemoteHost=10.0.0.5
    log4j.appender.logstash.ReconnectionDelay=60000
    log4j.appender.logstash.LocationInfo=true

2. Modify the Logstash input component (favblog-log4j.conf) to output the log to Elasticsearch:

    input {
      log4j {
        host => "10.0.0.5"
        mode => "server"
        type => "log4j-json"
        port => 4560
      }
    }
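Step 2 says the events go on to Elasticsearch, but the output section is cut off in the source; a minimal hedged completion (the address and index name are hypothetical):

    output {
      elasticsearch {
        host  => "10.0.0.5"                # hypothetical ES address
        index => "favblog-%{+YYYY.MM.dd}"  # hypothetical daily index
      }
    }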
Benefits of unified real-time log collection:
1. Quickly locate the problem machine in the cluster.
2. No need to download the entire log file (often quite large, so downloading takes a long time).
3. Logs can be analyzed statistically:
   a. Find the most frequently occurring exceptions, for tuning.
   b. Count crawler IPs.
   c. Analyze user behavior, do cluster analysis, and so on.
Based on the above requirements, I adopted ELK (Elasticsearch + Logstash + Kibana).
The core of the logstash-forwarder source consists of the following roles (modules):
Prospector: finds the files matching paths/globs and starts harvesters, submitting each file to a harvester.
Harvester: reads and scans a file, submitting the resulting events to the spooler.
Spooler: acts as a buffer pool; when it reaches its size limit or its timer expires, it flushes the events in the pool to the publisher.
Publisher: connects to the network (the connection is authenticated via SSL) and transfers th
Elasticsearch + Logstash + Kibana configuration
There are a lot of articles about installing Elasticsearch + Logstash + Kibana, so that is not repeated here; only some of the finer details are covered.
Considerations for installing in AWS EC2
Remember to open ports 9200, 9300, and 5601.
Do not use the external IP for the Elasticsearch address, otherwise you will pay for wasted data transfer; use the internal IP "ip-10-1
Benefits: the project log is written to Logstash and then sent to Elasticsearch, which makes it easy to search and view logs as well as do report analysis. Logstash is a data acquisition tool with a variety of input channels, such as files, TCP, UDP, and so on. Collecting log files requires the files to be stored on the server and a Logstash service to be started there, which is not easy to deploy quickly, whereas the TCP/UDP approach relat
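A minimal sketch of the TCP channel described above (the port is hypothetical, and the codec assumes the application writes one JSON event per line):

    input {
      tcp {
        port  => 5000          # hypothetical listening port
        codec => "json_lines"  # one JSON event per line from the application
      }
    }
    output {
      elasticsearch { host => "127.0.0.1" }  # hypothetical ES address
    }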