Elastic Stack Product Brief
- Install JDK 1.8
- Download and install ES
- Run: bin/elasticsearch
Download the tar package, unzip it, and run.
On startup it prints basic information such as the cluster name and the Lucene version.
Elasticsearch Configuration Instructions:
The configuration files are located in the config directory:
- elasticsearch.yml: ES-related settings
- jvm.options: JVM-related parameters, such as heap size
- log4j2.properties: log output configuration
-Xms2g -Xmx2g
If your machine does not have much memory, these values can be lowered.
YML Key Configuration Instructions (see the sketch after this list):
- cluster.name: the cluster name; nodes with the same cluster name form one cluster
- node.name: the node name, used to tell the nodes of a cluster apart
- network.host/http.port: the bind address and port, used by the HTTP and transport services
- path.data: data storage path
- path.logs: log storage path
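A minimal elasticsearch.yml along these lines (the node name and paths are illustrative assumptions, not from the original notes):

cluster.name: my-cluster              # nodes sharing this name join the same cluster
node.name: node-1                     # distinguishes this node within the cluster
network.host: 127.0.0.1               # bind address for HTTP and transport
http.port: 9200                       # HTTP port
path.data: /var/lib/elasticsearch     # where index data is stored
path.logs: /var/log/elasticsearch     # where logs are written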
ElasticSearch Configuration Instructions:
Development and Production Mode
- Whether the transport address is bound to localhost (via network.host) decides which mode applies
- In development mode, failed configuration checks only produce warnings at startup
- In production mode, failed configuration checks are reported as errors and the node exits
These bootstrap checks run before startup; as soon as you bind to a real (non-loopback) IP, they are enforced.
A second way to modify parameters:
bin/elasticsearch -Ehttp.port=19200
Settings can be overridden on the command line with -E<setting>=<value>.
Starting an Elasticsearch cluster locally:
bin/elasticsearch
bin/elasticsearch -E http.port=8200 -E path.data=node2
bin/elasticsearch -E http.port=7200 -E path.data=node3
Check whether the nodes formed a cluster via the API:
GET localhost:8200/_cat/nodes?v
In the _cat/nodes output, the * in the master column marks the elected master node.
GET localhost:8200/_cluster/stats
shows many details about the cluster.
Kibana Installation and operation
Download and install Kibana
Run
bin/kibana
Download the same version as ES. Kibana is built on Node.js, so download the build for your platform.
In the config directory, set elasticsearch.url in kibana.yml to point at your ES instance.
Kibana uses port 5601 by default.
Kibana Configuration Details
- Configuration is located in the config folder
kibana.yml Key Configuration Instructions
- server.host/server.port: the address and port used to access Kibana;
  to make Kibana reachable from outside, change server.host to a non-loopback address
- elasticsearch.url: the address of the Elasticsearch instance to connect to
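A minimal kibana.yml sketch (the host value is an illustrative assumption):

server.host: "0.0.0.0"                      # listen address; 0.0.0.0 allows external access
server.port: 5601                           # default Kibana port
elasticsearch.url: "http://localhost:9200"  # the ES instance Kibana connects to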
Kibana Common Functions
- Discover: search and view data
- Visualize: build charts
- Dashboard: combine charts into a comprehensive display
- Timelion: advanced visual analysis of time-series data (driven by a small query language)
- DevTools: developer tools
- Management: configuration management
ElasticSearch Common Terms
- Document: a JSON document, the unit of data
- Index: a collection of documents (like a database in MySQL)
- Type: the category of data within an index
- Field: a document attribute
- Query DSL: the JSON-based syntax for querying
ElasticSearch CRUD
- Create: create a document
- Read: read a document
- Update: update a document
- Delete: delete a document
Create a document with ID 1 under type person in index accounts.
The document body is defined in JSON; its keys are the document's fields, i.e., the attributes of the entity.
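The original JSON was shown in a screenshot; a minimal sketch of such a request (field names and values are illustrative assumptions):

PUT /accounts/person/1
{
  "name": "john",
  "age": 30,
  "job": "engineer"
}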
Click Collapse in Kibana's lower-left corner to make the interface more compact.
Read a document by its ID:
GET /accounts/person/1
Update a document:
POST /accounts/person/1/_update
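The update body wraps the changed fields in a doc object; a minimal sketch (the field is illustrative):

POST /accounts/person/1/_update
{
  "doc": {
    "job": "manager"
  }
}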
Delete a document, or a whole index:
DELETE /accounts/person/1
DELETE /accounts
About ES queries
- URI Search: append the query after the URL with the q parameter
- Query DSL: queries written in JSON and sent in the request body
GET /accounts/person/_search?q=john
GET /accounts/person/_search
{
  "query": {
    "term": {
      "name": {
        "value": "alfred"
      }
    }
  }
}
The second request sends the query in the request body.
The Query DSL is documented in the Elasticsearch reference docs.
Beats Getting Started
Lightweight Data Shippers
- Filebeat: log files
- Metricbeat: metric data (CPU, memory, Nginx, etc.) for visual analysis and reporting
- Packetbeat: network data
- Winlogbeat: Windows event logs
- Heartbeat: health checks
Filebeat Introduction
- Processing flow: Input, Filter, Output
Filebeat can output to many destinations: ES, Logstash, Kafka, Redis, and more.
It is made up of prospectors and harvesters:
a prospector manages the inputs and finds the files to read; one harvester is started per file to read its contents.
Filebeat Input Configuration Introduction
The configuration uses YAML syntax.
filebeat.prospectors is a YAML array, so more than one prospector can be configured.
- input type: log
- paths: specifies the log file paths; multiple paths can be configured.
There are two input types: log and stdin. See the sketch below.
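A minimal input configuration sketch, assuming Filebeat 5.x's filebeat.prospectors key and an illustrative log path:

filebeat.prospectors:
- input_type: log            # read log files line by line
  paths:
    - /var/log/nginx/*.log   # multiple paths may be listed here
- input_type: stdin          # the other input type: read from standard input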
Filebeat Output Configuration Introduction
- Console & ElasticSearch & Logstash & Kafka & Redis & File
When outputting to Elasticsearch, if ES has authentication enabled, add a username and password.
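A sketch of the Elasticsearch output (host and credentials are illustrative assumptions):

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  username: "elastic"    # only needed when ES has authentication enabled
  password: "changeme"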
During development you often output to the console; pretty: true formats the JSON.
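The console output sketch:

output.console:
  pretty: true    # pretty-print the JSON events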
Filebeat Filter Configuration Introduction
- Input-time processing: include_lines & exclude_lines & exclude_files
  include_lines/exclude_lines decide whether a line is read in; exclude_files skips files whose names match a pattern.
- Output pre-processing: processors
  drop_event: when a condition matches, drop the whole event instead of outputting it
  drop_fields: drop a field
  decode_json_fields: parse the JSON found in the matched fields
  include_fields: keep only certain fields
For example, when the message field matches a regular expression, beginning with DBG, the event is dropped; see the sketch below.
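This mirrors the drop_event example in the Filebeat docs (the exact pattern is an assumption):

processors:
- drop_event:
    when:
      regexp:
        message: "^DBG:"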
Here the inner field contains JSON that was stored as a string; the decode_json_fields processor parses it back into structured data, as sketched below.
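A sketch, assuming the field is named inner:

processors:
- decode_json_fields:
    fields: ["inner"]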
Filebeat + ElasticSearch Ingest Node
Filebeat itself has limited capability for data transformation.
Elasticsearch 5.0 added the Ingest Node:
- a new node type
- processes and transforms data before it is written to ES
- configured through the Pipeline API
To get data into ES quickly:
write the appropriate Filebeat configuration, configure the ingest pipeline, and set up the Kibana dashboards.
Filebeat Modules were officially introduced for an out-of-the-box experience.
For example the nginx module: specify the Nginx log path, enable the module, and import the bundled Kibana dashboards, as sketched below.
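A sketch of running a module, assuming the Filebeat 5.x flag syntax:

./filebeat -e -modules=nginx -setup    # enable the nginx module and load dashboards/pipelines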
The modules also serve as references for best practices.
Filebeat collecting Nginx Log
- Collect logs through stdin
- Output the results via the console
Demo Practice
Download Filebeat from the Beats page; it is developed in Go, so pick the build for your platform.
The data directory stores the read state of the collected logs; the package also contains the executable and the most complete reference configuration.
Modules are the Filebeat Modules described above.
head -n 2 ~/Downloads/nginx_logs/nginx.log
Setting the input and output of Filebeat:
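A sketch of nginx.yml for this demo (stdin in, console out), assuming Filebeat 5.x syntax:

filebeat.prospectors:
- input_type: stdin    # read log lines from standard input
output.console:
  pretty: true         # print each event as pretty JSON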
Then feed it two log lines and observe the console output:
head -n 2 ~/Downloads/nginx_logs/nginx.log|./filebeat -e -c nginx.yml
Using Filebeat to process the logs:
two lines go in and Filebeat outputs two events; each Nginx log line becomes one JSON event.
Packetbeat Introduction
Real-time capture of network packets with automatic parsing of application-layer protocols:
ICMP (v4 and v6), DNS, HTTP, MySQL, Redis, and more.
Think of it as a lightweight Wireshark.
Packetbeat Parsing HTTP Requests
Parsing ElasticSearch HTTP Requests
interfaces.device specifies which network interface to sniff.
On Linux it can be set to any, but on macOS a concrete interface such as the loopback lo0 must be given.
protocols.http with port 9200 captures the HTTP traffic going to Elasticsearch.
send_request defaults to false; enabling it records the body of HTTP requests, which is otherwise not captured.
include_body_for records the body data of the given content types, e.g. JSON.
output.console with pretty: true prints the captured events to the console.
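Putting this together, a sketch of es.yml (the device name and content type are assumptions for macOS and JSON traffic):

packetbeat.interfaces.device: lo0          # network interface to sniff (macOS loopback)
packetbeat.protocols.http:
  ports: [9200]                            # capture HTTP traffic to Elasticsearch
  send_request: true                       # record the HTTP request body, not just metadata
  include_body_for: ["application/json"]   # keep JSON bodies
output.console:
  pretty: true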
Packetbeat Run
sudo ./packetbeat -e -c es.yml -strict.perms=false
Packet capture requires elevated privileges; -strict.perms=false skips the configuration file's permission check.
Download Packetbeat from the Beats page and extract it; the directory layout is very similar to Filebeat's. With ES running, configure the input and output in es.yml and start Packetbeat with the command above.
Logstash Getting Started
Data Shipper
- ETL: Extract, Transform, Load
Logstash is an open-source, server-side data processing pipeline: it extracts data from multiple sources simultaneously, transforms it, and finally sends it to wherever you want to store it.
The filter stage is where Logstash is far more powerful than Beats:
- grok: regex-based patterns that turn unformatted data into formatted data
- mutate: add, delete, modify, and replace event fields
- drop, date, and other common processing plugins
Output: can send to a wide variety of destinations.
Logstash Configuration
Process flow: input and output configuration.
A command like the following is handy for quick local debugging.
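A sketch using the -e flag, which takes an inline pipeline config (the codec is an assumption):

bin/logstash -e "input { stdin {} } output { stdout { codec => rubydebug } }"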
Grok
Provides a rich set of reusable patterns built on regular expressions,
so that unstructured data can be structured for processing.
Date
Converts a string-typed time field into a timestamp type, which eases subsequent processing:
later queries can filter by time and aggregations can group by time.
Mutate
Handles adding, deleting, modifying, and replacing fields.
Filter configuration Grok Example
Grok turns unstructured data into structured data.
In %{IP:client}, IP is a predefined pattern (a named regular expression) and client is the field name the match is stored under. A sketch follows.
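This mirrors the classic example from the Logstash docs (the sample line and field names come from that example, not from these notes):

# sample input line:
# 55.3.244.1 GET /index.html 15824 0.043
filter {
  grok {
    match => {
      "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}"
    }
  }
}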
Logstash Demo
Logstash runs on the JVM: download and unzip it.
The demo uses three filter plugins: date (string to timestamp), geoip, and useragent.
head -n 2 ~/nginx_logs/nginx.log|bin/logstash -f nginx_logstatsh.conf
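The notes do not include the config file itself; a minimal sketch of what such a config could look like, assuming nginx access logs in combined format (pattern and field names are assumptions):

input { stdin {} }
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }      # parse the combined log format
  }
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]      # string -> @timestamp
  }
  geoip { source => "clientip" }                          # derive location from the IP
  useragent { source => "agent" target => "user_agent" }  # parse the UA string
}
output { stdout { codec => rubydebug } }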
Hands-on: Analyzing Elasticsearch Query Statements
Goal:
- Collect the query statements sent to the Elasticsearch cluster
- Analyze the queries: the most common statements, response times, and so on
Scheme:
- Use Packetbeat + Logstash for data collection
- Use Kibana + Elasticsearch for data analysis
A separate monitoring cluster observes the query statements of the production (business) cluster:
Packetbeat monitors the ES port and captures the search queries,
sends them to Logstash for processing, and stores them in the monitoring cluster;
Kibana on the monitoring cluster visualizes the results.
Scheme:
- Production cluster:
  ElasticSearch 9200
  Kibana 5601
- Monitoring cluster:
  ElasticSearch 8200
bin/elasticsearch -Ecluster.name=sniff_search -Ehttp.port=8200 -Epath.data=sniff
Kibana 8601
bin/kibana -e http://127.0.0.1:8200 -p 8601
The production cluster and the monitoring cluster must not be the same cluster; otherwise monitoring adds load to the cluster being monitored, and queries about queries turn into an endless loop.
Logstash receives the Beats input on port 5044. Only requests containing the search keyword are processed:
the query statement is extracted from the request, along with the index being queried.
Some enrichment is done, and only the query statements are output. See the sketch below.
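The original config was shown in a screenshot; a simplified sketch under those assumptions (the grok pattern, field names, and monitoring-cluster address are illustrative):

input {
  beats { port => 5044 }                    # receive events from Packetbeat
}
filter {
  if "_search" in [request] {
    grok {                                  # pull the target index out of the path
      match => { "path" => "/%{WORD:index}/_search" }
    }
  } else {
    drop {}                                 # keep only search requests
  }
}
output {
  elasticsearch { hosts => ["http://localhost:8200"] }  # the monitoring cluster
}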
Packetbeat's output goes to Logstash on port 5044.
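A sketch of the Packetbeat side (the host is an assumption):

output.logstash:
  hosts: ["127.0.0.1:5044"]   # send captured queries to Logstash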
The author's practice:
My machine already runs an ES + Kibana pair as the search-engine business cluster.
They run on ports 5601 and 9200, respectively.
Next, the monitoring cluster and its Kibana are started.
Out of laziness, and to avoid running out of memory, the monitoring cluster above runs only a single master node.
Modify the Kibana configuration file
At this point the monitoring Kibana runs on 5602 and the monitoring ES on 9201.
First, start Logstash with the sniff_search.conf file:
the beats input listens on port 5044 and the output writes to the monitoring cluster on 9201.
bin/logstash -f sniff_search.conf
On Windows I use the equivalent command, as above.
Prepare Packetbeat
./packetbeat -e -c es.yml -strict.perms=false
packetbeat.bat -e -c sniff_search.yml -strict.perms=false
Running it raised an error: WinPcap could not be found. Install it from https://www.winpcap.org/install/default.htm
Once the index pattern has been added under Management, you can view the data in Discover,
and use Visualize to create pie charts and more.
That covers the basic Kibana usage.
Summary and Suggestions
Learn to consult and search the official documentation.
Learn to ask questions well in the community.
Learn to broaden your horizons, e.g., with Elastic Daily.