Jaeger supports Elasticsearch (ES) as backend storage, which is convenient for queries and system extension.
Run with Docker-compose
Environment preparation
Reference project: https://github.com/rongfengliang/nginx-opentracing-demo
Docker-compose file
version: '3'
services:
  nginx:
    image: opentracing/nginx-opentracing
    networks:
      trivial_example:
        aliases:
          - nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./jaeger-config.json:/etc/jaeger-config.json
    expose:
      - "8080"
    ports:
Reposted from: http://zhaoyanblog.com/archives/687.html
The shards of an Elasticsearch index are fixed once the index is created and will not change during its lifetime, so when creating an index it is reasonable to set the shard count according to the estimated data size. Distributing shards evenly across the cluster effectively balances the cluster's load, so we should try to ensure that shards are evenly distributed across the cluster. Each shard has its own number.
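The sizing advice above can be sketched as a small heuristic. This is a hedged illustration, not an Elasticsearch rule: the 30 GB-per-shard target and the function name `suggested_shard_count` are my own assumptions.

```python
import math

def suggested_shard_count(estimated_data_gb, target_shard_size_gb=30):
    """Suggest a primary-shard count from the estimated index size.

    The 30 GB per-shard target is an assumed rule of thumb, not an
    Elasticsearch requirement; tune it for your own hardware.
    """
    return max(1, math.ceil(estimated_data_gb / target_shard_size_gb))

print(suggested_shard_count(5))    # 1 -- a small index needs few shards
print(suggested_shard_count(150))  # 5
```

Remember that the number chosen here cannot be changed later without reindexing, which is exactly why the estimate matters up front.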
When building the Elasticsearch database, I first used the recommended Logstash tool to import data, but it was very uncomfortable to use, so I wanted to use Perl's excellent regular expressions to filter and classify the data before importing it into Elasticsearch. Searching CPAN turned up the Search::Elasticsearch module. The module's documentation on CPAN is fairly concise, so my experience using it is summarized as follows:
1. Writing data:
use Search::Elasticsearch;
my $e
-s elasticsearch
# cd /data/elasticsearch/bin
# ./elasticsearch
Check whether Elasticsearch started successfully
Simple curl test
# curl http://172.16.220.248:9200
3. Install Logstash and filebeat
Filebeat obtains data on each server and sends it to Logstash, which then processes the data.
3.1 Install Logstash
# tar -zxvf logstash-5.6.3.tar.gz
# mv logstash-5.6.3 /data/logstash
3.2 Install Filebeat
Download and start filebeat, and use
To treat date fields as dates, numeric fields as numbers, and string fields as full-text or exact string values, Elasticsearch needs to know what type each field contains. This type and field information is stored in the mapping. Elasticsearch supports the following simple field types:
Type            Field types
String          string
Whole number    byte, short, integer, long
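As a sketch of how these field types appear in an index mapping (the index name `book`, the field names, and the 2.x-era `string` type are illustrative assumptions, not from the original article):

```python
import json

# Hypothetical mapping using the simple field types listed above:
# "string" for text and "integer" for whole numbers (type names follow
# the ES 2.x era this article appears to describe).
mapping = {
    "mappings": {
        "book": {
            "properties": {
                "title": {"type": "string"},
                "pages": {"type": "integer"},
            }
        }
    }
}
print(json.dumps(mapping, indent=2))
```

Building the body as a dict and serializing it guarantees the JSON is well formed before it is ever sent to Elasticsearch.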
An index can be initialized when it is created, for example by specifying the number of shards and replicas. Here `library` is the index name:
curl -XPUT 'http://192.168.1.10:9200/library/' -d '{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  }
}'
curl -XGET 'http://192.168.1.10:9200/library/_settings'
curl -XGET 'http://192.168.1.10:92
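A quick sanity check on the settings above: with 5 primary shards and 1 replica, the cluster holds 10 shard copies in total. A minimal Python sketch, mirroring the curl body:

```python
import json

# The settings body from the curl command above, built as a Python dict.
# With 5 primaries and 1 replica the cluster holds 5 * (1 + 1) = 10
# shard copies in total.
settings = {"settings": {"index": {"number_of_shards": 5,
                                   "number_of_replicas": 1}}}

index = settings["settings"]["index"]
total_copies = index["number_of_shards"] * (1 + index["number_of_replicas"])
print(json.dumps(settings))
print(total_copies)  # 10
```

This is why a single-node cluster with replicas > 0 stays yellow: the replica copies have nowhere to go.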
Install VirtualBox
brew cask install vagrant
Vagrantfile
A Vagrantfile describes the requirements of a virtual machine environment using a Ruby DSL. When describing Docker containers, Vagrant makes each container appear to be running in its own dedicated virtual machine. In fact this is an illusion, because each Docker container actually runs on one of several proxy virtual machines.
Therefore, two Vagrantfiles are necessary: one file is used to define the proxy virtual machine (Prov
1. Elasticsearch word segmentation
Elasticsearch supports Chinese word segmentation, but by default text is segmented character by character. With the standard analyzer `standard`, for example, you can check how text is segmented as follows:
http://localhost:9200/iktest/_analyze?pretty&analyzer=standard&text=中华人民共和国
The example above uses the standard analyzer; the segmentation result is as follows:
{"token
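For illustration only, the per-character behaviour of the standard analyzer on Chinese text can be imitated in a few lines of Python. This is not the Elasticsearch implementation, just a sketch of the shape of its output:

```python
# Imitate the standard analyzer's per-character splitting of Chinese
# text; each "token" is a single character with its position.
def standard_like_tokens(text):
    return [{"token": ch, "position": i} for i, ch in enumerate(text)]

tokens = standard_like_tokens("中华人民共和国")
print([t["token"] for t in tokens])  # ['中', '华', '人', '民', '共', '和', '国']
```

Splitting "中华人民共和国" into seven single-character tokens is exactly why a dedicated Chinese analyzer such as IK is usually preferred.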
For example, the /home/husen/config/ directory contains these 5 files:
// in1.conf, in2.conf, filter1.conf, filter2.conf, out.conf
// We start Logstash with /logstash-5.5.1/bin/logstash -f /home/husen/config
// Logstash automatically loads all 5 configuration files and merges them into 1 overall configuration
2. Are the input, filter, and output sections in multiple Logstash configuration files independent of each other?
The answer: no.
For example:
## The content of in1.conf is as follows:
input {
  file {
    path =>
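Because the merged configuration shares every input and output, one common workaround is to tag each input and guard filters and outputs with conditionals. A sketch (the path, the `type` value, and the host below are illustrative, not taken from the original files):

```
# in1.conf -- tag events from this input
input {
  file {
    path => "/var/log/app1.log"
    type => "in1"
  }
}

# out.conf -- only route events tagged "in1" to this output
output {
  if [type] == "in1" {
    elasticsearch { hosts => ["localhost:9200"] }
  }
}
```

Without such conditionals, every event from every input flows through every filter and out of every output in the merged configuration.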
Elasticsearch Learning Notes (4): Mapping
A brief introduction to mapping: Elasticsearch is a schema-less system, but that does not mean it has no schema; rather, it guesses the field types you want based on the underlying types of the JSON source data. A mapping in Elasticsearch is similar to a data type in a statically typed language, but a mapping carries some meaning beyond a language's data type. Con
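This "guessing" can be sketched as a tiny type-dispatch function. The table is deliberately simplified (real dynamic mapping also detects dates, chooses numeric widths, and more), and the function name is my own:

```python
# Sketch of how dynamic mapping might guess an ES field type from the
# underlying JSON value type. NOT the real algorithm -- a simplified
# illustration only.
def guess_es_type(value):
    if isinstance(value, bool):   # check bool before int: bool is an int subclass
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "double"
    if isinstance(value, str):
        return "string"
    return "object"

doc = {"title": "ES notes", "views": 42, "score": 4.5, "draft": False}
print({k: guess_es_type(v) for k, v in doc.items()})
```

Note the `bool` check must precede the `int` check, since Python booleans are a subclass of `int`; real JSON has the same subtlety between `true` and `1`.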
echo '}' >> composer.json
./composer.phar install
First, an analogy with relational databases:
Relational DB -> Databases -> Tables -> Rows -> Columns
Elasticsearch -> Indices -> Types -> Documents -> Fields
An Elasticsearch cluster can contain multiple indices (databases); each index can contain multiple types (tables), each type contains multiple documents (rows), and each document contains multiple fields (columns).
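The analogy maps directly onto Elasticsearch's document URLs: /index/type/id plays the role of database/table/row key. A small sketch (the `library` and `book` names are illustrative):

```python
# The relational-to-ES analogy as a lookup table, plus the document
# path it implies (/index/type/id ~ database/table/row key).
ANALOGY = {
    "Databases": "Indices",
    "Tables": "Types",
    "Rows": "Documents",
    "Columns": "Fields",
}

def doc_path(index, doc_type, doc_id):
    return "/{}/{}/{}".format(index, doc_type, doc_id)

print(ANALOGY["Rows"])                 # Documents
print(doc_path("library", "book", 1))  # /library/book/1
```

So a GET on /library/book/1 is roughly "fetch row 1 from table book in database library".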
To cr
:172.17.203.210
2.3 Elasticsearch Common plug-in installation
Head: a cluster management tool that provides data visualization and add/delete/query/search operations on documents.
# installation Command
./bin/plugin install mobz/elasticsearch-head
Access path: http://localhost:9200/_plugin/head/
Kopf: an Elasticsearch management tool that also provides APIs for ES cluster operations.
# installation Command
./bin/plugin install lmenezes/elasticsearch-kopf
Acces
This article is included in the Linux Operation and Maintenance Enterprise Architecture Combat series.
1. Collecting and cutting a company's custom logs
Many companies' logs do not match the default log format of the services they run, so we need to cut (parse) them.
1. Sample log to be cut:
2018-02-24 11:19:23,532 [143] DEBUG Performancetrace 1145 http://api.114995.com:8082/api/Carpool/QueryMatchRoutes 183.205.134.240 null 972533 310000 TITTL00
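Before writing the Logstash grok filter, it can help to prototype the cut in Python against the sample line above. The field names here (thread, logger, cost, client_ip) are my own guesses at the log's schema, not the company's real one:

```python
import re

# Prototype of the "cutting" a Logstash grok filter would perform on
# the sample log line; field names are assumptions for illustration.
LINE = ("2018-02-24 11:19:23,532 [143] DEBUG Performancetrace 1145 "
        "http://api.114995.com:8082/api/Carpool/QueryMatchRoutes "
        "183.205.134.240 null 972533 310000 TITTL00")

PATTERN = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) "
    r"\[(?P<thread>\d+)\] "
    r"(?P<level>\w+) "
    r"(?P<logger>\w+) "
    r"(?P<cost>\d+) "
    r"(?P<url>\S+) "
    r"(?P<client_ip>\S+)"
)

m = PATTERN.match(LINE)
print(m.group("timestamp"), m.group("level"), m.group("client_ip"))
```

Once the regex matches reliably here, each named group translates into a grok semantic in the Logstash filter.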
1. Preparation
Download Elasticsearch version 2.3.4: https://www.elastic.co/downloads/past-releases/elasticsearch-2-3-4
Download the package required to synchronize the database: https://codeload.github.com/jprante/elasticsearch-jdbc/tar.gz/2.3.4.0
Download the IK Chinese analyzer: https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v1.9.4/elasticsearch-analysis-ik-1.9.4.zip
2. Run Elasticsearch
Extract it:
tar vxf elasticsearch-2.3.4.tar
Elasticsearch is ready to start in
Written up front: problems encountered when using ELK's Logstash to process MySQL slow-query logs: 1. The test database had no slow log, so there was no log information, which caused anomalies in the ip:9200/_plugin/head/ interface (log data suddenly appeared, and after deleting it the index disappeared); 2. Problems with the log-processing script; 3. The current single-node configuration script file is /usr/local/logstash-2.3.0/config/slowlog.conf ("the full script file appears at the end"): output { elastics