Original link: https://yq.aliyun.com/articles/57420
ELK is the abbreviation of Elasticsearch, Logstash and Kibana.
Elasticsearch, as the name implies, is dedicated to search: it is an elastic search technology platform. Its closest alternative is Solr; for a comparison of the two, see the following article:
Elasticsearch vs. Solr selection
The summary of that comparison: unless you have unusual requirements, Elasticsearch is the right choice, and it has some good looks going for it too. There are also plenty of well-known ES users, such as GitHub and Wikipedia.
Logstash can also be read somewhat literally: "stash" means a hiding place, so... well, that is not entirely accurate. Logstash does the log collecting; the stashing is mostly handled by someone else, and as the smart reader can guess, that someone is ES. Even that is not the whole story: in an ELK deployment the stash is indeed ES, but Logstash supports a variety of output destinations, including Redis, S3, MongoDB and Kafka, and it even thoughtfully supports writing files to a remote endpoint over HTTP. In short, whatever output you can think of, the author has already thought of; and if a plugin does not exist yet, you can write one yourself. What, you can't write one? Then stop complaining and find a ready-made one. Logstash likewise supports a variety of input sources, from basic stdin to files to Redis and beyond.
Kibana: uh, the name apparently reminded the author (surely a foreigner fond of Beijing food) of roast lamb. Kibana is mainly used for analyzing and querying data in ES. Strictly speaking, ELK can do without it; the ES head and Bigdesk plugins also work well. But Kibana really does make management and querying a lot more convenient, and who would fight with a bayonet when there is a gun at hand?
Like a druid switching forms, "agent" is just a role that a Logstash process assumes, and the same goes for "indexer". The agent takes the collected logs as input and outputs them to a message broker (such as Redis or Kafka); the indexer takes the broker as its input and outputs to ES, and the actual indexing is done by ES. Throughout the whole pipeline, Logstash is only responsible for input and output; to Logstash itself there is no distinction between agent and indexer. If the log volume is small, you do not need to set up an indexer at all: just make ES the agent's output directly.
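As a sketch of that split, the two roles might be configured like this (the file path and the Redis broker details are illustrative assumptions; the ES address is the one used later in this post):

```
# agent.conf: ships collected logs to a message broker (Redis here, as an example)
input {
    file { path => "/var/log/app/*.log" }
}
output {
    redis { host => "10.0.250.90" data_type => "list" key => "logstash" }
}
```

```
# indexer.conf: reads from the broker and hands events to ES, which does the indexing
input {
    redis { host => "10.0.250.90" data_type => "list" key => "logstash" }
}
output {
    elasticsearch { hosts => ["10.0.250.90:9200"] }
}
```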
## Elasticsearch ##
Environment Preparation:
JDK 7 or above (Logstash 2.1 explicitly requires at least JDK 7).
Elasticsearch, Kibana, Logstash
ES cannot be started with root privileges.
My environment:
10.0.250.90 9200 9300
10.0.250.90 9201 9301
10.0.250.91 9200 9300
The first port is the HTTP service port exposed externally; the second is the port for the cluster transport protocol.
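For reference, the second node on the same host (9201/9301) cannot rely on the defaults and would set its ports explicitly in its own elasticsearch.yml, roughly:

```
http.port: 9201
transport.tcp.port: 9301
```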
```
cd /opt/elasticsearch-2.1.0/config
vim elasticsearch.yml
```
Take the node at 10.0.250.90 (ports 9200/9300) as an example.
Modify the cluster name. By default, ES discovers nodes via broadcast; all nodes that declare the same cluster name will automatically join that cluster.
cluster.name: es
Set node name
node.name: es-node-2

In addition, node.master controls whether a node can be elected as leader, and node.data controls whether it stores index data. Nodes can also carry custom attributes; I set the three nodes to r1-r3:
node.rack: r2
Data storage directory:
path.data: /usr/local/data/
Log directory. The generated log files default to ${cluster.name}_xxx; this can be changed via logging.yml:
path.logs: /var/log/es/
Whether to lock memory at startup. ES runs on the JVM and follows Java's garbage collection: the JVM heap is usually given a minimum (-Xms) and a maximum (-Xmx), which saves memory when idle but leads to frequent GC. ES therefore recommends locking memory at startup, and suggests allocating half of the host's available memory to it:
bootstrap.mlockall: true
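The heap size itself is not set in elasticsearch.yml: in ES 2.x the startup script reads the ES_HEAP_SIZE environment variable and uses it for both -Xms and -Xmx. The 8g below is only an example value for a host with 16 GB of memory:

```
export ES_HEAP_SIZE=8g
bin/elasticsearch
```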
Bind the network address; the default is 127.0.0.1. The port of the cluster transport protocol can be set via transport.tcp.port:
network.host: 10.0.250.91
http.port: 9200
1. Set the initial discovery list; a new node will discover the nodes in this list via unicast. 2. Set the minimum number of master-eligible nodes; below this number, split-brain can occur. The value ES recommends is (number of master-eligible nodes) / 2 + 1. Split-brain means the cluster ends up with multiple active master nodes, each taking over cluster services. For example: suppose nodes A, B and C are all master-eligible, A is active and the others are standby. If A fails, B and C hold an election, and the result could be that B and C each get two votes, so both nodes become active. (This only describes the general cause of split-brain; ES's case is not exactly the same, as its voting mechanism is more like Redis's first-come-first-served style.) Either way, it is best to follow ES's recommendation in the configuration:
discovery.zen.ping.unicast.hosts: ["10.0.250.90:9300", "10.0.250.91:9300"]
discovery.zen.minimum_master_nodes: 2
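The quorum rule above is just arithmetic; a quick sketch to sanity-check the value for this cluster (the function name is my own, not an ES API):

```python
def minimum_master_nodes(master_eligible: int) -> int:
    # ES's recommendation: (number of master-eligible nodes) / 2 + 1
    return master_eligible // 2 + 1

# Three master-eligible nodes, as in the setup above
print(minimum_master_nodes(3))  # 2
```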
Once configured, start Elasticsearch:
bin/elasticsearch
curl http://10.0.250.91:9200
curl http://10.0.250.91:9200/_nodes
Use the curl commands to check the node's status and the status of all nodes in the cluster. I recommend installing the head plugin, which lets you view the cluster state:
bin/plugin install mobz/elasticsearch-head
The bigdesk plugin cannot be used with the current version, because the JSON returned by ES's REST interface is not standard and jQuery throws a parse error. Access http://10.0.250.91:9200/_plugin/head in a browser:

![Node status](http://img.blog.csdn.net/20151226175252443)

## Kibana ##

Installing Kibana is also simple: download the latest version and install it. I installed it on the 251 host.
vim config/kibana.yml
Set the Kibana port:
server.port: 5601
Set the ES node that provides the REST query service; Kibana will then query information through this node:
elasticsearch.url: "http://10.0.250.90:9200"
Set Kibana's own index, used mainly to store the content Kibana saves, such as queries and reports:
kibana.index: ".eslogs"
Start Kibana:
bin/kibana
Open Kibana in the browser. On first use it asks you to create an index pattern for Logstash; the default is logstash-*, where * stands for the date, since a new index is generated each day.

## Logstash ##

Installing Logstash is very simple: download the latest version and install it. I used 2.1.1.
```
bin/logstash -e ''
```

An empty configuration string defaults to stdin as input and stdout as output. I typed 11111 into the console, and the console printed the formatted event:
```
11111
{
       "message" => "11111",
      "@version" => "1",
    "@timestamp" => "2015-12-26T10:00:23.422Z",
          "type" => "stdin",
          "host" => "0.0.0.0"
}
```
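The rubydebug codec prints the event as a Ruby-style hash (note the `=>`); serialized as standard JSON, which is what an output with `codec => "json"` would send, the same event can be inspected with any JSON tool. A minimal sketch, with the field values copied from the run above:

```python
import json

# The event above, rewritten in standard JSON syntax
event = json.loads("""
{
  "message": "11111",
  "@version": "1",
  "@timestamp": "2015-12-26T10:00:23.422Z",
  "type": "stdin",
  "host": "0.0.0.0"
}
""")
print(event["message"])      # 11111
print(event["@timestamp"])   # 2015-12-26T10:00:23.422Z
```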
Next, let's test with ES as an output source. In the Logstash root directory:
```
mkdir config
touch config/logstash-indexer.conf
vim config/logstash-indexer.conf
```
Edit logstash-indexer.conf and define stdin as the input, with stdout and ES as the outputs:
```
input {
    stdin {
        type => "stdin-input"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["10.0.250.90:9200"]
        codec => "json"
    }
}
```
Start Logstash:
bin/logstash -f config/logstash-indexer.conf -l /var/log/logstash/logstash.log
Type "Hello this a test message" and press Enter.
In Kibana, you can see the corresponding log information.