After ELK went live, my heart nearly sank: even with 16 GB of RAM and a 16-core CPU, errors kept appearing.
First, Logstash and Elasticsearch error at the same time
Logstash reports a large number of errors, most likely because ES is occupying too much heap; the root cause is an unoptimized ES:
retrying failed action with response code: 503 {:level=>:warn}
too many attempts at sending event. dropping: 2016-06-16T05:44:54.464Z %{host} %{message} {:level=>:error}
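To confirm that ES heap pressure really is the culprit, the node stats API reports heap usage directly (the grep is just to trim the output):
# curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty' | grep heap_used_percent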
Elasticsearch also reports a large number of errors:
too many open files
The max_file_descriptors value of 2048 is far too small:
# curl http://localhost:9200/_nodes/process\?pretty
{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "ZLGPZMQBROYDFVXOY27LFG" : {
      "name" : "Mass Master",
      "transport_address" : "inet[/192.168.153.200:9301]",
      "host" : "localhost",
      "ip" : "127.0.0.1",
      "version" : "1.6.0",
      "build" : "cdd3ac4",
      "http_address" : "inet[/192.168.153.200:9200]",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 943,
        "max_file_descriptors" : 2048,
        "mlockall" : true
      }
    }
  }
}
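You can also read the limit of the running process straight from /proc, using the process id (943) shown above:
# grep 'open files' /proc/943/limits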
Workaround:
Set the open-file limit:
# ulimit -n 65535
Make it take effect at boot:
# vi /etc/profile
Also add it to the ES startup script, then restart elasticsearch:
# vi /home/elk/elasticsearch-1.6.0/bin/elasticsearch
ulimit -n 65535
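Setting ulimit in /etc/profile only covers login shells. A more reliable way to make the limit persistent (a sketch; narrow the wildcard to the ES user if you run one) is /etc/security/limits.conf:
# vi /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535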
# curl http://localhost:9200/_nodes/process\?pretty
{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "_qxvsjl9qogmd13eb6t7ag" : {
      "name" : "Ocean",
      "transport_address" : "inet[/192.168.153.200:9301]",
      "host" : "localhost",
      "ip" : "127.0.0.1",
      "version" : "1.6.0",
      "build" : "cdd3ac4",
      "http_address" : "inet[/192.168.153.200:9200]",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1693,
        "max_file_descriptors" : 65535,
        "mlockall" : true
      }
    }
  }
}
Second, out-of-memory errors
The optimized ES configuration (non-comment lines):
# egrep -v '^$|^#' /home/elk/elasticsearch-1.6.0/config/elasticsearch.yml
bootstrap.mlockall: true
http.max_content_length: 2000mb
http.compression: true
index.cache.field.type: soft
index.cache.field.max_size: 50000
index.cache.field.expire: 10m
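Note that elasticsearch.yml does not set the JVM heap; on ES 1.x this is usually done through the ES_HEAP_SIZE environment variable before startup. A minimal sketch for this 16G machine, assuming the common guideline of roughly half the RAM:
# export ES_HEAP_SIZE=8g
# /home/elk/elasticsearch-1.6.0/bin/elasticsearch -d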
For bootstrap.mlockall: true to take effect, you also need to set:
# ulimit -l unlimited
# vi /etc/sysctl.conf
vm.max_map_count = 262144
vm.swappiness = 1
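Apply the kernel settings immediately instead of waiting for a reboot:
# sysctl -p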
# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127447
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127447
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
# vi /etc/security/limits.d/90-nproc.conf
* soft nproc 320000
root soft nproc unlimited
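The ulimit -l unlimited setting needed for mlockall can be made persistent the same way (a sketch, assuming ES runs as a dedicated elasticsearch user):
# vi /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited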
Third, ES status is yellow
ES cluster health is reported as one of three colors: green, yellow, red.
Green: all primary shards and all replica shards are available
Yellow: all primary shards are available, but not all replica shards are
Red: not all primary shards are available
With only one data node the status is always yellow, because ES never allocates a replica shard on the same node as its primary (note the 161 unassigned replica shards below).
# curl -XGET http://localhost:9200/_cluster/health\?pretty
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 161,
  "active_shards" : 161,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 161,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
Workaround: build an Elasticsearch cluster (to be covered in the next post).
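If you must stay on a single node for now, a standard alternative (not the fix used here) is to drop the replica count to 0 so that no shards remain unassigned and the status turns green:
# curl -XPUT http://localhost:9200/_settings -d '{"index": {"number_of_replicas": 0}}'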
Fourth, Kibana "field not indexed" error
https://rafaelmt.net/en/2015/09/01/kibana-tutorial/#refresh-fields
Fields in the Logstash indices change frequently as new events arrive, so Kibana sometimes complains that a field is not indexed:
Workaround:
Open Kibana, go to Settings > Indices, click logstash-*, then click the refresh icon to reload the field list.
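To verify the refresh from the command line: Kibana 4 stores each index pattern, including its cached field list, as a document in the .kibana index (an assumption that this setup runs Kibana 4.x):
# curl 'http://localhost:9200/.kibana/index-pattern/logstash-*?pretty'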
This article is from the "Kaka West" blog; please keep this source: http://whnba.blog.51cto.com/1215711/1794521