After the ELK deployment is complete, a few settings need to be adjusted and optimized.
1. Elasticsearch: adjust the heap size
Elasticsearch defaults to a 1 GB heap, which is too small for most workloads, so allocate roughly half of the machine's memory to the JVM.
To view system memory:
# free -m
             total       used       free     shared    buffers     cached
Mem:         24028      20449       3579          0        185       8151
-/+ buffers/cache:      12112      11916
Swap:            0          0          0
The machine has 24 GB, so the plan is to give 12 GB to the JVM.
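As a quick sanity check, the split can be computed from the `free -m` total (a minimal sketch; 24028 MB is the value shown in the output above):

```shell
# Halve the machine's total RAM (in MB) to get a starting heap size for ES.
total_mb=24028                 # "total" column from `free -m`
heap_mb=$((total_mb / 2))
echo "heap: ${heap_mb} MB"     # roughly 12 GB, hence ES_HEAP_SIZE=12g
```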
How to Modify
(1). Modify the ES_HEAP_SIZE environment variable
vim /etc/profile
export ES_HEAP_SIZE=12g
source /etc/profile
(2). Specify the heap size when starting
./bin/elasticsearch -Xmx12g -Xms12g
Restart to check whether the configuration takes effect:
/etc/init.d/elasticsearch restart
View:
[screenshot omitted: El2.png]
2. Configuration file modification
vim elasticsearch.yml
bootstrap.mlockall: true
Setting this to true locks the process memory. Elasticsearch performs poorly once the JVM starts swapping, so make sure it never swaps: set ES_MIN_MEM and ES_MAX_MEM to the same value, make sure the machine has enough memory allocated to ES, and allow the Elasticsearch process to lock memory.
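Note that mlockall only takes effect if the operating system allows the process to lock memory. A common way to grant this, sketched below (the user name and paths are assumptions; match them to your install):

```shell
# /etc/security/limits.conf -- allow the service account to lock memory
# ("elasticsearch" is an assumed user name):
#   elasticsearch soft memlock unlimited
#   elasticsearch hard memlock unlimited
# After restarting, check whether locking succeeded (requires a running node):
curl -s 'http://localhost:9200/_nodes/process?pretty' | grep mlockall
```

If the output shows "mlockall" : false, the limits above (or insufficient free memory) are the usual cause.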
3. Error: [FIELDDATA] Data too large, data for [proccessdate]
The error is explained as follows:
The size of fielddata is only checked after the data has been loaded. What happens if the next query loads so much fielddata that the total exceeds the available heap? Unfortunately, it produces an OOM exception.
A circuit breaker is used to control fielddata loading: it estimates the amount of memory the current request would use and rejects the request if it exceeds the limit.
# curl -XGET http://localhost:9200/_cluster/settings?pretty
{
  "persistent" : {
    "indices" : {
      "breaker" : {
        "fielddata" : {
          "limit" : "40%"
        }
      },
      "store" : {
        "throttle" : {
          "max_bytes_per_sec" : "100MB"
        }
      }
    }
  },
  "transient" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "all"
        }
      }
    }
  }
}
Note: the fielddata breaker limits how much memory fielddata may use; the default is 60% of the heap. Here it is set to 40%.
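The 40% limit shown above can be applied at runtime through the cluster settings API (a sketch assuming a local node listening on port 9200; adjust the host for your cluster):

```shell
# Lower the fielddata breaker to 40% of the heap; "persistent" survives
# full cluster restarts.
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "40%"
  }
}'
```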
4. Modify the Logstash refresh_interval
The default is 5 seconds, which is too frequent for this workload, so lengthen the interval:
# curl -XGET http://localhost:9200/_template/logstash?pretty
{
  "logstash" : {
    "order" : 0,
    "template" : "logstash-*",
    "settings" : {
      "index" : {
        "refresh_interval" : "5s"
      }
    },
    .....
Change it to 20 seconds:
cd /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.0.7-java/lib/logstash/outputs/
vim elasticsearch-template.json
{
  "template" : "logstash-*",
  "settings" : {
    "index.refresh_interval" : "20s"
  },
  ....
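Editing the template only affects indices created afterwards. Existing logstash-* indices can be updated in place through the index settings API (a sketch, assuming a local node on port 9200):

```shell
# Apply the new refresh interval to already-created logstash indices.
curl -XPUT 'http://localhost:9200/logstash-*/_settings' -d '{
  "index" : { "refresh_interval" : "20s" }
}'
```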
Restart Service
/etc/init.d/logstash restart
5. Set up a proxy and authentication
Use Nginx to reverse-proxy Kibana and add basic authentication:
# vim kibana.conf
server {
    listen 80;
    server_name ckl.kibana.com;
    error_log /data/log/kibana_error.log;
    proxy_headers_hash_max_size 5120;
    proxy_headers_hash_bucket_size 640;
    location / {
        proxy_pass http://0.0.0.0:5601;
        auth_basic "Restricted";
        auth_basic_user_file /usr/local/nginx/conf/ssl/site_pass;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $http_x_forwarded_for;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
# cat /usr/local/nginx/conf/ssl/site_pass
ckl:1tfeettlwlf7zgmg
The authentication password file is generated with htpasswd:
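For reference, an entry like the one above can be generated with the htpasswd tool, or with openssl if httpd-tools is not installed (the user name and password below are placeholders):

```shell
# With httpd-tools installed:
#   htpasswd -c /usr/local/nginx/conf/ssl/site_pass ckl
# Without it, openssl can produce a compatible APR1-MD5 hash:
user=ckl
hash=$(openssl passwd -apr1 'changeme')   # 'changeme' is a placeholder password
echo "${user}:${hash}"                    # append this line to site_pass
```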
If people know the port, they can still reach Kibana directly on 5601 and bypass the authentication. The simplest fix is a port redirect:
iptables -t nat -A PREROUTING -p tcp --dport 5601 -j REDIRECT --to-port 80
This article is from the "Take a Deep Breath Again" blog; please keep this source: http://ckl893.blog.51cto.com/8827818/1774668
ELK: configuration and optimization based on business needs