The Linux environment used for installing ELK is CentOS 7, with JDK 1.8.0_144. The ELK version installed is 5.5.1.
First, install Elasticsearch 5.5.1. Download elasticsearch-5.5.1.tar.gz from the official website and extract it. Running the following in the bin directory directly as the root user

./elasticsearch

throws the exception

java.lang.RuntimeException: can not run elasticsearch as root

warning that Elasticsearch cannot be started as the root user.
I created a new elk group and elk user and, following online instructions, changed the owner of the Elasticsearch extraction directory to the elk user. That alone did not work: after switching from root to the elk user, I could not even change into the extraction directory because permission was denied. You also need to make sure the elk user has read and execute permission on the directory before starting Elasticsearch:

sudo chmod -R 755 /opt/elk/elasticsearch-5.5.1
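Putting the steps above together, the user and permission setup can be sketched as follows (run as root; the path is the one used in this article, and the group/user names match the ones created above):

```shell
# Create the elk group and a matching elk user
groupadd elk
useradd -g elk elk

# Hand the extracted directory over to the elk user
chown -R elk:elk /opt/elk/elasticsearch-5.5.1

# Make the tree readable and traversable
chmod -R 755 /opt/elk/elasticsearch-5.5.1
```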
After switching to the elk user, Elasticsearch can be launched, but the startup log shows that the bootstrap checks failed and startup was unsuccessful, with two error messages:

[1]: max file descriptors [4096] for elasticsearch process are too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
The first check indicates that the default maximum number of file descriptors for the elk user running Elasticsearch is 4096, which is too small and needs to be raised to 65536. Modify the /etc/security/limits.conf file and add at the end:

elk soft nofile 65536
elk hard nofile 65536

The elk here is the username that starts Elasticsearch.
The second error means the virtual memory parameter is too small. Executing the command

sysctl -w vm.max_map_count=655360

sets the vm.max_map_count parameter to 655360 immediately. You can also modify the /etc/sysctl.conf file directly, adding at the end

vm.max_map_count=655360

and then executing the sysctl -p command.
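A quick way to check that both limits discussed above have taken effect, after logging in again as the elk user (a sketch; the values printed depend on your system):

```shell
# Per-process open-file limit for the current shell
# (should report 65536 after the limits.conf change and a fresh login)
ulimit -n

# Current vm.max_map_count, read from procfs
# (should report 655360 after sysctl -p)
cat /proc/sys/vm/max_map_count
```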
After Elasticsearch starts, port 9200 cannot be accessed from remote machines; you need to modify the elasticsearch.yml file and set network.host to 0.0.0.0:

# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
At the same time, open port 9200 on the CentOS firewall.
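On CentOS 7 the default firewall is firewalld, so opening port 9200 can be done like this (assumes firewalld is running; adjust the zone if yours differs):

```shell
# Permanently allow TCP traffic on port 9200, then reload the rules
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --reload
```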
After restarting Elasticsearch, execute the curl command from a remote machine:

curl http://xxx.xxx.xxx.xxx:9200
{
  "name" : "qm1a6g-",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "LBAC52_EQZGL6AIURCDZMG",
  "version" : {
    "number" : "5.5.1",
    "build_hash" : "19c13d0",
    "build_date" : "2017-07-18T20:44:24.823Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

That is, you can now get Elasticsearch server information from a remote machine.
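As a quick sanity check, the version number can be pulled out of that response with a small shell pipeline (a sketch; the sample response below is abbreviated from the output above):

```shell
# Abbreviated sample of the JSON returned by the curl command above
response='{"name":"qm1a6g-","version":{"number":"5.5.1","lucene_version":"6.6.0"},"tagline":"You Know, for Search"}'

# Extract the "number" field with sed, without any extra tooling
version=$(printf '%s' "$response" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p')
echo "$version"   # prints 5.5.1
```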
If you set bootstrap.memory_lock to true in the elasticsearch.yml file, to guarantee that memory is locked at boot time, the startup check may fail with:

[1]: memory locking requested for elasticsearch process but memory is not locked

You need to modify the /etc/security/limits.conf file and add the following settings, which control the maximum amount of memory the process may lock:

elk soft memlock unlimited
elk hard memlock unlimited
Then modify the /etc/sysctl.conf file and add

vm.swappiness=0

which tells the kernel to use physical memory as much as possible before swapping.
After making these settings, rebooting the system, and restarting Elasticsearch, the problem still existed. Executing ulimit -l unlimited before starting makes the startup succeed, but that setting is lost after CentOS reboots. Finally, I changed the /etc/security/limits.conf file to the following form:

* soft nofile 65536
* hard nofile 65536
* soft memlock unlimited
* hard memlock unlimited

This applies the nofile and memlock settings to all users. After rebooting the system and restarting Elasticsearch, the memlock check no longer fails.
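To confirm that the memlock limit is in effect for the shell that will launch Elasticsearch (after a fresh login), you can check it directly (a sketch; the expected value after the change is "unlimited"):

```shell
# Maximum locked-memory size for the current shell
ulimit -l
```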
Starting from Elasticsearch 5.0, site plugins are no longer supported, so the head plugin used to query ES can no longer be installed through the elasticsearch-plugin command. Instead, you can install the elasticsearch-head extension for the Chrome browser, or run the elasticsearch-head server standalone and point it at the HTTP port of the Elasticsearch server.
When I installed the head plugin, I referenced the following article:
http://www.cnblogs.com/xing901022/p/6030296.html
The Node.js version used is 6.11.2. The head project depends on Node.js packages that can be installed via the npm or yarn commands; because of the Great Firewall, it is recommended to point the npm and yarn registries at the Taobao mirror:

npm --registry https://registry.npm.taobao.org info underscore
yarn config set registry https://registry.npm.taobao.org

When installing packages with npm, it is recommended to use npm install -g for a global installation, as mentioned in other articles.
The following error occurs when the npm install or yarn install command is executed in the head directory:

npm ERR! phantomjs-prebuilt@2.1.14 install: `node install.js`

You need to install phantomjs-prebuilt separately by executing:

npm install phantomjs-prebuilt@2.1.14 --ignore-scripts
Refer to this article:
http://blog.csdn.net/txl910514/article/details/55135734
After you start the head server with grunt server, you can query ES directly from a remote browser.
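For reference, the overall elasticsearch-head setup can be sketched as follows (the repository URL is the upstream elasticsearch-head project on GitHub, which the referenced articles are based on; the other commands are the ones discussed above):

```shell
# Fetch the head project
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head

# Work around the phantomjs-prebuilt install failure first,
# then install the remaining dependencies
npm install phantomjs-prebuilt@2.1.14 --ignore-scripts
npm install

# Start the head server (it listens on port 9100 by default)
grunt server
```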