Elasticsearch 1.7.3 upgrade to 2.4.2 record

Tags: kibana, logstash

We use ELK as our log analysis system. Elasticsearch 1.7.3 had been running for nearly a year. We recently upgraded one cluster to ES 5.1.1 but ran into quite a few problems, so we are upgrading the other cluster to 2.4.2, the more stable version recommended by the community. To make the upgrade easier to manage, all operations are performed uniformly with ansible.


One: Stop Monit daemon

# All logstash and es processes in the cluster are monitored by monit, so stop the monitoring daemon first. If you are interested in monit, see my other article, "Using M/Monit for visual centralized process management".

$ ansible elksjs -m shell -a '/opt/monit/bin/monit -c /opt/monit/conf/monitrc unmonitor all'


Two: Stop writes to the es cluster

# A Kafka cluster sits in front, so once the backend stops writing, data simply accumulates in Kafka. Consumption resumes after the cluster is started, and no data is lost.

$ ansible elksjs -m shell -a '/etc/init.d/logstash stop'


Three: After stopping logstash writes, perform a synced flush

# Similar to the Linux sync command: flush in-memory data to disk before the shutdown.

$ curl -XPOST localhost:9200/_flush/synced


Four: Disable shard allocation before shutdown

# Disable shard allocation so that, during startup, the cluster does not rebalance data while some nodes have not yet joined, which would increase the load. Wait until all nodes have joined, then re-enable shard allocation.

$ curl -XPUT localhost:9200/_cluster/settings -d '{"transient": {"cluster.routing.allocation.enable": "none"}}'


Five: Stop es

# Stop all es nodes.

$ ansible elksjs -m shell -a '/etc/init.d/elasticsearch stop'


Six: Uninstall the old es version

# Uninstall the es package on all nodes.

$ ansible elksjs -m shell -a 'rpm -e elasticsearch-1.7.3-1'


Seven: Install the new package

# Install the new es 2.4.2 package.

$ ansible elksjs -m shell -a 'wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.4.2/elasticsearch-2.4.2.rpm -P /opt'
$ ansible elksjs -m shell -a 'rpm -iv /opt/elasticsearch-2.4.2.rpm'


Eight: Restore configuration files and init scripts

# The prerequisite for this step is that the configuration file needs no changes for this upgrade. The configuration for 1.7.3 and 2.4.2 differs only slightly, and mine works as-is on 2.4.2, so I reuse the original configuration and tune it later. If yours has changed, update the configuration file accordingly.

$ ansible elksjs -m shell -a 'cd /etc/init.d/ && rm elasticsearch && mv elasticsearch.rpmsave elasticsearch'
$ ansible elksjs -m shell -a 'cd /etc/elasticsearch/ && rm -rf elasticsearch.yml && mv elasticsearch.yml.rpmsave elasticsearch.yml'
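If the configuration does need updating, a minimal 2.4-style elasticsearch.yml looks roughly like the sketch below. The cluster name, paths, and addresses are placeholders, not values from this upgrade; the notable 1.x-to-2.x differences are that 2.x binds to localhost by default and no longer ships multicast discovery in core.

```yaml
# Minimal elasticsearch.yml sketch for 2.4 (all values are placeholders)
cluster.name: elksjs
node.name: es-node-1
path.data: /data/elk/es
bootstrap.mlockall: true
# 2.x binds to localhost by default; set an address to make the node reachable
network.host: 0.0.0.0
# multicast discovery was moved out of core in 2.x; list unicast seed hosts
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2"]
```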


Nine: Change the owner of the data directories

# Uninstalling the es package also deleted the es user, and installing the new package created a new one, so the es data directories must be chowned back to elasticsearch.

$ ansible elksjs -m shell -a 'chown -R elasticsearch.elasticsearch /data/elk/es'
$ ansible elksjs -m shell -a 'chown -R elasticsearch.elasticsearch /data/es'

Ten: Start Elasticsearch

# Start the es processes. If this step produces no errors you are done; in practice it did not go that smoothly. I hit a number of errors and rolled back to the old version several times before the upgrade finally succeeded.

$ ansible elksjs -m shell -a '/etc/init.d/elasticsearch start'


Eleven: Check whether the cluster is healthy

# After the cluster starts, check whether the nodes have joined and whether the cluster is healthy. My five master nodes joined the cluster automatically, but once the data nodes came up they spent most of their time on index-upgrade operations: es 2.x stores indices across multiple data paths differently from es 1.x, so all the data has to be moved again, and the wait is very long.

$ curl localhost:9200/_cat/health?v
$ curl localhost:9200/_cat/nodes?v
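The status field is the fourth column of the default `_cat/health` output, so a wait loop can key off it. A sketch of the extraction (the sample line below is illustrative, not output from this cluster); in a real loop you would feed `curl -s localhost:9200/_cat/health` into it instead:

```shell
# Extract the status column from a _cat/health line (sample line is illustrative)
line="1482486000 10:20:00 elksjs green 10 5 2000 1000 0 0 0"
status=$(echo "$line" | awk '{print $4}')
echo "$status"
# a wait loop would re-run the curl and sleep until this prints "green"
```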


Twelve: Re-enable shard allocation after the cluster starts

# Once all nodes have joined the cluster, shard allocation can be turned back on.

$ curl -XPUT localhost:9200/_cluster/settings -d '{"transient": {"cluster.routing.allocation.enable": "all"}}'


Thirteen: Download new versions of the head and kopf plugins

# The head plugin and kopf 1.5 used with 1.x did not display properly on es 2.4.2, so I uninstalled them and installed new versions.

$ wget https://codeload.github.com/mobz/elasticsearch-head/zip/master
$ wget https://codeload.github.com/lmenezes/elasticsearch-kopf/tar.gz/v2.1.2
$ tar xf elasticsearch-kopf-2.1.2.tar.gz
$ unzip elasticsearch-head-master.zip
$ mv elasticsearch-kopf-2.1.2 /usr/share/elasticsearch/plugins/kopf
$ mv elasticsearch-head-master /usr/share/elasticsearch/plugins/head



Fourteen: Update Kibana

# We were previously on es 1.x, and Kibana3 is no longer supported on 2.x; but with the large number of dashboards built on Kibana3, and long-standing user habits, I wanted to keep using it. Someone in the community patched the Kibana3 code to support es 2.x, so Kibana3 can happily be used again.

$ wget https://codeload.github.com/heqin5136/kibana3-with-es2/zip/master
# serve kibana3-with-es2/src as the web root

# Kibana4 4.1.4, which we used before, also errors on startup; version 4.6 works.

$ wget https://download.elastic.co/kibana/kibana/kibana-4.6.0-x86_64.rpm


Issues encountered during the upgrade


(1): With bootstrap.mlockall: true set, es logs "Unknown mlockall error 0" on startup.

Workaround: set the locked-memory limit to unlimited with the Linux command: ulimit -l unlimited
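To make the limit survive reboots on an RPM install, the usual places are /etc/security/limits.conf and, assuming the init script sources it, /etc/sysconfig/elasticsearch; a sketch:

```conf
# /etc/security/limits.conf - allow the elasticsearch user to lock memory
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

# /etc/sysconfig/elasticsearch (read by the RPM init script)
MAX_LOCKED_MEMORY=unlimited
```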



(2): After the upgrade, es fails to start with "Failed to created node environment". Uninstalling the es package deleted the es user, and the newly installed package created a new es user with a different ID; the original data directory is still owned by the old ID, so the new es user cannot read the data and startup fails.

[2016-07-24 19:19:16,280][ERROR][bootstrap] Exception
org.elasticsearch.ElasticsearchIllegalStateException: Failed to created node environment

Workaround: chown -R elasticsearch.elasticsearch /data/elk/es


(3): Field names may no longer contain '.'. A kv filter was used to split arbitrary key/value pairs, which generated a large number of random field names, many of them containing '.', so the indices cannot be upgraded.

Starting elasticsearch: Exception in thread "main" java.lang.IllegalStateException: unable to upgrade the mappings for the index [logstash-adn-2016.07.02], reason: [Field name [Adn_tabarticle.articleid] cannot contain '.']
Likely root cause: MapperParsingException[Field name [Adn_tabarticle.articleid] cannot contain '.']
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:278)

Workaround: remove the kv field splitting and wait for the old indices to expire before upgrading.
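If the kv fields are still needed rather than dropped, one alternative is to rewrite the dots before indexing. A hedged logstash filter sketch: `message` as the kv source is an assumption, and de_dot is a separate plugin (logstash-filter-de_dot) that must be installed first.

```conf
filter {
  # split key=value pairs out of the raw message (source field is an assumption)
  kv { source => "message" }
  # replace '.' in field names so es 2.x accepts the mapping
  de_dot { separator => "_" }
}
```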


(4): Different types for the same field cause an es mapping error. A lax conditional in the output plugin let some packetbeat data be written into the logstash index, so the port field in that index has both number and string types; the conflict prevents es from upgrading.

Unable to upgrade the mappings for the index [logstash-2016.12.12], reason: [mapper [port] cannot be changed from type [string] to [long]]

Workaround: tighten the conditional on new logstash output and wait for the conflicting indices to expire.
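A stricter output conditional might look like the sketch below; the `[type] == "packetbeat"` test is an assumption about how the beats events are tagged, not the author's actual config:

```conf
output {
  if [type] == "packetbeat" {
    # keep beats data out of the logstash-* indices
    elasticsearch { hosts => ["localhost:9200"] index => "packetbeat-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { hosts => ["localhost:9200"] index => "logstash-%{+YYYY.MM.dd}" }
  }
}
```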


(5): After es starts, the data nodes run index upgrades, which is time-consuming; many old indices deleted months ago turned out to be upgraded as well.

Workaround: remove the unused index directories from the data directory.


(6): kibana: This version of Kibana requires Elasticsearch ^1.4.4 on all nodes. I found the following incompatible nodes in your cluster: Elasticsearch v2.4.2 @ undefined

Workaround: Kibana 4.1.4 does not match es 2.4.2. Updating to 4.6.1 restored normal use.


(7): kibana 4.6.1 errors with "Courier Fetch Error: unhandled courier request error: Authorization Exception"

Workaround: comment out the http.cors.enabled setting.
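In elasticsearch.yml that amounts to commenting the CORS lines; the allow-origin line below is shown on the assumption it was set alongside http.cors.enabled:

```yaml
# elasticsearch.yml - disable the CORS settings that triggered the authorization error
# http.cors.enabled: true
# http.cors.allow-origin: "*"
```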




This article is from the "Sauce paste" blog, please make sure to keep this source http://heqin.blog.51cto.com/8931355/1886175

