How the Solr cluster updates its configuration

The configuration files in a Solr cluster are updated frequently, and the two files changed most often are schema.xml and solrconfig.xml. Before updating a configuration file, let's first look at the cluster's project structure.

In cluster mode, configuration files such as solrconfig.xml and schema.xml are managed by the ZooKeeper ensemble, so only the shard itself is kept in the local project. Each collection's shards are stored on the server under the Solr root directory /solr/, which also contains a solr.xml file. Entering a shard directory, you see only a data directory and a core.properties file: the data directory holds that shard's index data, and opening core.properties with vim shows the shard number, the collection name, the core node name, and so on. The official documentation gives the corresponding SolrCloud local directory structure:

<solr-home-directory>/
   solr.xml
   core_name1/
      core.properties
      data/
   core_name2/
      core.properties
      data/

This structure is the same as what we see.
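For reference, the core.properties of one of these shard replicas is just a small key/value file; a minimal sketch of its typical contents, using illustrative names rather than values from this cluster:

# core.properties for the first replica of shard1 (illustrative values)
name=my_collection_shard1_replica1
shard=shard1
collection=my_collection
coreNodeName=core_node1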

Given the directory structure above, a configuration update has to be pushed to the ZooKeeper cluster before it can take effect. I checked various resources online beforehand, and many of them are invalid or outdated. The approach we used for a while was to log in to ZooKeeper, delete the original schema.xml, upload the new configuration, and then, because the upload does not take effect immediately, restart each Solr node in the cluster in turn (roughly as sketched below). This method does work, but it is far too cumbersome, and in production, casually restarting the Solr service or the server carries a real risk of loss, so it is clearly unreasonable even though we kept using it. The best solution is actually given in the official documentation; it just takes some digging to find.
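For completeness, that old procedure looked roughly like the sketch below; the ZooKeeper path, config name, and port are illustrative, and step 3 has to be repeated on every node in the cluster:

# 1. delete the old schema.xml from ZooKeeper
./server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd clear /solr/configs/my_config/schema.xml
# 2. upload the modified file
./server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile /solr/configs/my_config/schema.xml ./configs/conf/schema.xml
# 3. restart each Solr node in turn (adjust host and port per node)
bin/solr restart -c -p 8983 -z localhost:2181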

Here is the better way. After modifying the schema.xml configuration file, there is no need to log in to ZooKeeper and delete the original file first; it is overwritten automatically. Simply upload it directly with the following command:

./server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile /solr/configs/my_config/schema.xml ./configs/conf/schema.xml

The putfile command takes two arguments: first the absolute path of the configuration file on ZooKeeper (note that this is the path under the config set, not under the collection; the config set name and the collection name can differ), and then the path of the locally modified schema.xml.
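To double-check that ZooKeeper now holds the new version, the getfile command can pull it back down for comparison; a minimal sketch using the same illustrative paths as above:

# download the copy ZooKeeper now holds and diff it against the local file
./server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd getfile /solr/configs/my_config/schema.xml /tmp/schema_from_zk.xml
diff /tmp/schema_from_zk.xml ./configs/conf/schema.xml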

After uploading the file, we can see the updated content through the Solr Admin UI; there are two places to check it there.

But when we then try to update and query, we get errors, and browsing the schema shows that the fields have not been updated; obviously the configuration has not taken effect yet.
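A quick way to confirm this from the command line is to ask the Schema API what the collection is currently running with; a minimal sketch, assuming the illustrative collection name my_collection:

# list the fields of the schema the collection currently has loaded;
# a field added only in the uploaded schema.xml will be missing here until a reload
curl "http://localhost:8983/solr/my_collection/schema/fields?wt=json"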

Next comes the most important step: in the official documentation we locate the Collections API section, which includes a RELOAD action. RELOAD reloads a collection without restarting Solr, and clicking through to its anchor shows exactly how it is used.

The explanation there is clear: since the earlier upload has already overwritten the configuration file in ZooKeeper, we only need to curl or visit the following link:

http://localhost:8983/solr/admin/collections?action=reload&name=my_collection

The request returns success, the cores are reloaded, and everything is back to normal use.
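The same reload can be issued from the command line with curl; a minimal sketch, again using the illustrative collection name my_collection:

# reload every core of the collection in place, without restarting any Solr node;
# a status of 0 in the responseHeader indicates the reload succeeded
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=my_collection&wt=json"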

This process of updating a collection's configuration files was also a process of problem solving. First, we should thoroughly understand the tools and frameworks we use: what they actually do, where they are strong, and the deeper mechanisms and principles behind them, and keep comparing alternatives. Only then will we not be at a loss when a problem appears, and only then can we recognize which parts of a procedure are unreasonable, why they are unreasonable, and what harm or risk they bring to production. Then search with those questions in mind, and preferably read the official documentation: much of what comes up in search results is copied, outdated, or simply wrong. Study the official explanation carefully, think it through yourself, and your knowledge gradually becomes sound, your fundamentals improve, and you slowly move toward mastery. Feeling lost as a beginner is perfectly normal; what matters is the perseverance to keep learning, because only that keeps you up with the pace of technology.

Finally, one sentence from study and work sums it up: never give up, understand not just the what but the why, and teach people to fish rather than just handing them fish.
