Distributed search elasticsearch Java API (7) -- synchronize data with MongoDB


Elasticsearch provides the river module for pulling data from external data sources into Elasticsearch, and an official CouchDB river plug-in is available. Because our project uses MongoDB, we looked for a MongoDB synchronization plug-in and found elasticsearch-river-mongodb on GitHub.

This plug-in was originally written by Aparo. Its initial approach was to read a MongoDB collection, record the _id of the last document, and poll MongoDB at fixed intervals for documents with an _id greater than the recorded one, indexing any that it found. The drawback of this approach is that only newly inserted data is synchronized; updated and deleted documents are missed. Later, richardwilly98 and others changed the plug-in to synchronize by reading the MongoDB oplog. Because MongoDB replicates data between the machines of a cluster through the oplog collection, every change to the data is reflected there, which keeps the data in Elasticsearch consistent with MongoDB. They also added the ability to index files stored in GridFS, which works very well.
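The oplog-based approach described above can be sketched independently of MongoDB: replaying a stream of operation records (insert, update, delete) against an index keeps the two stores consistent, including the updates and deletes that id-polling misses. A toy in-memory model, not the plug-in's actual code (the Op record and applyOplog method are illustrative names):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OplogReplay {
    // One oplog entry: "i" = insert, "u" = update, "d" = delete (MongoDB's op codes).
    record Op(String op, String id, String doc) {}

    // Replay oplog entries against an in-memory "index"; every change in the
    // source store, not just new inserts, is reflected in the result.
    static Map<String, String> applyOplog(List<Op> oplog) {
        Map<String, String> index = new LinkedHashMap<>();
        for (Op op : oplog) {
            switch (op.op()) {
                case "i", "u" -> index.put(op.id(), op.doc()); // insert or overwrite
                case "d" -> index.remove(op.id());             // deletes propagate too
            }
        }
        return index;
    }

    public static void main(String[] args) {
        List<Op> oplog = List.of(
            new Op("i", "1", "first draft"),
            new Op("u", "1", "edited draft"),  // an update that id-polling would miss
            new Op("i", "2", "second doc"),
            new Op("d", "2", null));           // a delete that id-polling would miss
        System.out.println(applyOplog(oplog)); // {1=edited draft}
    }
}
```

With only the id-polling strategy, the index would still contain "first draft" and the deleted document; replaying the oplog leaves it matching the source.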

However, the modified plug-in is still somewhat unsatisfactory. It assumes that the local database (which holds the oplog) and the application database share the same user name and password; if their credentials differ, the plug-in cannot be used. In addition, it synchronizes every field in the MongoDB collection, but we did not want certain fields in the index. We therefore modified the plug-in to authenticate against the local database and the application database separately, and added an optional field-selection feature.

 

Running environment: Elasticsearch 0.19.x
MongoDB 2.x in a cluster (replica set) environment
Note: this plug-in only supports MongoDB running in a cluster environment, because only then does the oplog collection exist.

Installation:
Install the elasticsearch-mapper-attachments plug-in (used to index files stored in GridFS):
%ES_HOME%\bin\plugin.bat -install elasticsearch/elasticsearch-mapper-attachments/1.4.0

Install elasticsearch-river-MongoDB (the synchronization plug-in):
%ES_HOME%\bin\plugin.bat -install laigood/elasticsearch-river-MongoDB/laigoodv1.0.0

 

Creating the river:

curl method:

$ curl -XPUT "localhost:9200/_river/mongodb/_meta" -d '
{
  "type": "mongodb",
  "mongodb": {
    "db": "test",
    "host": "localhost",
    "port": "27017",
    "collection": "testdb",
    "fields": "title,content",
    "gridfs": "true",
    "local_db_user": "admin",
    "local_db_password": "admin",
    "db_user": "user",
    "db_password": "password"
  },
  "index": {
    "name": "test",
    "type": "type",
    "bulk_size": "1000",
    "bulk_timeout": "30"
  }
}'

db: name of the database to synchronize
host: MongoDB host (localhost by default)
port: MongoDB port
collection: name of the collection to synchronize
fields: comma-separated names of the fields to synchronize (all fields by default)
gridfs: whether the collection is a GridFS file collection (set to "true" if it is)
local_db_user: user name for the local database (omit if no authentication)
local_db_password: password for the local database (omit if no authentication)
db_user: user name for the database being synchronized (omit if no authentication)
db_password: password for the database being synchronized (omit if no authentication)
name: index name (must not already exist)
type: index type
bulk_size: maximum number of documents per bulk insert
bulk_timeout: timeout for a bulk insert
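Since the _meta document is plain JSON, the same river settings can also be assembled without any Elasticsearch dependency, which makes the parameter list above concrete. A minimal stdlib-only sketch (the buildRiverMeta helper is an illustrative name, not part of the plug-in; a real project would use the jsonBuilder shown in the Java API section or a JSON library):

```java
public class RiverMeta {
    // Assemble the river _meta JSON by hand from the parameters documented above.
    static String buildRiverMeta(String db, String collection, String fields,
                                 String indexName, String indexType) {
        return "{"
            + "\"type\":\"mongodb\","
            + "\"mongodb\":{"
            +     "\"db\":\"" + db + "\","
            +     "\"host\":\"localhost\","
            +     "\"port\":\"27017\","
            +     "\"collection\":\"" + collection + "\","
            +     "\"fields\":\"" + fields + "\""
            + "},"
            + "\"index\":{"
            +     "\"name\":\"" + indexName + "\","
            +     "\"type\":\"" + indexType + "\","
            +     "\"bulk_size\":\"1000\","
            +     "\"bulk_timeout\":\"30\""
            + "}}";
    }

    public static void main(String[] args) {
        // Same settings as the curl example above.
        System.out.println(buildRiverMeta("test", "testdb", "title,content", "test", "type"));
    }
}
```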

Java API method:

client.prepareIndex("_river", "testriver", "_meta")
    .setSource(
        jsonBuilder().startObject()
            .field("type", "mongodb")
            .startObject("mongodb")
                .field("host", "localhost")
                .field("port", 27017)
                .field("db", "testdb")
                .field("collection", "test")
                .field("fields", "title,content")
                .field("db_user", "user")
                .field("db_password", "password")
                .field("local_db_user", "admin")
                .field("local_db_password", "admin")
            .endObject()
            .startObject("index")
                .field("name", "test")
                .field("type", "test")
                .field("bulk_size", "1000")
                .field("bulk_timeout", "30")
            .endObject()
        .endObject()
    ).execute().actionGet();

 

Git address of this plug-in: https://github.com/laigood/elasticsearch-river-mongodb

Original article: http://blog.csdn.net/laigood12345/article/details/7691068

References: http://www.searchtech.pro/articles/2013/02/18/1361191176552.html
