Solr and .NET series (6): Solr scheduled incremental indexing and security


Solr's incremental (delta) index is triggered by an HTTP request (for example, a GET to /solr/dataimport?command=delta-import&clean=false&commit=true), but issuing such requests by hand obviously cannot meet real-world requirements. What we need is an automatic incremental index. The apache-solr-dataimportscheduler package provides a timer that runs the incremental index on a schedule.

First, download apache-solr-dataimportscheduler-1.0.jar from: http://solr-dataimport-scheduler.googlecode.com/files/apache-solr-dataimportscheduler-1.0.jar

The official address is sometimes inaccessible; in that case, use this mirror: http://pan.baidu.com/s/1pJt3KZD

The configuration steps are as follows.

1. Copy apache-solr-dataimportscheduler-1.0.jar to C:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\solr\WEB-INF\lib (where C:\Program Files\Apache Software Foundation\Tomcat 7.0 is the Tomcat installation path).

2. Modify the web.xml file under C:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\solr\WEB-INF and add:

<listener>
    <listener-class>
        org.apache.solr.handler.dataimport.scheduler.ApplicationListener
    </listener-class>
</listener>

3. Extract dataimport.properties from apache-solr-dataimportscheduler-1.0.jar and put it in C:\Program Files\Apache Software Foundation\Tomcat 7.0\solr\conf (create the conf directory if it does not exist).

4. Restart tomcat.

Description of the dataimport.properties configuration items:

#################################################
#       dataimport scheduler properties         #
#################################################

#  to sync or not to sync
#  1 - active; anything else - inactive
syncEnabled=1

#  which cores to schedule
#  in a multi-core environment you can decide which cores you want syncronized
#  leave empty or comment it out if using single-core deployment
syncCores=game,resource

#  solr server name or IP address
#  [defaults to localhost if empty]
server=localhost

#  solr server port
#  [defaults to 80 if empty]
port=8080

#  application name/context
#  [defaults to current ServletContextListener's context (app) name]
webapp=solr

#  URL params [mandatory]
#  remainder of URL
params=/select?qt=/dataimport&command=delta-import&clean=false&commit=true

#  schedule interval
#  number of minutes between two runs
#  [defaults to 30 if empty]
interval=1

#  interval of redoing the full index, in minutes; default 7200, i.e. 5 days
#  set to empty, 0, or comment it out to disable
reBuildIndexInterval=2
reBuildIndexParams=/select?qt=/dataimport&command=full-import&clean=true&commit=true

#  start time from which the rebuild interval is counted;
#  first actual execution = reBuildIndexBeginTime + reBuildIndexInterval*60*1000
#  two formats: yyyy-MM-dd HH:mm:ss, or HH:mm:ss (the latter fills in the date part automatically)
reBuildIndexBeginTime=03:10:00

The above is the original file, with its comments alongside each item. A translated and annotated version follows:

#################################################
#       dataimport scheduler properties         #
#################################################

# whether to enable the scheduled sync: 1 = enabled
syncEnabled=1

# cores to run the incremental index on; separate multiple cores with commas,
# e.g. collection1,collection2
syncCores=collection1

# server address and port (self-explanatory)
server=192.168.0.9
port=8080
webapp=solr

# request params used to execute the incremental (delta) import
params=/dataimport?command=delta-import&clean=false&commit=true

# how often to run, in minutes (default 30)
interval=30

# Someone modified the package to also rebuild the full index on a schedule.
# The original package only supports incremental indexing and ignores the
# following three items, so there is no need to delete them.
reBuildIndexInterval=7200
reBuildIndexParams=/dataimport?command=full-import&clean=true&commit=true
reBuildIndexBeginTime=03:10:00

If you search for other articles, you will see people saying that the official package has a bug because it submits via POST. After testing, however, the official package worked normally for me, and the configuration above runs properly in my project.

If you want to learn how scheduled full-index rebuilding was added to the original package, and about the bugs in that package, see: http://www.denghuafeng.com/post-242.html

OK. After completing the steps above, your Solr can run incremental indexing on a schedule.

The following describes Solr security issues.

As we have seen, Solr performs all of its operations through HTTP requests. The problem is that additions and deletions are also done over HTTP, so if someone else learns your Solr server's address, your data is vulnerable to attack. The solution here is to restrict access in Tomcat so that only fixed IP addresses can connect; others will then be unable to reach your Solr.

Modify C:\Program Files\Apache Software Foundation\Tomcat 7.0\conf\server.xml and add IP address restrictions.

Global setting, effective for all applications under Tomcat.
Add the following line to server.xml and restart the server:

<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.*" deny=""/>

Place this line just before </Host>.

Example:
1. Only allow access from 192.168.1.10:

<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.10" deny=""/>

2. Only allow access from the 192.168.1.* network segment:

<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.*" deny=""/>

3. Only allow access from 192.168.1.10 and 192.168.1.30:

<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="192.168.1.10,192.168.1.30" deny=""/>

4. Restrict access based on the host name:

<Valve className="org.apache.catalina.valves.RemoteHostValve" allow="abc.com" deny=""/>

 

 


If you want to import database data (from multiple tables) into Solr, you configure it in the DataImportHandler configuration file (db-data-config.xml).
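A minimal sketch of such a db-data-config.xml, assuming a hypothetical SQL Server database `mydb` with two tables `item` and `item_detail` joined on the item id; the driver class, connection string, credentials, and table/field names are all assumptions to adapt to your own schema:

```xml
<dataConfig>
  <!-- JDBC connection to SQL Server; url and credentials are placeholders -->
  <dataSource type="JdbcDataSource"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://localhost:1433;databaseName=mydb"
              user="sa" password="secret"/>
  <document>
    <!-- query: used by full-import;
         deltaQuery/deltaImportQuery: used by delta-import -->
    <entity name="item" pk="id"
            query="SELECT id, name FROM item"
            deltaQuery="SELECT id FROM item WHERE last_modified &gt; '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT id, name FROM item WHERE id = '${dih.delta.id}'">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
      <!-- child entity: a second table queried per parent row -->
      <entity name="item_detail"
              query="SELECT description FROM item_detail WHERE item_id = '${item.id}'">
        <field column="description" name="description"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```

The deltaQuery finds the rows changed since the last import (the handler records that time in its dataimport.properties), and deltaImportQuery then fetches each changed row by primary key; this is what the scheduled delta-import command described above actually runs.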

If a uniqueKey is configured in schema.xml (by default the unique field is id), then documents with the same id are updated rather than added.
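For reference, the corresponding declarations in schema.xml look like this (the field name id is the common default from the Solr examples; your field name and type may differ):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```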
