Introduction to Solr Learning (I): The Basics


1. What is Solr?

Solr is an open-source, Lucene-based search server that is easy to add to web applications. Solr provides faceted search (i.e., statistics), hit highlighting, and support for multiple output formats, including XML/XSLT and JSON. It is easy to install and configure, and comes with an HTTP-based administration interface. You can use Solr's excellent basic search functionality as-is, or extend it to meet your business needs. Solr's features include:

  • Advanced full-text search capabilities
  • Optimization for high-volume network traffic
  • Standards-based open interfaces (XML and HTTP)
  • An integrated HTML administration interface
  • Scalability: the index can be efficiently replicated to other Solr search servers
  • Flexibility and adaptability through XML configuration
  • An extensible plug-in architecture

2. Downloading Solr

On the Apache site you can find the Lucene/Solr download directory; the latest release at the time of writing is 6.3. Since the 5.x releases, Solr has shipped with the Jetty container, so you can start Solr directly with the bundled script (bin/solr start);

3. Modify the schema.xml file

This file configures Solr's searchable field definitions, field types, tokenizers (analyzers), and so on.

A partial schema.xml configuration:

<field name="_version_" type="long" indexed="true" stored="true"/>
<field name="id" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="goodsId" type="long" indexed="true" stored="true" multiValued="false"/>
<field name="goodsName" type="text" indexed="true" stored="true"/>
<field name="l" type="string" indexed="true" stored="true"/>
<field name="goodsPrice" type="float" indexed="true" stored="true" multiValued="false"/>
<field name="categoryId" type="long" indexed="true" stored="true" multiValued="false"/>
<field name="categoryName" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="brandId" type="long" indexed="true" stored="true" multiValued="false"/>
<field name="brandName" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="suggest" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="searchContent" type="text" indexed="true" stored="true"/>

<uniqueKey>id</uniqueKey>

<fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="float" class="solr.TrieFloatField" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="double" class="solr.TrieDoubleField" precisionStep="0" positionIncrementGap="0"/>
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- in this example, we will only use synonyms at query time
    <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
    -->
    <!-- case-insensitive stop word removal -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
    <filter class="solr.EnglishMinimalStemFilterFactory"/>
    -->
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_en.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPossessiveFilterFactory"/>
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <!-- Optionally you may want to use this less aggressive stemmer instead of PorterStemFilterFactory:
    <filter class="solr.EnglishMinimalStemFilterFactory"/>
    -->
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>

Each field element defines a field of the indexed document. indexed indicates whether the field is indexed: set it to true for fields you need to search or filter on, and set it to false wherever possible for fields you never search.

stored indicates whether the field's value is stored in the index: set it to true for fields that need to be returned and displayed in search results.
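For example, the three common combinations can be declared like this (the searchOnlyText and displayOnlyNote field names are hypothetical, added here only to illustrate the trade-off):

```xml
<!-- searched and displayed: both indexed and stored -->
<field name="goodsName" type="text" indexed="true" stored="true"/>
<!-- matched against but never shown in results: indexed only -->
<field name="searchOnlyText" type="text" indexed="true" stored="false"/>
<!-- shown in results but never searched: stored only -->
<field name="displayOnlyNote" type="string" indexed="false" stored="true"/>
```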

4. Configuring solrconfig.xml

This file holds the configuration of Solr itself, such as the lock type: <lockType>${solr.lock.type:native}</lockType>;

It also covers the maximum number of query connections and the configuration of query components such as highlighting and the spell checker; these will be covered in detail in a later article on Solr optimization;
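As a sketch, the /spell handler used later in the query example could be wired up roughly as follows. This follows the shipped Solr example configs; the component settings and the choice of searchContent as the spell-check field are assumptions, not taken from the original project:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">text_en</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">searchContent</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
  </lst>
</searchComponent>

<requestHandler name="/spell" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```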

A sample solrconfig.xml can be found directly among the files extracted from the Solr distribution;

That covers configuration and startup.

When operating Solr from Java, we use SolrJ:

A. Get the server connection:

public HttpSolrServer getServer() {
	try {
		synchronized (GoodsSolrIndexServiceImpl.class) {
			if (server == null) {
				server = new HttpSolrServer(DEFAULT_URL); // DEFAULT_URL: address of the Solr server
				server.setMaxRetries(1); // example value
				server.setConnectionTimeout(60 * 1000); // TCP connect timeout: 1 minute
				server.setSoTimeout(60 * 1000); // socket read timeout: 1 minute
				server.setDefaultMaxConnectionsPerHost(100); // example value
				server.setMaxTotalConnections(100); // example value
				server.setFollowRedirects(false); // defaults to false
				server.setAllowCompression(true);
			}
		}
	} catch (Exception e) {
		log.error(e.getMessage(), e);
	}
	return server;
}
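The synchronized lazy initialization in getServer() follows the double-checked locking idiom. A minimal generic sketch, with a hypothetical ExpensiveClient standing in for HttpSolrServer:

```java
// Generic sketch of the lazy, synchronized initialization used by getServer().
// ExpensiveClient is a hypothetical stand-in for HttpSolrServer.
public class ClientHolder {
    static class ExpensiveClient {
        final String url;
        ExpensiveClient(String url) { this.url = url; }
    }

    private static volatile ExpensiveClient client; // volatile for safe publication

    public static ExpensiveClient getClient(String url) {
        if (client == null) {                   // first check without locking
            synchronized (ClientHolder.class) { // lock only on the slow path
                if (client == null) {           // re-check under the lock
                    client = new ExpensiveClient(url);
                }
            }
        }
        return client;
    }

    public static void main(String[] args) {
        ExpensiveClient a = getClient("http://localhost:8983/solr/core1");
        ExpensiveClient b = getClient("http://localhost:8983/solr/core1");
        System.out.println(a == b); // prints true: the same instance is reused
    }
}
```

The volatile keyword matters: without it, the double-checked pattern is not safe on the Java memory model.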
B. Add documents

private boolean initDbIndex() {
	try {
		Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
		SolrInputDocument doc = new SolrInputDocument();
		doc.addField("id", UUIDGenerator.getUUID());
		doc.addField("suggest", "test");
		doc.addField("content", "test");
		docs.add(doc);
		doc = new SolrInputDocument();
		doc.setField("id", "123");
		doc.addField("goodsId", "123");
		doc.addField("goodsName", "iPhone7 test the solr");
		doc.addField("searchContent", "iPhone7 test solr");
		doc.addField("goodsPrice", "6999");
		doc.addField("scoreField", "the");
		docs.add(doc);
		if (docs.size() > 0) {
			server.add(docs);
			server.commit();
		}
	} catch (Exception e) {
		log.error("initDbIndex error: " + e.getMessage());
		return false;
	}
	return true;
}
C. Tokenize (analyze) a specified field

public List<String> getFieldDefaultAnalysis(String tokenField, String content) {
	FieldAnalysisRequest request = new FieldAnalysisRequest("/analysis/field");
	request.addFieldName(tokenField); // field name; specify a field whose type supports Chinese tokenization
	request.setFieldValue("");        // field value; may be an empty string, but must be set explicitly
	request.setQuery(content);

	FieldAnalysisResponse response = null;
	try {
		response = request.process(server);
	} catch (Exception e) {
		e.printStackTrace();
	}
	List<String> results = new ArrayList<String>();
	Iterator<AnalysisPhase> it = response.getFieldNameAnalysis(tokenField).getQueryPhases().iterator();
	while (it.hasNext()) {
		AnalysisPhase phase = it.next();
		List<TokenInfo> list = phase.getTokens();
		for (TokenInfo info : list) {
			results.add(info.getText());
		}
	}
	return results;
}

The method above is called when documents are added: we tokenize the goodsName value and add the resulting terms to the suggest field.
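As a rough illustration only (this is NOT Solr's implementation), a toy chain of tokenize + lowercase + stop-word removal shows the kind of term list getFieldDefaultAnalysis returns for a goodsName value:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Toy approximation of an index-time analysis chain. Real Solr analysis is
// driven by the tokenizer and filters configured in schema.xml.
public class ToyAnalyzer {
    private static final List<String> STOPWORDS = Arrays.asList("the", "a", "an");

    public static List<String> analyze(String text) {
        List<String> tokens = new ArrayList<String>();
        for (String raw : text.split("\\W+")) {           // crude tokenizer
            String t = raw.toLowerCase(Locale.ROOT);       // LowerCaseFilter analogue
            if (!t.isEmpty() && !STOPWORDS.contains(t)) {  // StopFilter analogue
                tokens.add(t);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(analyze("iPhone7 test the Solr")); // [iphone7, test, solr]
    }
}
```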

D. Query and parse the corresponding fields

public String search(String keywords) {
	SearchResult result = new SearchResult();
	HttpSolrServer server = null;
	try {
		server = getServer();

		SolrQuery query = new SolrQuery();
		query.set("q", "searchContent:" + keywords);
		query.set("spellcheck.q", keywords);
		query.set("qt", "/spell"); // send the request to the /spell handler
		query.set("qf", "searchContent"); // query field
		query.set("fl", "id,goodsId,goodsName,goodsPrice,score"); // returned fields
		query.setStart(0);
		query.setRows(20);
		QueryResponse rp = server.query(query);
		SolrDocumentList docList = rp.getResults();
		List<Product> products = new ArrayList<Product>();
		if (docList != null && docList.getNumFound() > 0) {
			for (SolrDocument doc : docList) {
				Product product = new Product();
				product.setGoodsId(doc.getFieldValue("goodsId").toString());
				String goodsName = (String) doc.getFieldValue("goodsName");
				product.setGoodsName(goodsName == null ? "" : goodsName);
				if (doc.getFieldValue("goodsPrice") != null) {
					product.setGoodsPrice(doc.getFieldValue("goodsPrice").toString());
				}
				products.add(product);
			}
		}
	} catch (Exception e) {
		log.error(e.getMessage(), e);
	}
	return null; // result assembly omitted in this excerpt
}
q: the query string;

qt: query type, which specifies the request handler used for the query;

fq: filter query; makes full use of the filter query cache to improve retrieval performance, similar to a filter condition;

fl: field list; specifies the fields returned in the results, separated by spaces or commas;

start: offset of the first returned record, mainly used for paging;

rows: number of records returned per page, default 10;

sort: ordering, for example "price desc" for descending by price;

df: the default query field;

defType: the name of the query parser;
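To make these parameters concrete, the sketch below builds the kind of /select URL such a query corresponds to over HTTP. The core name core1 and the helper method are illustrative assumptions, not part of SolrJ:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: how common query parameters appear in the HTTP request that
// SolrJ ultimately sends to the /select handler.
public class SolrQueryUrl {
    public static String buildSelectUrl(String baseUrl, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(baseUrl).append("/select?");
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append('&');
            sb.append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
            first = false;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<String, String>();
        params.put("q", "searchContent:iphone");
        params.put("fl", "id,goodsId,goodsName,goodsPrice,score");
        params.put("start", "0");  // page 1: start = (page - 1) * rows
        params.put("rows", "20");
        params.put("sort", "goodsPrice desc");
        System.out.println(buildSelectUrl("http://localhost:8983/solr/core1", params));
    }
}
```

Note how start and rows together implement paging: page n of 20 results uses start = (n - 1) * 20.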

Note: the code above is pseudo-code extracted from a project, so treat it as a reference only.

That covers the basics of Solr. This write-up is based on what was used in a project, so it may not be fully comprehensive; some details are best learned from the documentation. Below are links to earlier posts of mine that you can read alongside this one:

SOLR basic Configuration, SOLR Java primary application

For your reference.
