Preparatory work
Download Solr and Tomcat:
solr-4.8.1.tgz, apache-tomcat-7.0.54.tar.gz
Go to /home/cluster and unzip:
tar zxvf apache-tomcat-7.0.54.tar.gz
tar zxvf solr-4.8.1.tgz
Start the SOLR installation and configuration
Create the Solr home directory:
mkdir -p /home/cluster/solrhome
Copy the Solr
Building HBase secondary indexes using SOLR
Building an HBase secondary index using SOLR (contents):
Overview
1. Business scenario description
2. Technical solutions
2.1 Technical solution 1
2.2 Technical solution 2
2.3 Recommendations on indexes
Using hbase-indexer to build an HBase secondary index:
1. Installation environment preparation
2. Configuration
I have been integrating HBase recently and needed to build a secondary index for it so that data queries are more convenient. The SOLR authoritative guide has a chapter on integrating HBase with SOLR, and following the book together with the instructions on the web gets the configuration very close to working. However, HBase Indexer has not been updated for more than a year, and when it is integrated with the latest HBase 1.2.6 and Solr 7.2.1 there are a lot of related interfaces that
Environment: JDK 1.7, Solr 5.3.0, Tomcat 7, mmseg4j-solr-2.3.0
1. SOLR environment construction
1) Unzip Solr 5.3.0.
2) Create a new solr_home and copy the server/solr folder from the extracted files into solr_home.
3) Configure solr_home. Create a new application in SOL
Description: Implementing high-speed full-text indexing in a Linux environment
First, the current environment: CentOS (Linux) 6.3, 64-bit
Second, the required software:
1. Java JDK
2. SOLR, latest stable version: solr-4.5
3. Tomcat, latest stable version: tomcat-7.0.42
4. IK Analyzer, latest stable version of the tokenizer: IKAnalyzer2012
Third, Tomcat installation
1. Install the JDK: yum -y install java-1.6.0-openjdk java-1.6.0-openjdk-devel
2. Download Tomcat: http://m
SOLR is a Lucene-based Java search engine server. SOLR provides faceted search and hit highlighting, and supports multiple output formats (including XML/XSLT and JSON). It is easy to install and configure, and comes with an HTTP-based administration interface. SOLR is already used by many large sites and is relatively mature and stable.
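To make that feature list concrete, here is a minimal Java sketch (not from the original text) that issues a search over HTTP, requesting JSON output with hit highlighting; the host, the core name collection1, and the field title are assumptions to adapt to your own setup:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SolrQueryDemo {
    public static void main(String[] args) throws Exception {
        // wt=json selects JSON output; hl=true turns on hit highlighting for the title field.
        String q = URLEncoder.encode("title:solr", StandardCharsets.UTF_8.name());
        URL url = new URL("http://localhost:8983/solr/collection1/select?q=" + q
                + "&wt=json&hl=true&hl.fl=title");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // raw JSON response, including the highlighting section
            }
        }
    }
}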
SOLR is an open-source enterprise search service based on Lucene. It provides a packaged, ready-to-use solution (using Lucene directly means handling index management, analyzers, and other issues yourself, which is fairly troublesome). SOLR exposes many auxiliary functions as external HTTP services; its core is the integration with Lucene.
Lucid Imagination is the first known ci
Post address: http://aixiangct.blog.163.com/blog/static/9152246120111128114423633/
Significance of SOLR multicore
SOLR multicore is a new feature introduced in SOLR 1.3. Its goal is to let a single SOLR instance host multiple search applications.
We can put different types of data in the same index, or use multiple separate indexes. Based on this, you only n
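For illustration only, a legacy-style solr.xml declaring two cores might look roughly like the following; the core names and instanceDir values are made up, and newer Solr releases use core discovery instead of this file format:

<?xml version="1.0" encoding="UTF-8"?>
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="core0">
    <!-- each core is an independent index with its own conf/ and data/ directories -->
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>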
The software and versions used in this article:
Build environment: Windows 7 x64
SOLR: solr-4.8.0
Java SDK: jdk-7u55-windows-x64
Tomcat: apache-tomcat-7.0.53-windows-x64
Step 1: Install the Java SDK. Go to the Java official website and download JDK 7u55: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. The downloaded file is jdk-7u55-windows-x64.exe. Double-click t
(a) Hive + SOLR overview: As the offline data warehouse of the Hadoop ecosystem, Hive makes it easy to use SQL to analyze huge amounts of historical data offline and, based on the analysis results, to do other things such as report statistics and queries. SOLR, as a high-performance search server, provides fast, powerful full-text retrieval capabilities. (b) Why does Hive need to be integrated with SOLR? Sometimes, we ne
Concept: Apache SOLR is an open-source search server. SOLR is developed in the Java language and is based mainly on HTTP and Apache Lucene. Resources stored in Apache SOLR are stored as Document objects. Each document consists of a series of fields, and each field represents one attribute of the resource. Each Document in
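As an illustration of the document/field structure, a document with two fields in SOLR's XML update format might look like this (the field names id and title are placeholders, not taken from the original text):

<add>
  <doc>
    <field name="id">1001</field>
    <field name="title">hello solr</field>
  </doc>
</add>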
Introduction
SOLR is a stand-alone enterprise search application server that provides an API similar to a web service.
Users can submit an XML or JSON file in a prescribed format to the SOLR server via an HTTP POST request; after parsing the file, the SOLR server adds, deletes, or updates entries in the index library according to the operations it specifies. The user
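As an illustration (not from the original text), a minimal Java sketch that POSTs one JSON document to the update handler; the host, port, core name collection1, and the field names id and title are assumptions to adapt to your own setup:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SolrJsonUpdate {
    public static void main(String[] args) throws Exception {
        // One document expressed as a JSON array; field names are placeholders for your schema.
        String json = "[{\"id\":\"1001\",\"title\":\"hello solr\"}]";

        // commit=true asks SOLR to make the change searchable right away.
        URL url = new URL("http://localhost:8983/solr/collection1/update?commit=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("SOLR responded with HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}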
Elasticsearch is used to process visitor logs so that editors can see, in real time, how the public is responding to different articles.
StackOverflow combines full-text search with geolocation and related data to provide "more like this" suggestions for related questions.
GitHub uses Elasticsearch to retrieve more than 130 billion lines of code.
Every day, Goldman Sachs uses it to index 5 TB of data, and many investment banks use it to analyze stock market movements.
But Elasticsearch
SOLR article roundup:
SOLR principles
Introduction to SolrCloud, a distributed full-text retrieval system: http://my.oschina.net/004/blog/175768
Tokenization when building the index:
SOLR's Chinese word segmentation: http://blog.csdn.net/zhu_tianwei/article/details/46711511
Tokenization at query time:
SOLR with a custom Query Parser
1. Download SOLR, the mmseg4j word-segmentation package, and Tomcat, and decompress them; all of them can be found via Google or Baidu.
2. To use Chinese word segmentation you must set the encoding. Go to the Tomcat installation directory and use vi to edit conf/server.xml, adding URIEncoding="UTF-8" to the HTTP Connector so that the encoding is UTF-8 (see the sketch after this list).
3. Copy apache-solr-*.war from the dist folder under the do
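Regarding step 2 above: the attribute goes on the HTTP Connector element in Tomcat's conf/server.xml. A typical connector would then read as follows (the port and timeout values are Tomcat defaults, shown only for context):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           URIEncoding="UTF-8" />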
Download http://apache.fayea.com/lucene/solr/5.5.3/solr-5.5.3.tgz. After the download succeeds, refer to the official documentation: https://cwiki.apache.org/confluence/display/solr/Running+Solr
Go to the bin directory:
cd /users/yaoyao/downloads/solr-5.5.3/bin
./solr start
Start successfull
How to continue using SOLR in your project
Today's share: when a user creates or modifies an article, AOP is used to update the corresponding data in SOLR, decoupling the article's original logic from the SOLR logic (a sketch follows below).
Without AOP, the SOLR update might look like this:
In this way, the logic of the article itself is tightly coupled to
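A minimal sketch of the AOP idea mentioned above, assuming Spring AOP with AspectJ annotations; ArticleService, Article, and SolrArticleIndexer are hypothetical names, not taken from the original post:

import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// Hypothetical domain object and SOLR helper, defined only to keep the sketch self-contained.
class Article {
    Long id;
    String title;
    String body;
}

interface SolrArticleIndexer {
    void index(Article article);   // e.g. build an update document and POST it to /update
}

// The aspect watches the article service; the service itself never mentions SOLR.
@Aspect
@Component
public class SolrIndexAspect {

    private final SolrArticleIndexer indexer;

    // constructor injection (add @Autowired on older Spring versions)
    public SolrIndexAspect(SolrArticleIndexer indexer) {
        this.indexer = indexer;
    }

    // Fires after a create/update method of the hypothetical ArticleService returns normally.
    @AfterReturning(
        pointcut = "execution(* com.example.article.ArticleService.save*(..))"
                 + " || execution(* com.example.article.ArticleService.update*(..))",
        returning = "article")
    public void syncToSolr(Article article) {
        indexer.index(article);   // push the new or changed article into the SOLR index
    }
}

With this in place, the article service only saves articles; the aspect mirrors each saved or updated article into SOLR, so the two concerns stay decoupled.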
Prerequisites: SOLR/SolrCloud provides a complete data retrieval solution, and HBase provides a mature storage mechanism for very large data sets.
Requirements:
1. Structured data written to HBase must be retrievable.
2. The data volume is large, reaching billions to tens of billions of records.
3. Retrieval has high real-time requirements: updates must become searchable within seconds.
Description: The following is a system architecture built together using
1 IKAnalyzer tokenizer configuration
1.1 Copy IKAnalyzer2012_u6\ikanalyzer2012_u6.jar into the C:\apache-tomcat-6.0.32\webapps\solr\WEB-INF\lib folder.
1.2 Create a new classes folder under the C:\apache-tomcat-6.0.32\webapps\solr\WEB-INF folder, copy IKAnalyzer2012_u6\IKAnalyzer.cfg.xml and IKAnalyzer2012_u6\stopword.dic into the classes folder, then modify IKAnalyzer.cfg.xml and add (a sample IKAnalyzer.cfg.xml is sketched after these steps)
classes under the new
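For reference, and subject to differences between IK Analyzer releases, the IKAnalyzer.cfg.xml placed in WEB-INF/classes typically looks roughly like this; ext.dic is a hypothetical user dictionary and the entry keys are assumptions based on IKAnalyzer2012:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <comment>IK Analyzer extension configuration</comment>
  <!-- optional user dictionary, resolved relative to the classpath (the classes folder) -->
  <entry key="ext_dict">ext.dic;</entry>
  <!-- the stop-word dictionary copied in step 1.2 -->
  <entry key="ext_stopwords">stopword.dic;</entry>
</properties>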
I'm a beginner, so if anything here is wrong, please point it out. SOLR is generally used on Linux, but Linux itself can be a hurdle for beginners, so let's practice on Windows first. SOLR is written in Java, so it can run on either Linux or Windows, and the configuration process is similar on both, so the two can be cross-referenced.
Required files and environment: JDK 1.7+ (with environment variables configured), Tomcat, Solr
1. Download the