Lucene-based case development: getting to know the case


Please cite the source when reposting: http://blog.csdn.net/xiaojimanman/article/details/43192055


Sorry, I spent the past few days preparing the overall framework design of the case, so updates were interrupted for several days. Please bear with me.



Case Study
Before we start the formal case development posts, let's take a look at an overview of the case demo to understand what the case is.

From this, we can see that the case mainly collects free novel resources from the web with a crawler, stores those resources in our own database, creates a Lucene index for the data in the database that needs to be searchable, and then displays the data through a web service. In this process we need to write a crawler (collection program), a backend interface (database search & Lucene search), and a web front-end for display. The technologies used in these three parts are described below.
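The actual code for these steps comes in later posts; purely as a rough illustration of the "create a Lucene index for the database data" step, here is a minimal sketch using the Lucene IndexWriter API. A recent Lucene version is assumed, and the class name NovelIndexer and the fields id/name/author/intro are made up for the example, not the series' real schema:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

import java.io.IOException;
import java.nio.file.Paths;

public class NovelIndexer {

    /** Adds one novel record (already stored in the database) to an on-disk Lucene index. */
    public static void indexNovel(String indexDir, String id, String name,
                                  String author, String intro) throws IOException {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get(indexDir)), config)) {
            Document doc = new Document();
            // The id is stored but not tokenized, so it can be used to look the record up in the database.
            doc.add(new StringField("id", id, Field.Store.YES));
            // Name, author and introduction are tokenized so they can be searched by keyword.
            doc.add(new TextField("name", name, Field.Store.YES));
            doc.add(new TextField("author", author, Field.Store.YES));
            doc.add(new TextField("intro", intro, Field.Store.YES));
            writer.addDocument(doc);
        }
    }
}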


Web Front-end

The web front-end will be built on the Bootstrap framework and will interact with the backend data through JavaScript. After a preliminary design, the front-end mainly consists of four pages: a homepage (for operations and promotion), a book list page (for displaying results of keyword, tag, category, and other searches), an overview page, and a reading page. The four pages are shown below (these four mock-ups are just simple sketches):


The homepage displays operational or promotional content compiled by the operations staff.


The list page displays search results by book keyword, category, tag, author, status, and so on.


The overview page displays the attributes of a book and the list of chapters.


The reading page displays the content of a specific chapter.



Search backend
The search backend will mainly perform information retrieval based on Lucene, with MySQL as the database. The search backend provides the data interfaces required by the web front-end.
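As a rough sketch of the Lucene half of such an interface (again assuming a recent Lucene version and the hypothetical id/name fields from the indexing sketch above; the real interface in this series may look different):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

import java.nio.file.Paths;

public class NovelSearcher {

    /** Searches the "name" field of the novel index and prints the top 10 hits. */
    public static void search(String indexDir, String keyword) throws Exception {
        try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(indexDir)))) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("name", new StandardAnalyzer()).parse(keyword);
            TopDocs topDocs = searcher.search(query, 10);
            for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
                Document doc = searcher.doc(scoreDoc.doc);
                // The stored id can then be used to fetch the full record from MySQL.
                System.out.println(doc.get("id") + "\t" + doc.get("name"));
            }
        }
    }
}

In this setup, Lucene returns the matching ids, and the full records would then be loaded from MySQL before the data is handed to the web front-end.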

Crawler
The crawler will use HttpClient to simulate browser behavior and collect content from vertical sites (free novel sites).
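The crawler posts come later; as a minimal sketch of what "use HttpClient to simulate browser behavior" can look like with Apache HttpClient 4.x (the class name PageFetcher, the User-Agent string, and the timeout values are assumptions for the example):

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class PageFetcher {

    /** Downloads the HTML of one page, pretending to be an ordinary browser. */
    public static String fetch(String url) throws Exception {
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(5000)
                .setSocketTimeout(5000)
                .build();
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet get = new HttpGet(url);
            get.setConfig(config);
            // A browser-like User-Agent so the target site treats the crawler as a normal visitor.
            get.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36");
            try (CloseableHttpResponse response = client.execute(get)) {
                return EntityUtils.toString(response.getEntity(), "UTF-8");
            }
        }
    }
}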

This post is mainly a brief introduction to the overall case, so that you know what the case is and won't be lost about what the later posts are doing.

Note: Before introducing the Lucene search backend, the next few posts will focus on the utility classes used in the search backend. Although some of these classes were introduced in previous posts, they will be introduced again here so that, when we get to the actual coding, you are not left unable to find certain methods or unsure of what they do.


Ps: I recently found that other websites may repost this blog without the source link above. If you want to view the original, please visit the source link at the top of this post.
