49. Python distributed crawler builds a search engine with Scrapy: implementing search results pagination with Django and Elasticsearch (search engine)


Logical processing functions

Calculating the search elapsed time

Before starting the search: start_time = datetime.now() gets the current time.
At the end of the search: end_time = datetime.now() gets the current time.
last_time = (end_time - start_time).total_seconds() subtracts the start time from the end time to get the elapsed time, converted to seconds.
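As a minimal standalone sketch of this timing pattern (the match_all query body is only illustrative, and a local Elasticsearch instance on 127.0.0.1 is assumed, as in the view code below):

from datetime import datetime
from elasticsearch import Elasticsearch

client = Elasticsearch(hosts=["127.0.0.1"])            # assumes a local Elasticsearch instance

start_time = datetime.now()                            # time before the search
response = client.search(index="lagou",                # any search call can be timed this way
                         body={"query": {"match_all": {}}})
end_time = datetime.now()                              # time after the search
last_time = (end_time - start_time).total_seconds()    # elapsed time in seconds
print(last_time)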

views.py

from django.shortcuts import render
# Create your views here.
from django.http import HttpResponse
from django.views.generic.base import View
from app1.models import lagouType                      # import the Elasticsearch (search engine) mapping class
import json
from elasticsearch import Elasticsearch                # import the native Elasticsearch (search engine) interface
from datetime import datetime

client = Elasticsearch(hosts=["127.0.0.1"])            # connect to the native Elasticsearch


def indexluoji(request):
    print(request.method)                              # print the method of the user request
    return render(request, 'index.html')


def suggestluoji(request):                             # search autocomplete logic
    key_words = request.GET.get('s', '')               # get the request word
    re_datas = []
    if key_words:
        s = lagouType.search()                         # instantiate the Elasticsearch (search engine) search query class
        s = s.suggest('my_suggest', key_words, completion={
            "field": "suggest",
            "fuzzy": {"fuzziness": 1},
            "size": 5
        })
        suggestions = s.execute_suggest()
        for match in suggestions.my_suggest[0].options:
            source = match._source
            re_datas.append(source["title"])
    return HttpResponse(json.dumps(re_datas), content_type="application/json")


def searchluoji(request):                              # search logic
    key_words = request.GET.get('q', '')               # get the requested word
    page = request.GET.get('p', '1')                   # get the requested page number
    try:
        page = int(page)
    except:
        page = 1

    start_time = datetime.now()                        # get the current time
    response = client.search(                          # native Elasticsearch search() method, so native Elasticsearch statements are supported
        index="lagou",                                 # set the index name
        doc_type="biao",                               # set the type (table) name
        body={                                         # write the Elasticsearch statement
            "query": {
                "multi_match": {                       # multi_match query
                    "query": key_words,                # query keyword
                    "fields": ["title", "description"] # query fields
                }
            },
            "from": (page - 1) * 10,                   # which result to start from
            "size": 10,                                # how many results to get
            "highlight": {                             # keyword highlighting
                "pre_tags": ['<span class="keyWord">'],  # highlight start tag
                "post_tags": ['</span>'],                # highlight end tag
                "fields": {                              # highlight settings
                    "title": {},                         # highlighted field
                    "description": {}                    # highlighted field
                }
            }
        }
    )
    end_time = datetime.now()                          # get the current time
    last_time = (end_time - start_time).total_seconds()  # end time minus start time, converted to seconds
    total_nums = response["hits"]["total"]             # get the total number of query results
    if (total_nums % 10) > 0:                          # count the number of pages
        paga_nums = int(total_nums / 10) + 1
    else:
        paga_nums = int(total_nums / 10)

    hit_list = []                                      # list that stores the searched information, returned to the html page
    for hit in response["hits"]["hits"]:               # loop over the query results
        hit_dict = {}                                  # dictionary that stores one result
        if "title" in hit["highlight"]:                # if the title field is among the highlighted fields
            hit_dict["title"] = "".join(hit["highlight"]["title"])                      # get the highlighted title
        else:
            hit_dict["title"] = hit["_source"]["title"]                                 # otherwise get the non-highlighted title
        if "description" in hit["highlight"]:          # if the description field is among the highlighted fields
            hit_dict["description"] = "".join(hit["highlight"]["description"])[:500]    # get the highlighted description
        else:
            hit_dict["description"] = hit["_source"]["description"]                     # otherwise get the non-highlighted description
        hit_dict["url"] = hit["_source"]["url"]        # get the url to return
        hit_list.append(hit_dict)                      # add the dictionary to the list

    return render(request, 'result.html', {
        "page": page,                                  # current page
        "total_nums": total_nums,                      # total number of results
        "all_hits": hit_list,                          # result list
        "key_words": key_words,                        # search word
        "paga_nums": paga_nums,                        # number of pages
        "last_time": last_time                         # search time
    })                                                 # render the page and return the list and search word to the html template
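These views still need to be wired to URLs. The project's urls.py is not shown in this excerpt; the following is only a minimal sketch, assuming Django 1.x-style url() routing (as was common when this series was written) and hypothetical /suggest/ and /search/ paths that must match whatever the templates request:

from django.conf.urls import url          # Django 1.x/2.x style routing (assumption)
from django.contrib import admin
from app1 import views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^$', views.indexluoji, name='index'),           # home/search page
    url(r'^suggest/$', views.suggestluoji, name='suggest'),  # autocomplete endpoint (hypothetical path)
    url(r'^search/$', views.searchluoji, name='search'),     # search results endpoint (hypothetical path)
]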

HTML

<!DOCTYPE html>
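The full result.html template is not reproduced in this excerpt. Below is only a minimal sketch of the pagination-related part, assuming the context variables passed by searchluoji above (page, paga_nums, total_nums, last_time, key_words, all_hits) and a hypothetical /search/ route; the markup and class names are assumptions, not the original template:

<p>Found {{ total_nums }} results for "{{ key_words }}" in {{ last_time }} seconds</p>

{% for hit in all_hits %}
  <div class="result">
    <a href="{{ hit.url }}"><h3>{{ hit.title|safe }}</h3></a>      <!-- |safe renders the highlight <span> tags -->
    <p>{{ hit.description|safe }}</p>
  </div>
{% endfor %}

{% if page > 1 %}
  <a href="/search/?q={{ key_words }}&p={{ page|add:'-1' }}">Previous</a>
{% endif %}
<span>Page {{ page }} of {{ paga_nums }}</span>
{% if page < paga_nums %}
  <a href="/search/?q={{ key_words }}&p={{ page|add:'1' }}">Next</a>
{% endif %}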

Results:

