No. 371, Python Distributed Crawler Builds a Search Engine with Scrapy: Elasticsearch (Search Engine), Implementing "My Searches" and Popular Searches with Django
The simple implementation principle of "My Searches" (search history)
We can implement it with JS: first use JS to read the search term the user entered,
then keep an array that stores recent search terms.
Check whether the term already exists in the array; if it does, delete the original occurrence and insert the term at the front of the array.
If it does not exist, simply insert the new term at the front of the array. Loop over the array to display the results.
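The logic above can be sketched as follows (shown in Python rather than JS, purely to illustrate the algorithm; the function name and size limit are my own choices, not from the original):

```python
def record_search(history, term, max_items=6):
    """Maintain a most-recent-first list of search terms.

    If the term already exists, remove the old occurrence first;
    either way, insert the term at the front of the list.
    """
    if term in history:
        history.remove(term)   # delete the original occurrence
    history.insert(0, term)    # newest term goes to the front
    del history[max_items:]    # keep the displayed list bounded
    return history

history = []
for word in ["python", "django", "python", "scrapy"]:
    record_search(history, word)
print(history)  # ['scrapy', 'python', 'django']
```

In the browser the same steps would run against a JS array persisted in a cookie or localStorage, with the loop at the end rendering the list under the search box.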
Popular Searches
Implementation principle: whenever a user searches for a term, save the term to the database and increment its search count.
Cache the most-searched terms in Redis, and refresh the cache periodically.
Note: an open-source project worth studying that combines Django with Scrapy:
django-dynamic-scraper
https://github.com/holgerd77/django-dynamic-scraper
Addendum
By default, Elasticsearch (the search engine) can only page through the first 10,000 results; requesting results beyond that raises an error.
Setup method
Step one:
Open the index settings of the project's index library and close the index first; otherwise the change in step two cannot be committed.
Step two:
Open a compound query and fill in the following, remembering to choose PUT when submitting. credit_trace_data is the index name in the index library; max_result_window is set here to 2 billion. The value is an integer type, not infinitely large.
PUT http://127.0.0.1:9200/credit_trace_data/_settings?preserve_existing=true
{
  "max_result_window": "2000000000"
}
Finally, click "Submit request". If configured correctly, the right-hand window will display a success ({"acknowledged": true}) response.
If you want to query the current max_result_window, simply change PUT to GET.
Finally, remember to reopen the index!
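The same close / change setting / reopen sequence can also be scripted instead of done through the compound-query UI. A sketch using the `requests` library, assuming Elasticsearch on localhost and the article's credit_trace_data index (no test output shown, since it needs a live cluster):

```python
import requests  # assumes the requests library is installed

def set_max_result_window(es="http://127.0.0.1:9200",
                          index="credit_trace_data",
                          window="2000000000"):
    """Close the index, raise max_result_window, then reopen the index."""
    requests.post(f"{es}/{index}/_close")      # step one: close the index
    resp = requests.put(                       # step two: PUT the setting
        f"{es}/{index}/_settings",
        params={"preserve_existing": "true"},
        json={"max_result_window": window},
    )
    # switching PUT to GET on the same _settings URL queries the value
    requests.post(f"{es}/{index}/_open")       # finally: reopen the index
    return resp.json()                         # {"acknowledged": true} on success
```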