41. Python Distributed Crawler: Building a Search Engine with Scrapy - Elasticsearch (Search Engine) Basic Index and Document CRUD Operations (Create, Read, Update, Delete)

Source: Internet
Author: User
Tags: kibana

Elasticsearch (search engine) basic index and document CRUD operations

That is, basic index and document operations: create, read, update, and delete.

Note: All of the following operations are performed in the Kibana console.

Elasticsearch is operated through HTTP methods:

GET requests the specified resource and returns the response body.

POST submits data to the specified resource for processing; the data is contained in the request body, and a POST request may result in the creation of new resources and/or the modification of existing resources.

PUT replaces the content of the specified resource with the data sent to the server.

DELETE asks the server to delete the specified resource.
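The mapping of the four HTTP methods onto Elasticsearch operations can be sketched in Python. This is an illustration only: the requests are built but never sent, and the host `localhost:9200` is simply the conventional Elasticsearch default, assumed here.

```python
# Sketch: how the four HTTP methods map onto Elasticsearch REST operations.
# Requests are only constructed, not sent; the host is an assumed default.
import json
import urllib.request

BASE = "http://localhost:9200"

def build(method, path, body=None):
    """Build (but do not send) an HTTP request against the ES REST API."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(
        f"{BASE}/{path}", data=data, method=method,
        headers={"Content-Type": "application/json"},
    )

# PUT: create/replace a resource at a known address (an index or a document)
create_index = build("PUT", "jobbole",
                     {"settings": {"index": {"number_of_shards": 5}}})

# GET: read a resource
read_doc = build("GET", "jobbole/job/1")

# POST: submit data for processing (e.g., let ES auto-generate the document ID)
save_doc = build("POST", "jobbole/job",
                 {"title": "Python Distributed Crawler Development"})

# DELETE: remove a resource
delete_doc = build("DELETE", "jobbole/job/1")

print(create_index.get_method(), create_index.full_url)  # PUT http://localhost:9200/jobbole
```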

1. Index initialization, equivalent to creating a database

Create with Kibana

Code description

# Initialize index (that is, create DATABASE) # put index name "" Put Jobbole                             #设置索引名称 {  "settings": {                         #设置    "index": {                          #索引      " Number_of_shards ": 5,             #设置分片数      " Number_of_replicas ": 1            #设置副本数    }}  }" ""

Code

# Initialize the index (i.e., create a database)
# PUT index_name
PUT jobbole
{
  "settings": {
    "index": {
      "number_of_shards": 5,
      "number_of_replicas": 1
    }
  }
}

We can also create an index through a visual tool.

Note: once an index is created, the number of shards cannot be modified, but the number of replicas can.

2. Get the index settings

GET index_name/_settings gets the settings of the specified index.

# Get the settings of the specified index
GET jobbole/_settings

GET _all/_settings gets the settings of all indexes.

# Get the settings of all indexes
GET _all/_settings

GET index_name,index_name/_settings gets the settings of multiple indexes.

# Get the settings of multiple indexes
GET .kibana,jobbole/_settings

3. Update the index settings

PUT index_name/_settings updates the settings of the specified index.

# Update the settings of the specified index
PUT jobbole/_settings
{
  "number_of_replicas": 2
}

# Get the index settings to verify the change
GET jobbole/_settings

4. Get index information

GET _all gets the information of all indexes.

# Get the information of all indexes
GET _all

GET index_name gets the information of the specified index.

# Get the information of the specified index
GET jobbole

5. Save a document (equivalent to writing data to a database)

PUT index_name/type (equivalent to a table name)/id {field: value} saves a document with a custom ID (equivalent to writing a row to a database).

# Save a document with a custom ID (equivalent to writing data to a database)
PUT jobbole/job/1
{
  "title": "Python Distributed Crawler Development",
  "salary_min": 15000,
  "city": "Beijing",
  "company": {
    "name": "Baidu",
    "company_addr": "Beijing Software Park"
  },
  "publish_date": "2017-4-16",
  "comments": 15
}

Visual view

POST index_name/type {field: value} saves a document with an auto-generated ID (equivalent to writing data to a database).

Note: auto-generating the ID requires the POST method.

# Save a document with an auto-generated ID (equivalent to writing data to a database)
POST jobbole/job
{
  "title": "HTML Development",
  "salary_min": 15000,
  "city": "Shanghai",
  "company": {
    "name": "Microsoft",
    "company_addr": "Shanghai Software Park"
  },
  "publish_date": "2017-4-16",
  "comments": 15
}

6. Get a document (equivalent to querying data)

GET index_name/type/id gets all the information of the specified document.

# Get a document (equivalent to querying data)
GET jobbole/job/1

GET index_name/type/id?_source gets all fields of the specified document.

GET index_name/type/id?_source=field,field,field gets multiple specified fields of the document.

GET index_name/type/id?_source=field gets a single specified field of the document.

# Get all fields of the specified document
GET jobbole/job/1?_source

# Get multiple specified fields of the document
GET jobbole/job/1?_source=title,city,company

# Get a single specified field of the document
GET jobbole/job/1?_source=title
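Outside Kibana, the `_source` filter is just a query-string parameter on the document URL. A small sketch of building such URLs in Python (the host `localhost:9200` and the helper name `doc_url` are assumptions for illustration):

```python
# Sketch: building _source-filtered document URLs for the ES REST API.
# The host is an assumed default; nothing is sent over the network.
from urllib.parse import urlencode

BASE = "http://localhost:9200"

def doc_url(index, doc_type, doc_id, source_fields=None):
    """Return the GET URL for a document, optionally filtering _source fields."""
    url = f"{BASE}/{index}/{doc_type}/{doc_id}"
    if source_fields:
        # safe=',' keeps the comma-separated field list readable in the URL
        url += "?" + urlencode({"_source": ",".join(source_fields)}, safe=",")
    return url

print(doc_url("jobbole", "job", 1))
# http://localhost:9200/jobbole/job/1
print(doc_url("jobbole", "job", 1, ["title", "city", "company"]))
# http://localhost:9200/jobbole/job/1?_source=title,city,company
```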

7. Modify a document (equivalent to updating data)

Modify a document by overwriting it (using the same syntax as saving a document); all of the original data is replaced.

# Modify a document by overwriting it
PUT jobbole/job/1
{
  "title": "Python Distributed Crawler Development",
  "salary_min": 15000,
  "city": "Beijing",
  "company": {
    "name": "Baidu",
    "company_addr": "Beijing Software Park"
  },
  "publish_date": "2017-4-16",
  "comments": 20
}

Modify a document incrementally (fields not mentioned keep their original values). Recommended.

POST index_name/type/id/_update
{
  "doc": {
    "field": value,
    "field": value
  }
}

# Modify a document incrementally (unmentioned fields keep their original values)
POST jobbole/job/1/_update
{
  "doc": {
    "comments": 20,
    "city": "Tianjin"
  }
}
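The difference between the two update styles can be illustrated with plain Python dictionaries. This simulates the semantics only, not actual Elasticsearch client code: PUT replaces the stored document with the request body, while `_update` merges the partial `doc` into the existing document.

```python
# Illustration only: simulating ES overwrite vs. incremental-update semantics.
original = {
    "title": "Python Distributed Crawler Development",
    "salary_min": 15000,
    "city": "Beijing",
    "comments": 15,
}

# PUT index/type/id -> the request body REPLACES the whole document
put_body = {"title": "Python Distributed Crawler Development", "comments": 20}
after_put = dict(put_body)                 # fields not in the body are gone

# POST index/type/id/_update with {"doc": {...}} -> partial MERGE
update_doc = {"comments": 20, "city": "Tianjin"}
after_update = {**original, **update_doc}  # unmentioned fields survive

print("salary_min" in after_put)       # False: dropped by the overwrite
print(after_update["salary_min"])      # 15000: preserved by _update
```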

8. Delete a document or an index

DELETE index_name/type/id deletes a specified document in the index.

DELETE index_name deletes the specified index.

# Delete a specified document in the index
DELETE jobbole/job/1

# Delete the specified index
DELETE jobbole

