Bulk Batch Import
Bulk import lets you combine multiple operations, such as index, delete, update, and create, in a single request. It can also be used to import data from one index into another.
The syntax is roughly as follows:
action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n
Note that each operation consists of two lines (except delete, which has only one). Commands such as index and create consist of a metadata line followed by a data line. update is special: its data line may be a doc, an upsert, or a script. If you are not familiar with this, refer to the earlier article on update.
Note that every line ends with a \n newline. Therefore, do not put line breaks inside an individual JSON document; otherwise the _bulk command will report an error!
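For instance, a pretty-printed document such as the following would make _bulk fail, because its source spans several lines (the snippet is only an illustration of the mistake):

{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{
  "field1" : "value1"
}

The source must instead stay on a single line, as in the example below.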
A small example
For example, we now have a file named data.json:
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }{ "field1" : "value1" }
The first line defines information such as _index, _type, and _id. The second line defines the field information.
Then run the following command:
curl -XPOST localhost:9200/_bulk --data-binary @data.json
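Note that the file has to end with a trailing newline, and newer Elasticsearch versions (6.x and later) additionally require an explicit content type on the request, e.g.:

curl -XPOST localhost:9200/_bulk -H 'Content-Type: application/x-ndjson' --data-binary @data.json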
You can see the imported data.
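To verify the import, you can fetch the document back by id (assuming the same local node and the test index defined in data.json):

curl -XGET localhost:9200/test/type1/1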
For other operations such as index, delete, create, and update, refer to the following format:
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }{ "field1" : "value1" }{ "delete" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }{ "create" : { "_index" : "test", "_type" : "type1", "_id" : "3" } }{ "field1" : "value3" }{ "update" : {"_id" : "1", "_type" : "type1", "_index" : "index1"} }{ "doc" : {"field2" : "value2"} }
Set the default index and type in the URL
If the index or type is set in the URL path, you do not need to set it in the JSON. If it is also set in the JSON, the value in the JSON overrides the one in the path.
For example, in the example above the file defines the index as test and the type as type1, while the path defines defaults of index test333 and type type333. After the command is executed, the values in the file override those in the path. This gives you unified defaults in the URL plus per-operation overrides in the body.
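A sketch of such a command with the defaults in the path (test333 and type333 are just the placeholder names used above; operations in data.json that already specify _index or _type keep their own values):

curl -XPOST localhost:9200/test333/type333/_bulk --data-binary @data.json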
Others
Because bulk submits many commands at once, the data is sent to one node, which parses the metadata (index, type, and id) and distributes the individual operations to the other nodes that should execute them.
Since the results of all the commands are returned together after execution, the response body may be large. If chunked encoding were used to transmit it piece by piece, it would introduce extra latency, so the response is buffered instead. So although bulk provides batch processing, do not let a single request put too much pressure on the client!
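If the data file is very large, one common workaround (a sketch, not something from the original article) is to split it into smaller pieces and submit them one bulk request at a time. Since every operation above occupies exactly two lines, splitting on an even line count keeps the action and source lines together:

# split data.json into chunks of 2000 lines (1000 operations) each
split -l 2000 data.json data_part_
# submit each chunk as its own bulk request
for f in data_part_*; do
  curl -XPOST localhost:9200/_bulk --data-binary @"$f"
done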
The last point: the success or failure of one operation in a bulk request does not affect the other operations. There is also no parameter that reports the number of successes and failures of a bulk operation; you have to check the result of each item in the response.
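For reference, the response contains a top-level errors flag and one entry per operation, so the client has to inspect each item itself; if an item fails, its entry carries the error details (the values below are purely illustrative):

{
  "took" : 30,
  "errors" : false,
  "items" : [
    { "index" : { "_index" : "test", "_type" : "type1", "_id" : "1", "status" : 201 } },
    { "delete" : { "_index" : "test", "_type" : "type1", "_id" : "2", "status" : 200 } }
  ]
}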
Extension: Logstash actually transmits its data via bulk, but with a buffer in between. If the server responds slowly, it may retry the request; other failures are simply discarded....