The previous blog post showed how to use logstash-input-jdbc to synchronize MySQL data to Elasticsearch (http://www.cnblogs.com/jstarseven/p/7704893.html), but there was a problem: I do not want the mapping template that Logstash generates automatically for the MySQL data, because my data needs IK word segmentation, synonym analysis, and so on.
This time we need Logstash's template feature. If you have not yet installed and used Logstash and logstash-input-jdbc, I recommend reading that article first.
OK, first recall the configuration file mysql.conf that was used for the plain logstash-input-jdbc import into Elasticsearch (it will need to be modified when we configure the template):
input {
    stdin {}
    jdbc {
        # database
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
        # username and password
        jdbc_user => "root"
        jdbc_password => "123456"
        # location of the JDBC driver jar
        jdbc_driver_library => "/usr/local/logstash-5.5.2/bin/config-mysql/mysql-connector-java-5.1.31.jar"
        # mysql driver class
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        #statement_filepath => "config-mysql/test02.sql"
        statement => "SELECT * FROM my_into_es"
        schedule => "* * * * * *"
        # index type
        type => "my_into_es_type"
    }
}
filter {
    json {
        source => "message"
        remove_field => ["message"]
    }
}
output {
    elasticsearch {
        hosts => "127.0.0.1:9200"
        # index name
        index => "my_into_es_index"
        # the id field from the database becomes the _id of the corresponding index document
        document_id => "%{id}"
    }
    stdout {
        codec => json_lines
    }
}
Now let's see how templates work.
The first approach is what I personally call a dynamic template: dynamic_templates can be used to apply a mapping to a whole class of fields.
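As a minimal sketch of the mechanism (the field pattern long_text_* below is a hypothetical illustration, not part of the template we build in a moment), each dynamic_templates entry pairs a match condition with the mapping to apply to every field it matches:

"dynamic_templates": [
    {
        "long_texts": {
            "match": "long_text_*",
            "match_mapping_type": "string",
            "mapping": {
                "type": "text",
                "analyzer": "ik_max_word"
            }
        }
    }
]

Here every string field whose name starts with long_text_ would be mapped as text and analyzed with ik_max_word.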
1. Switch to the Logstash directory: cd /usr/local/logstash-5.5.2
2. Create a template directory: mkdir template
3. cd template
4. Create a new file: logstash-ik.json
5. Edit the contents of the file:
{" template": "*", "Version": 50001, "settings": {"Index.refresh_interval": "5s"}, "mappings": {"_default_": { "_all": {"Enabled": True, "Norms": false}, "Dynamic_templ Ates ": [{" Message_field ": {" Path_match ":" Message ", "Match_mapping_type": "string", "mapping": {"type": "Text" , "Norms": false}}, {"String_fields": {"Match": "*", "Match_mapping_type": "String", "mapping": {"type": "Text", "norm S ": false," analyzer ":" Ik_max_word "," fields ": { "Keyword":{"type": "Keyword"}} }}], "Properties": {"@times Tamp ": {" type ":" Date "," Include_in_all ": false}," @ Version ": {" type ":" Keyword "," Include_in_all ": false}} } }}~
Note: Pay attention to "template": "*" in the file above. This pattern overrides the default mapping template that Logstash itself applies when no template is specified. If you do not want to override the default, change the pattern on that line to something narrower, for example the name of your index.
6. cd /usr/local/logstash-5.5.2/bin/config-mysql
7. Create a new file: mysql-ik-define.conf
File contents:
input {
    stdin {}
    jdbc {
        # database
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
        # username and password
        jdbc_user => "root"
        jdbc_password => "123456"
        # location of the JDBC driver jar
        jdbc_driver_library => "/usr/local/logstash-5.5.2/bin/config-mysql/mysql-connector-java-5.1.31.jar"
        # mysql driver class
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        #statement_filepath => "config-mysql/test02.sql"
        statement => "SELECT * FROM my_into_es_define"
        schedule => "* * * * * *"
        # index type
        type => "into_es_type_define_ik"
    }
}
filter {
    json {
        source => "message"
        remove_field => ["message"]
    }
}
output {
    elasticsearch {
        hosts => "127.0.0.1:9200"
        # index name
        index => "into_es_index_define_ik"
        # the id field from the database becomes the _id of the corresponding index document
        document_id => "%{id}"
        template_overwrite => true
        template => "/usr/local/logstash-5.5.2/template/logstash-ik.json"
    }
    stdout {
        codec => json_lines
    }
}
Note: The template and template_overwrite settings in the output section above are the template configuration; the rest is basically unchanged from the earlier config.
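For reference, the output section boils down to the following settings of the elasticsearch output plugin; template_name and manage_template are further real options of this plugin, and the value logstash-ik below is just a suggestion matching this example:

output {
    elasticsearch {
        hosts => "127.0.0.1:9200"
        index => "into_es_index_define_ik"
        # path to the custom mapping template
        template => "/usr/local/logstash-5.5.2/template/logstash-ik.json"
        # overwrite an already-registered template of the same name at startup
        template_overwrite => true
        # optional: the name the template is registered under in Elasticsearch
        template_name => "logstash-ik"
    }
}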
8. cd /usr/local/logstash-5.5.2/bin
9. Execute the command: ./logstash -f config-mysql/mysql-ik-define.conf
Observe the log:
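Once Logstash is running, you can also confirm that the template was pushed to Elasticsearch. Since the config above does not set template_name, it should be registered under the plugin's default name, which to my knowledge is logstash; adjust the URL if your setup differs:

curl -XGET 'http://127.0.0.1:9200/_template/logstash?pretty'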
10. Use the elasticsearch-head plugin to look at the newly created mapping:
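If you prefer the command line to the head plugin, the same mapping can be inspected with curl, using the index name from the config above:

curl -XGET 'http://127.0.0.1:9200/into_es_index_define_ik/_mapping?pretty'

The string fields coming from MySQL should now show "type": "text" with "analyzer": "ik_max_word" and a keyword sub-field.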
As expected, the data was imported successfully as well:
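To double-check that ik_max_word is really in effect on the new index, you can also call the _analyze API directly (the sample text is arbitrary):

curl -XPOST 'http://127.0.0.1:9200/into_es_index_define_ik/_analyze?pretty' -H 'Content-Type: application/json' -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'

The response should list the word-level tokens produced by the IK segmenter.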
Summary: personally I find this configuration approach more flexible, since it can apply a mapping to whole classes of fields at once.
The second approach is what I personally call a static template (the procedure is basically the same as above); only the template file differs: the mapping for every field is written out explicitly ("hard-coded"):
Todo
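For now, here is only a minimal sketch of what such a static template could look like; the index pattern and the fields id, title and content are hypothetical placeholders, not taken from any real table:

{
    "template": "into_es_index_define_*",
    "version": 50001,
    "settings": {
        "index.refresh_interval": "5s"
    },
    "mappings": {
        "_default_": {
            "properties": {
                "id": { "type": "long" },
                "title": {
                    "type": "text",
                    "analyzer": "ik_max_word",
                    "fields": { "keyword": { "type": "keyword" } }
                },
                "content": {
                    "type": "text",
                    "analyzer": "ik_max_word"
                },
                "@timestamp": { "type": "date" },
                "@version": { "type": "keyword" }
            }
        }
    }
}

Because every field is spelled out, nothing is left to dynamic matching; whatever the JDBC rows contain, these fields keep exactly this mapping.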
Logstash: use a template to set up the mapping in advance when syncing MySQL data to Elasticsearch 5.5.2.