LOGSTASH-INPUT-JDBC Configuration Instructions

Source: Internet
Author: User
Tags: sql server driver, kibana, logstash

Logstash is built from three components: input, filter, and output. The workflow of the three components can be understood as: input collects the data, filter processes it, and output ships it. The questions of where to collect from and how, what processing to apply and how, and where to send the results are the main points we are going to discuss.
Let's talk about the input component's function and its basic plugins today. As introduced before, the input component is Logstash's eyes and nose, responsible for collecting data. That leaves us two questions to think about. The first question to be clear on is where the source data lives, which includes what type of data it is and what business it belongs to. The second question is how to get that data. Once these two problems are understood, the input component of Logstash becomes clear.
For the first question: source data comes in many types. Your source data can be a log, a report, a database, and so on. We do not need to care what the data looks like; as long as you know what kind of data it is, you can classify it and give it a type, which is important: the type is very helpful for processing the data later on. So the first question, what the source data is, is now clear. Then we proceed to the second question.
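To make the role of type concrete, here is a minimal sketch (not from the original article; the path, type name, and Elasticsearch address are hypothetical) of how a type assigned in the input can be used to route events later:

input {
  file {
    path => ["/var/log/app/*.log"]   # hypothetical log location
    type => "app_log"                # classify the source data at collection time
  }
}
output {
  # the type set in the input is available downstream for routing
  if [type] == "app_log" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "app_log-%{+YYYY.MM.dd}"
    }
  }
}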
The core of the second question is how to obtain these different types of raw data. This is the heart of the input component, so let's classify the sources and look at them one by one.
First of all, we need to identify the kind of data source in order to choose the way to get the data.
Let's list a few:
1. File type: as the name implies, the data source is a file, and we can use the input component's file plugin to obtain the data. The file{} plugin has a lot of configuration parameters, which we can walk through. The specifics are shown in the following code:

input {
  file {
    # the path attribute accepts an array, indicating the file locations to be read
    path => ["pathA", "pathB"]
    # how often to check the path for newly created files; the default is every 15 seconds
    discover_interval => 15
    # which files to exclude, i.e. files that should not be read
    exclude => ["fileName1", "fileName2"]
    # how long a watched file can go without updates before it is closed and no longer watched; the default is one hour
    close_older => 3600
    # on each scan of the file list, ignore any file whose last modification time exceeds this value; the default is one day
    ignore_older => 86400
    # how often Logstash checks the status of the watched files (whether they have been updated); the default is every 1 second
    stat_interval => 1
    # sincedb is an index recording the position of the previous read
    sincedb_path => "$HOME/.sincedb"
    # where Logstash starts reading file data; the default is the end position, but it can also be set to "beginning" to read from the start
    start_position => "beginning"
    # Note: if you need to read the whole file from the start on every run, setting start_position => "beginning" alone is no use; instead, set sincedb_path to "/dev/null"
  }
}
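As a quick way to try the file plugin, here is a minimal, self-contained pipeline (my own sketch; the log path is hypothetical) that re-reads a file from the beginning on every run, following the start_position/sincedb_path note above, and prints each event to the console:

input {
  file {
    path => ["/var/log/nginx/access.log"]  # hypothetical file to read
    start_position => "beginning"          # start at the top of the file...
    sincedb_path => "/dev/null"            # ...and forget the read position between runs
  }
}
output {
  stdout { codec => rubydebug }            # print each event for inspection
}

Save this as, say, file_test.conf and run it with bin/logstash -f file_test.conf to watch events appear as lines are read.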

2. Database type: a data source of the database type means we need to deal with a database, right? Yes, that's a must; otherwise, how would we get the data? So how does the input component get data from a database class? Making its grand debut: the input component's JDBC plugin, jdbc{}. Like file{}, jdbc{} has a lot of attributes, which we explain in the code below:

input {
  jdbc {
    # the JDBC SQL Server driver; each database has a corresponding driver that you need to download
    jdbc_driver_library => "/etc/logstash/driver.d/sqljdbc_2.0/enu/sqljdbc4.jar"
    # the JDBC class; different databases have different class configurations
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    # the database connection IP and port, plus the database name
    jdbc_connection_string => "jdbc:sqlserver://200.200.0.18:1433;databasename=test_db"
    # the database user name
    jdbc_user =>
    # the database password
    jdbc_password =>
    # none of the above really matters; if you don't understand it, your boss should probably consider a replacement. What matters is what comes next.
    # how often the timer runs the SQL; the default is once a minute
    # schedule uses cron-style fields: minute, hour, day, month, weekday
    # e.g. schedule => "* 22 * * *" means the schedule runs during hour 22 every day
    schedule => "* * * * *"
    # whether to clear the record kept in last_run_metadata_path; if true, every run effectively queries all the database records from the beginning
    clean_run => false
    # whether to track the value of some column; if record_last_run is true, you can customize the field name of the table we need,
    # in which case this parameter must be true; otherwise, the default tracks the timestamp value
    use_column_value => true
    # if use_column_value is true, this parameter must be configured; it is the name of a database field, and that field must be incrementing, such as the record's creation time
    tracking_column => "create_time"
    # whether to record the last run; if true, the value of the tracking_column field from the last run is saved to the file specified by last_run_metadata_path
    record_last_run => true
    # we then only need WHERE my_id > :sql_last_value in the SQL statement, where :sql_last_value takes its value from this file
    last_run_metadata_path => "/etc/logstash/run_metadata.d/my_info"
    # whether to convert field names to lowercase.
    # a small hint: if you have already processed the data once and have matching searches in Kibana, leave this true,
    # because the default is true and Kibana is case-sensitive (strictly speaking, it is ES that distinguishes case)
    lowercase_column_names => false
    # the location of your SQL; of course, your SQL can also be written directly here:
    # statement => "SELECT * FROM tablename t WHERE t.create_time > :sql_last_value"
    statement_filepath => "/etc/logstash/statement_file.d/my_info.sql"
    # the data type, marking which camp you belong to, so ES knows which hill to assign you to
    type => "my_info"
    # Note: the external SQL file is a text file; it is important to note that one jdbc{} plugin can only handle one SQL statement.
    # If you have multiple SQL statements to handle, you can only set up another jdbc{} plugin.
  }
}
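Putting the options above together, here is a minimal end-to-end sketch (my own assembly from the article's example values; the user, password, and Elasticsearch address are placeholders) that polls SQL Server every minute, fetches only rows newer than the last recorded create_time, and writes them to Elasticsearch. One addition: when the tracking column is a datetime rather than a number, the plugin's tracking_column_type option should be set to "timestamp".

input {
  jdbc {
    jdbc_driver_library => "/etc/logstash/driver.d/sqljdbc_2.0/enu/sqljdbc4.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://200.200.0.18:1433;databasename=test_db"
    jdbc_user => "your_user"             # placeholder
    jdbc_password => "your_password"     # placeholder
    schedule => "* * * * *"              # poll once a minute
    use_column_value => true
    tracking_column => "create_time"     # incrementing column from the article's example
    tracking_column_type => "timestamp"  # because create_time is a datetime, not a number
    record_last_run => true
    last_run_metadata_path => "/etc/logstash/run_metadata.d/my_info"
    # only rows newer than the value remembered from the previous run are fetched
    statement => "SELECT * FROM my_info t WHERE t.create_time > :sql_last_value"
    type => "my_info"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]          # placeholder Elasticsearch address
    index => "my_info"
  }
}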

Okay, no more nonsense; on to the third case:

input {
  beats {
    # the port to accept data on
    port => 5044
    # the data type
    type => "logs"
  }
  # this plugin needs to work together with Filebeat, so there is not much to say about it here; it will be introduced together with Filebeat later.
}
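For completeness, a minimal sketch of how this input typically pairs with an output (my own example; the Elasticsearch address is a placeholder). Filebeat ships events to port 5044, and Logstash forwards them on:

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]                     # placeholder address
    # Filebeat sets [@metadata][beat], giving per-beat daily indices (a common convention)
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}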

Now we basically know what the input component needs to do and how to do it. Of course, it also has many other plugins for collecting data, such as tcp, and codecs can be applied to the data as it comes in; interested friends can look these up for themselves. I have only covered what I use myself; in general, the three plugins described here are enough.
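For example, a tcp input with a codec attached might look like this (my own sketch, not covered in the article; the port is arbitrary), decoding the incoming stream at collection time:

input {
  tcp {
    port => 9500          # arbitrary listening port
    codec => json_lines   # decode newline-delimited JSON from the stream
  }
}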
That wraps up today's look at the Logstash input component of the ELK stack. Logstash's other components, filter and output, will be described later.

Reprinted from: https://yq.aliyun.com/articles/152043

For the output component of Logstash, see https://yq.aliyun.com/articles/197785?spm=5176.8091938.0.0.pg7WLz
