Recently I have been wrestling with data migration, driven by the need to move multiple business data sources into a data analysis platform. The process spans machines and databases, and along the way the contents of the business tables also need to be converted, merged, and cleaned.
After evaluating several options, we finally settled on Kettle as the data extraction and processing tool.
What I want to talk about here, though, are the problems we ran into with Kettle.
1. We use Kettle 5.1. After building a Kettle cluster, a common problem is a Virtual File System exception: a job or transformation fails with "is not a file.". The explanation found online is that the XML of a job uploaded to a remote server is missing its XML declaration header, so XML parsing fails. Manually adding the header and re-uploading each time is too much trouble, and fortunately the problem does not occur often, so for now we let it go. If maintaining the system this way ever becomes unbearable, I will fix the bug myself (it has sat unfixed in the official bug list for many years, which is really ...);
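If one ever wanted to script that workaround instead of editing files by hand, a minimal sketch in Java (the class name is mine, and it assumes the .kjb/.ktr files are UTF-8) could look like this:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FixKettleXmlHeader {
    private static final String XML_DECL = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";

    public static void main(String[] args) throws IOException {
        Path file = Paths.get(args[0]); // path to the affected .kjb/.ktr file
        String content = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        // Prepend the XML declaration only when it is actually missing
        if (!content.startsWith("<?xml")) {
            Files.write(file, (XML_DECL + content).getBytes(StandardCharsets.UTF_8));
        }
    }
}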
2. By default, Kettle jobs and transformations remain listed on the server whether or not they have finished. For jobs that run continuously on a schedule, the list and its logs keep growing, and after a while memory fills up. This is especially uncomfortable, because the retained logs eventually lead to a JVM OOM. So we configured some parameters:
<slave_config>
  <masters>
    <slaveserver>
      <name>10.172.7.12</name>
      <hostname>10.172.7.12</hostname>
      <port>8181</port>
      <username>admin</username>
      <password>admin</password>
      <master>Y</master>
    </slaveserver>
  </masters>
  <report_to_masters>Y</report_to_masters>
  <slaveserver>
    <name>10.172.7.13</name>
    <hostname>10.172.7.13</hostname>
    <port>8181</port>
    <username>cluster</username>
    <password>cluster</password>
    <master>N</master>
  </slaveserver>
</slave_config>
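That config describes the cluster topology. For the log build-up itself, Carte's slave_config also accepts retention options; the option names below are the ones documented for Carte, though the values are only illustrative and I have not verified how well they behave on 5.1:

<slave_config>
  <!-- keep at most this many log lines in memory per job/transformation -->
  <max_log_lines>10000</max_log_lines>
  <!-- drop log lines older than this many minutes -->
  <max_log_timeout_minutes>1440</max_log_timeout_minutes>
  <!-- remove finished jobs/transformations from the server after this many minutes -->
  <object_timeout_minutes>240</object_timeout_minutes>
</slave_config>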
Then we found that ports are not released after the cluster runs a job, so once again we could only accept reality and restart the cluster every so often.
3. Some people say the scheduling built into the Start job entry is unreliable and is not recommended by the official team. Does that mean data maintenance has to fall back on code?
4. The recommended approach is to call the Kettle API from Java to schedule and run jobs and transformations. But I hope this data processing pipeline will not depend on code, so that when the business expands we can adapt quickly without writing and maintaining integration code. Whether that hope holds up in practice, and for how long, remains to be seen. For reference, a sketch of the API usage follows.
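This is a minimal sketch of running a job through the PDI 5.x Java API; the file path is a placeholder, and it illustrates the API rather than how our system actually works:

import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobMeta;

public class RunKettleJob {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();                                 // initialize the Kettle runtime
        JobMeta jobMeta = new JobMeta("/path/to/etl.kjb", null);  // load the job XML, no repository
        Job job = new Job(null, jobMeta);
        job.start();                                              // the job runs in its own thread
        job.waitUntilFinished();
        if (job.getErrors() > 0) {
            System.err.println("Job finished with errors.");
        }
    }
}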