Original link address: http://www.transwarp.cn/news/detail?id=173
ETL is an important link in building a data warehouse. Through this process the user extracts the required data and imports it into the data warehouse according to the defined model. Because ETL is a necessary step in building a data warehouse, its efficiency affects the construction of the whole warehouse, so tuning it effectively is of high importance.
This document describes the ETL testing method in terms of the ETL testing process and general project conditions.
ETL test flowchart
Test phases
1. Requirement analysis
Become familiar with the business processes and business rules, analyze the mapping relationship between the source tables and the target tables as required, and draw the business data flow diagram.
1. Test analysis
The main index of this article series is as follows:
1. ETL sharp weapon Kettle Practical Application Analysis Series one: "Kettle Usage Introduction"
2. ETL sharp weapon Kettle Practical Application Analysis Series two: "Application Scenarios and Demo Download"
3. ETL sharp weapon Kettle Practical Application Analysis Series three
Noise data can be caused by misused abbreviations and idioms, data entry errors, duplicate records, missing values, and spelling variations. Even in a well-designed and well-planned database system, a large amount of noise data renders the system meaningless, because "garbage in, garbage out": such a system cannot provide any support for decision analysis. To remove noise data, the data in the database system must be cleansed. At present there is a great deal of research on data cleansing.
In a data warehouse project, ETL is undoubtedly the most tedious, time-consuming, and unstable part. If the data source and target are both Oracle and certain conditions are met, you can use Oracle transportable tablespaces to improve ETL efficiency. To use transportable tablespaces, the following conditions must be met: the source and target databases must both be version 8i or later; for versions earlier than 10g, the source and target databases must run on the same platform.
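For orientation, the transport itself is essentially a metadata export plus a datafile copy. Below is a minimal sketch using the 10g Data Pump flavor; the tablespace name etl_data, the directory object dp_dir, the credentials, and the file paths are all placeholders, not taken from the original article:

# 1. On the source: make the tablespace read-only and export its metadata.
sqlplus / as sysdba <<'EOF'
ALTER TABLESPACE etl_data READ ONLY;
EOF
expdp system DIRECTORY=dp_dir DUMPFILE=etl_ts.dmp TRANSPORT_TABLESPACES=etl_data

# 2. Copy the datafile(s) and the dump file to the target host (scp, FTP, ...).

# 3. On the target: plug the tablespace in and make it read-write again.
impdp system DIRECTORY=dp_dir DUMPFILE=etl_ts.dmp TRANSPORT_DATAFILES='/u01/oradata/etl_data01.dbf'
sqlplus / as sysdba <<'EOF'
ALTER TABLESPACE etl_data READ WRITE;
EOF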
Kettle is an open-source ETL tool written in Java. It runs on Windows, Linux, and Unix; it is green software that requires no installation, and its data extraction is efficient and stable.
Business model: a relational database holds a very large table that has been sharded across databases by an odd/even scheme. Each database contains 100 identical tables, each table stores 10 million data records, and writing switches to the next table in turn. This data needs to be synchronized.
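A pull across that many identical shard tables usually ends up as a loop over generated table names. Here is a minimal sketch under assumed names; the shard_db database, the big_table_00..big_table_99 naming pattern, and the connection details are all hypothetical:

for i in $(seq -w 0 99)
do
    # dump each shard table in turn; the naming pattern is assumed
    mysqldump -h src_host -u etl_user -p"${ETL_PASS}" shard_db "big_table_${i}" >> all_shards.sql
done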
the "flat file Connection Manager Editor" dialog box, type sample flat file source data.
Click Browse ".
In the open dialog box, browse and find the sample data folder, and then open the samplecurrencydata.txt file. By default, the sample data of the tutorial is installed in the c: \ Program Files \ Microsoft SQL Server \ 90 \ samples \ integration services \ tutorial \ creating a simple ETL package
In the past, we wrapped the underlying C API of each database to implement data import and export between several heterogeneous databases. However, the code was complex, and it was inconvenient to open-source.
This afternoon I wrote a simple data extraction program in Java to port a MySQL database to Sybase ASE. I open-sourced it at http://code.google.com/p/jmyetl/. I originally wanted to name it myetl, but someone had already registered that name on sf.net, so I added a J in front.
ETL scheduling development (4) -- loading files through an FTP subroutine
The most basic function of an ETL tool is to load files from a remote server. The following small script fetches files from the remote server in binary mode:
#!/usr/bin/bash
# created by lubinsu, 2014
source ~/.bash_profile

ftpip=$1
ftpusr=$2
ftppwd=$3
srcdir=$4
descdir=$5
filename=$6

# get files in binary mode; the session body below is an assumed
# completion -- the original listing broke off after "ftp -i -n"
ftp -i -n <<EOF
open ${ftpip}
user ${ftpusr} ${ftppwd}
binary
cd ${srcdir}
lcd ${descdir}
mget ${filename}
bye
EOF
The input parameters, in order, are: FTP server IP, FTP user, FTP password, source directory, destination directory, and file name.
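A hypothetical invocation, with the host, credentials, directories, and file name all made up for illustration (the script is assumed to be saved as get_ftp_file.sh):

sh get_ftp_file.sh 192.168.1.10 etluser secret /remote/out /data/in daily_orders.txt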
In this section, we discuss how ETL (extract, transform, load) was actually done in my game transaction data analysis project.
First, a word about the source system: our main trading site's servers are not inside the company, so data cannot be extracted directly from the source system. In fact, we already had a simple data analysis system, but it was built by my predecessors, and not on the SQL Server 2005 BI platform.
Using the ETL tool Kettle to extract data from one database into another database (a command-line sketch for running the finished transformation follows these steps):
1. Open the ETL folder and double-click Spoon.bat to start Kettle.
2. At the repository selection dialog, click Cancel if you have no repository.
3. Close the welcome screen.
4. Create a new transformation.
5. Configure the required database connections.
6. Add a "Table input" step to fetch the table whose data is to be extracted.
7. Select the database and the table.
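Once the transformation has been saved as a .ktr file, it can also be run outside Spoon with Kettle's command-line runner Pan. A minimal sketch, assuming Kettle is installed at /opt/kettle and the transformation was saved as /opt/etl/extract.ktr (both paths are placeholders):

cd /opt/kettle                                      # Kettle installation directory (assumed)
./pan.sh -file=/opt/etl/extract.ktr -level=Basic    # run the transformation with basic logging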
Tags: ETL, Kettle, JDBC, Oracle RAC
1. Problem symptoms: Kettle had previously been connected to an Oracle database for table extraction. The table input step of the script was configured as follows. When the job was executed (the script was uploaded to a Linux machine and run with the sh command), the table input step reported an error, yet on the same machine a login with the sqlplus command succeeded.
2. Resolution process: after the problem occurred, we first contacted the vendor of the source data system.
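The truncated text does not say what the root cause was. One common culprit in exactly this situation, offered here only as an assumption, is that sqlplus resolves the alias through tnsnames.ora while Kettle's JDBC thin driver needs the host, port, and RAC service name spelled out in the URL. A quick hedged check (alias and host are placeholders):

# Verify the listener is reachable from the Kettle host (requires the Oracle client).
tnsping RACDB    # RACDB is a placeholder alias

# The JDBC thin URL Kettle would need takes the service-name form, e.g.:
#   jdbc:oracle:thin:@//rac-scan-host:1521/RACDB
# (host and service name are placeholders, not from the original article)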
ETL is responsible for extracting data from scattered, heterogeneous sources such as relational databases and flat data files into a temporary middle layer, then cleansing, transforming, and integrating it, and finally loading it into a data warehouse or data mart, where it serves as the basis for online analytical processing and data mining. The term ETL often appears in the context of data warehouses, but its object is not confined to the data warehouse.
During the three-day May Day holiday, some ETL logic problems occurred, and the daily incremental data to be loaded into the DW was not loaded as designed. We therefore need to check the incremental data generated by ETL, to avoid having to handle the problem passively after some day's incremental data has already been lost.
Requirement: if there is a problem with the incremental data of any given day, it should be detected proactively.
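A minimal sketch of such a check, with the staging table stg_increment, the load_date column, hosts, and addresses all assumed for illustration: count yesterday's rows in the increment and alert when the count is zero.

yesterday=$(date -d "yesterday" +%F)
cnt=$(mysql -N -h dw_host -u etl_user -p"${ETL_PASS}" dw_stage \
      -e "SELECT COUNT(*) FROM stg_increment WHERE load_date = '${yesterday}'")
if [ "${cnt}" -eq 0 ]; then
    # no rows made it into the increment: raise an alert instead of waiting
    echo "WARN: no incremental rows loaded for ${yesterday}" \
        | mail -s "ETL increment check" dba@example.com
fi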
What does an ETL data conversion system bring to customers? With the development of society and computer technology, people began to reprocess the data in existing databases to form a comprehensive, analysis-oriented environment in support of scientific decision-making. As a result, the ideas, technologies, and products of data warehousing gradually took shape. The purpose of building a data warehouse is to establish a systematic data analysis environment.
Tags: SQL Server collations fall roughly into two groups: Windows collations and SQL Server collations. When SQL Server is installed, the collation defaults to SQL_Latin1_General_CP1_CI_AI if none is set explicitly. When a database is created, if you do not set a collation it uses the default; you can also set the collation for individual columns in a table. Here are just a few things to keep in mind from problems I encountered recently. First, SQL_Latin1_General_CP1_CI_AI corresponds to code page 1252, while Chinese collations use a different code page.
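For reference, the server default and a per-column override can both be inspected or set from the command line. A small sketch; the server name, database, table, and the choice of Chinese_PRC_CI_AS are illustrative assumptions:

# Show the server-level default collation.
sqlcmd -S myserver -Q "SELECT SERVERPROPERTY('Collation') AS server_collation;"

# Give one column its own collation, overriding the database default.
sqlcmd -S myserver -d mydb -Q "CREATE TABLE t_demo (name nvarchar(50) COLLATE Chinese_PRC_CI_AS);"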
1. The definition of ETL
ETL is the initials of the three words "Extract", "Transform", "Load", that is, extraction, conversion, and loading, though day-to-day we often just call it data extraction. ETL is the core and soul of BI/DW (Business Intelligence / Data Warehouse): it integrates and improves the value of data according to unified rules, and is responsible for completing the process of converting data from the data source into the target data warehouse.
Required to handle the second type of change (Type 2 slowly changing dimensions).
Mini dimension (minidimension):
Extract a few commonly used fields from a large dimension table to form a separate small dimension; a query can then use the fields in the mini dimension instead of the large dimension.
This design significantly improves query efficiency.
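A minimal sketch of the idea, with all table and column names invented for illustration: pull the handful of frequently filtered attributes out of a wide customer dimension into their own small table.

mysql -h dw_host -u etl_user -p"${ETL_PASS}" dw <<'EOF'
-- dim_customer (the large dimension) and its columns are assumed names
CREATE TABLE dim_customer_mini AS
SELECT DISTINCT age_band, gender, income_band
FROM dim_customer;
EOF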
Types of fact tables:
Transaction-grain fact table (additive facts)
Periodic snapshot fact table (semi-additive facts)
Accumulating snapshot fact table (non-additive facts)
Factless fact table
Transaction-grain fact table
1. Ali open-source software: DataX
DataX is an offline synchronization tool for heterogeneous data sources, dedicated to achieving stable and efficient data synchronization between heterogeneous data sources including relational databases (MySQL, Oracle, etc.), HDFS, Hive, ODPS, HBase, FTP, and more. (Excerpt from Wikipedia)
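DataX jobs are described by a JSON config and launched through its datax.py entry script. A minimal sketch of an invocation, assuming DataX is unpacked at /opt/datax and a job file has already been written (both paths are placeholders):

python /opt/datax/bin/datax.py /opt/datax/job/mysql2hdfs.json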
2. Apache open-source software: Sqoop
Sqoop (pronounced: "skup") is an open-source tool used primarily to transfer data between Hadoop (Hive) and traditional databases (MySQL, PostgreSQL, etc.).
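A typical hedged example of the Hadoop-bound direction: import one MySQL table into HDFS with four parallel map tasks. The host, database, table, and target directory are placeholders:

sqoop import \
  --connect jdbc:mysql://db_host:3306/shop \
  --username etl_user -P \
  --table orders \
  --target-dir /user/etl/orders \
  -m 4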
echo "userId" > mysql.sql
echo "case" >> mysql.sql
sed -i -e '1d' m.txt            # drop the header line
cat m.txt | while read line
do
    par1=$(echo "${line}" | awk -F' ' '{print $1}')
    par2=$(echo "${line}" | awk -F' ' '{print $2}')
    id=$(echo "${line}" | awk -F' ' '{print $3}')
    echo "par1: ${par1}"
    echo "par2: ${par2}"
    # the tail of this line was truncated in the source;
    # "< ${par2} then ${id}" and the redirect are assumed completions
    echo "when hour_time >= ${par1} and hour_time < ${par2} then ${id}" >> mysql.sql
done
3) All scripts are stored in the database; the program parses the parameters, then calls and executes the scripts.
Refer to Kettle's design:
Each