The installation and configuration of Hadoop will not be discussed here.
Install and use Sqoop
Sqoop installation is also very simple. Once Sqoop is installed, you can test whether it can connect to MySQL (note: the MySQL JDBC driver jar must be placed under SQOOP_HOME/lib):
sqoop list-databases --connect jdbc:mysql://192.168.1.109:3306/ --username root --password 19891231
The result is as follows:
That means Sqoop is working normally.
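The jar placement mentioned above can be sketched as follows. The install path and the connector version (5.1.31) are assumptions; use whatever directory and jar match your own setup (the script below only illustrates the expected layout in a temporary directory).

```shell
# Minimal sketch of the layout Sqoop expects: the MySQL JDBC driver
# jar inside $SQOOP_HOME/lib. Path and connector version are
# assumptions; substitute your real install and downloaded jar.
SQOOP_HOME=$(mktemp -d)        # stand-in for your real Sqoop home
mkdir -p "$SQOOP_HOME/lib"
# In a real install you would cp the downloaded connector jar here:
touch "$SQOOP_HOME/lib/mysql-connector-java-5.1.31-bin.jar"
ls "$SQOOP_HOME/lib"
```

Without that jar, the list-databases command above fails with a driver-not-found error.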
Next, import the data from MySQL into Hadoop.
I have prepared an ID-card data table with 3 million rows:
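The article does not show the table's DDL, so here is a hypothetical sketch of what a table like test_sfz ("sfz" = shenfenzheng, ID card) might look like; every column name and type is an assumption for illustration only. The script just writes the statement to a file.

```shell
# Hypothetical DDL for the test_sfz table; the real schema is not
# shown in the article, so these columns are assumptions.
cat > create_test_sfz.sql <<'EOF'
CREATE TABLE test_sfz (
  id      INT PRIMARY KEY,
  name    VARCHAR(50),
  sfz_no  CHAR(18),      -- 18-digit ID card number
  address VARCHAR(200)
);
EOF
cat create_test_sfz.sql
```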
Start Hive first (launch it from the command line with the hive command).
Then use Sqoop to import the data into Hive:
sqoop import --connect jdbc:mysql://192.168.1.109:3306/hadoop --username root --password 19891231 --table test_sfz --hive-import
Sqoop starts a MapReduce job to carry out the import.
The import completed in 2 minutes 20 seconds, which is quite good.
You can see the imported data table in Hive:
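One way to confirm the import landed is to run a few HiveQL statements against the imported table (test_sfz, from the import above). This hedged sketch only writes those statements to a file; on a machine with Hive installed you would feed it to Hive with `hive -f verify_import.hql`.

```shell
# HiveQL to confirm the import: list tables, inspect the schema, and
# count the rows. The script only prepares the file; running it
# against Hive is up to you.
cat > verify_import.hql <<'EOF'
show tables;
describe test_sfz;
select count(*) from test_sfz;
EOF
cat verify_import.hql
```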
Let's use an SQL statement to test the data:
select * from test_sfz where id < 10;
As you can see, Hive took nearly 25 seconds to complete this query, which is indeed quite slow (the same query takes almost no time in MySQL); but considering that Hive creates a job and runs it on Hadoop, the extra overhead is to be expected.
Next, we will test a more complex query on the data:
My machine configuration is as follows: Hadoop runs in pseudo-distributed mode inside a virtual machine, and the VM's OS is 64-bit Ubuntu 12.04.