Use Sqoop to Import MySQL Data into Hadoop
The installation and configuration of Hadoop will not be discussed here.
Sqoop installation is also very simple. After installing Sqoop, you can test whether it can connect to MySQL (note: the MySQL JDBC driver jar must be placed under $SQOOP_HOME/lib):

sqoop list-databases --connect jdbc:mysql://192.168.1.109:3306/ --username root --password 19891231

If the command prints the list of databases on the server, Sqoop is ready for use.
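If Sqoop instead complains that it cannot load the MySQL driver, the fix is simply to copy the MySQL Connector/J jar into Sqoop's lib directory and re-run the check. A minimal sketch, assuming a Connector/J 5.1.x jar (the exact file name depends on the version you downloaded):

# hypothetical jar name; use whichever Connector/J version you actually have
cp mysql-connector-java-5.1.31-bin.jar $SQOOP_HOME/lib/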
Next, import data from MySQL to Hadoop. I have prepared a table with 3 million ID-card records (a hypothetical sketch of the table layout is given at the end of this section). Start Hive (simply run the hive command from the shell), then use Sqoop to import the data into Hive:

sqoop import --connect jdbc:mysql://192.168.1.109:3306/hadoop --username root --password 19891231 --table test_sfz --hive-import

Sqoop launches a job to carry out the import. The import completed in 2 minutes 20 seconds, which is good. In Hive we can now see the imported table. Let's test a SQL statement:

select * from test_sfz where id < 10;

Hive took almost 25 seconds to complete this query, which is indeed quite slow (the same query takes almost no time in MySQL), but keep in mind that Hive creates a job and runs it on Hadoop, so there is considerable overhead. Next, we will test more complex queries against the data. The machine configuration: Hadoop runs in pseudo-distributed mode inside a virtual machine, and the virtual machine's OS is Ubuntu 12.04 64-bit.
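The original article shows the source table only as a screenshot, so purely as an illustration, the test_sfz table in MySQL might be defined roughly as in the sketch below; the column names are assumptions, not taken from the article:

-- hypothetical layout for the test_sfz source table; the real columns are not given in the article
CREATE TABLE test_sfz (
  id      INT PRIMARY KEY AUTO_INCREMENT,  -- numeric key, matches the id < 10 query above
  sfz     VARCHAR(18),                     -- ID-card number
  name    VARCHAR(64),
  address VARCHAR(255)
);

After the Sqoop job finishes, a quick sanity check in Hive is to count the rows, which should report roughly 3 million:

select count(*) from test_sfz;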