Use Sqoop to import MySQL Data to Hadoop


The installation and configuration of Hadoop will not be discussed here.

Sqoop installation is also very simple.

Install and use Sqoop

After Sqoop is installed, you can test whether it can connect to MySQL (note: the MySQL JDBC driver jar must be placed under SQOOP_HOME/lib):

sqoop list-databases --connect jdbc:mysql://192.168.1.109:3306/ --username root --password 19891231

The command lists the databases on the MySQL server, which means Sqoop can connect and be used normally.
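If the command instead fails with a missing-driver error, the MySQL connector jar is not on Sqoop's classpath. A minimal sketch of the fix, assuming a SQOOP_HOME location and connector version (both are assumptions; substitute your own):

```shell
# Copy the MySQL JDBC driver into Sqoop's lib directory.
# Both the SQOOP_HOME path and the connector jar name/version below are
# assumptions; use the values for your own installation.
export SQOOP_HOME=/usr/local/sqoop
cp mysql-connector-java-5.1.31-bin.jar "$SQOOP_HOME/lib/"
```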


Next, import data from MySQL into Hadoop.

I have prepared an ID card data table with 3 million rows:
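The original table definition is not shown here. Purely as an illustration of what such a table might look like, a sketch of creating it in MySQL (the column names are assumptions; only the table name test_sfz and the connection details come from the import command used below):

```shell
# Hypothetical schema for the ID card table; the columns are assumptions,
# not the article's actual definition. Requires a running MySQL server.
mysql -h 192.168.1.109 -u root -p19891231 hadoop <<'SQL'
CREATE TABLE test_sfz (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- numeric key Sqoop can split on
  name VARCHAR(64),                            -- holder name (assumed column)
  id_number CHAR(18),                          -- ID card number (assumed column)
  address VARCHAR(255)                         -- address (assumed column)
);
SQL
```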

Start Hive first (run the hive command from the shell).

Then use sqoop to import data to hive:

sqoop import --connect jdbc:mysql://192.168.1.109:3306/hadoop --username root --password 19891231 --table test_sfz --hive-import

Sqoop launches a MapReduce job to carry out the import.

The import completed in 2 minutes 20 seconds, which is respectable for 3 million rows.
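Import time depends largely on how many parallel map tasks Sqoop runs. As a sketch of how to tune this (the mapper count of 4 is an arbitrary example, not a value from the article), Sqoop's -m and --split-by options control the parallelism:

```shell
# Same import as above, but with 4 parallel mappers, splitting the work
# on the table's numeric primary key. The value 4 is an arbitrary example;
# tune it to your cluster size.
sqoop import \
  --connect jdbc:mysql://192.168.1.109:3306/hadoop \
  --username root --password 19891231 \
  --table test_sfz \
  --hive-import \
  -m 4 \
  --split-by id
```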

You can see the imported table in Hive:

Let's use an SQL statement to test the data:

select * from test_sfz where id < 10;

Hive took nearly 25 seconds to complete this query, which is indeed quite slow (the same query takes almost no time in MySQL). This is expected, though: Hive compiles the query into a MapReduce job on Hadoop, and for a query this small the job startup overhead dominates the total time.

Next, let's test a more complex query on the data:
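The original query is not reproduced in this excerpt. Purely as an illustration of the kind of aggregate query Hive is built for (the id_number column and the region-code grouping are assumptions, not the article's actual query):

```shell
# Illustrative aggregate query: count rows per 6-digit region prefix of the
# ID number. The id_number column is an assumption, not from the article.
hive -e "
  SELECT substr(id_number, 1, 6) AS region_code, count(*) AS cnt
  FROM test_sfz
  GROUP BY substr(id_number, 1, 6)
  ORDER BY cnt DESC
  LIMIT 10;
"
```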

My setup is as follows: Hadoop runs in pseudo-distributed mode on a virtual machine, and the VM's OS is Ubuntu 12.04 64-bit.

