Big Data practices: ODI and Twitter (i)

Source: Internet
Author: User
Tags: hadoop fs

This article uses Twitter as a data source and describes how to use the Oracle Big Data platform and the Oracle Data Integrator (ODI) tool to extract data from Twitter, process it on the Hadoop platform, and finally load it into an Oracle database.

Data integration is divided into three phases: acquisition, collation, and analysis for decision making.

This article is grounded in practice: rather than simply presenting theory, it walks through a real big data processing case. The first step is to load data from Twitter into HDFS using Flume; the data actually lands in a Hive database. Oracle Data Integrator (ODI) is then used to reverse engineer and transform the JSON data in Hive, and finally the processed, structured data is loaded into a relational Oracle database.

The components involved

Building a complete Hadoop platform is fairly troublesome work, and this article will not describe it in much detail. In this example we directly use the Oracle Big Data virtual machine, which can be downloaded from the Internet (here); it already has CDH, Hive, and other components configured and can be used directly for learning. The following big data platform components in the virtual machine are used in this example:

1. Hadoop Framework
2. HDFS
3. Hive
4. Flume
5. ZooKeeper
6. Oracle Database
7. Oracle Data Integrator

Each component is not described in detail here; for descriptions and usage of each component, refer to other articles on the network. Why use Flume for data collection? Because Flume is configured in terms of a source and a target (sink), it is easy to get data from Twitter and load it into HDFS:


This example uses a ready-made Flume-Twitter plugin to get data from Twitter. The plugin is developed in Java using the open-source twitter4j SDK, which completely covers the Twitter API. Using this Flume plug-in, you can get data from Twitter and post it to the desired target platform as needed, without writing any code yourself.

Action steps

The first step is to create an account on Twitter: visit the developer section at https://dev.twitter.com/, and then at https://apps.twitter.com/ create the keys and tokens needed to access the data. These are used when configuring the Flume plug-in; the interface after the application is complete looks something like this:

If you do not use the Oracle Big Data platform virtual machine, you need to start Hadoop and the other relevant platform components, such as Hive and ZooKeeper, on your own installed system, and of course ODI 12c must be installed and configured as well. If you are using the Oracle virtual machine, the related services can simply be started.

Configure Flume to get the data from Twitter. First download the Flume software and copy the corresponding lib into the directory Flume expects, for example adding the downloaded jar to Flume's classpath:

cp /usr/lib/flume-ng/plugins.d/twitter-streaming/lib/ /var/lib/flume-ng/plugins.d/twitter-streaming/lib/
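The jar file name itself is not given above. As a hedged sketch of Flume's plugins.d convention (the directory name twitter-streaming follows the path in the original, while the jar file name here is hypothetical), the copy step looks roughly like this:

$ sudo mkdir -p /usr/lib/flume-ng/plugins.d/twitter-streaming/lib
# copy the downloaded plugin jar into the plugin's lib directory (hypothetical file name)
$ sudo cp flume-twitter-source.jar /usr/lib/flume-ng/plugins.d/twitter-streaming/lib/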

Create flume.conf and edit this file according to the Twitter keys created earlier and your Hadoop settings, roughly as follows:
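The original configuration screenshot is not reproduced here. A minimal sketch, modeled on the widely used Cloudera flume-twitter example (the agent name TwitterAgent, the source class, the keywords, and the HDFS URL are assumptions; substitute your own keys, tokens, and NameNode address), might look like this:

# a single agent with one Twitter source, one memory channel, and one HDFS sink
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

# source: the flume-twitter plugin class (assumed class name, from the Cloudera example)
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = <your consumer key>
TwitterAgent.sources.Twitter.consumerSecret = <your consumer secret>
TwitterAgent.sources.Twitter.accessToken = <your access token>
TwitterAgent.sources.Twitter.accessTokenSecret = <your access token secret>
TwitterAgent.sources.Twitter.keywords = hadoop, bigdata, oracle

# sink: write the raw JSON events into the HDFS directory the Hive table will point to
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:8020/user/oracle/tweets
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000

# channel: a simple in-memory channel between source and sink
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100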


Next, configure Hive:

1. To be able to parse JSON, download a compiled JSON SerDe (see the sketch after this list for how it is registered in Hive).

2. Create the directories and permissions required for Hive:

$ sudo -u hdfs hadoop fs -mkdir /user/oracle/warehouse
$ sudo -u hdfs hadoop fs -chown -R oracle:oracle /user/hive
$ sudo -u hdfs hadoop fs -chmod 750 /user/oracle
$ sudo -u hdfs hadoop fs -chmod 770 /user/oracle/warehouse

3. Configure the Hive metastore. In this example it was created in MySQL; the details are not covered here.

4. Create the tweets table in Hive, as sketched below.
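The original DDL appears only as a screenshot in the source. A minimal sketch, assuming the SerDe from step 1 is the one from the Cloudera cdh-twitter-example (the jar path, the SerDe class com.cloudera.hive.serde.JSONSerDe, and the reduced column list are assumptions), might look like this; the LOCATION is the HDFS directory Flume writes to:

-- assumed jar name and path for the compiled JSON SerDe
ADD JAR /usr/lib/hive/lib/hive-serdes-1.0-SNAPSHOT.jar;

-- external table over the raw JSON files written by Flume
CREATE EXTERNAL TABLE tweets (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN,
  retweet_count INT,
  text STRING,
  `user` STRUCT<screen_name:STRING, name:STRING, followers_count:INT>,
  entities STRUCT<hashtags:ARRAY<STRUCT<text:STRING>>>,
  in_reply_to_screen_name STRING
)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/oracle/tweets';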

Back in Flume, start crawling data:

1. Create the /user/oracle/tweets directory in HDFS to store the data crawled by Flume; it is also the external location referenced by the Hive table (see the sketch after this list).

2. Start Flume using the following statement:
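The original screenshots of these two steps are not reproduced. A hedged sketch, assuming the agent is named TwitterAgent as in the flume.conf sketch above and that the configuration lives under /etc/flume-ng/conf (an assumed path):

# create the HDFS directory Flume will write to and Hive will read from
$ hadoop fs -mkdir /user/oracle/tweets
# start the agent defined in flume.conf; --name must match the agent name in the file
$ flume-ng agent --conf /etc/flume-ng/conf --conf-file /etc/flume-ng/conf/flume.conf --name TwitterAgent -Dflume.root.logger=INFO,console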

In the run log you can see that Flume is constantly writing the crawled data to files in the corresponding directory:

Let the Flume agent run for a few minutes; once you have confirmed that data has been captured, you can stop the process. Then, in the Hadoop web console, you can view the data files in the corresponding HDFS directory:

The contents of any one of the data files look like this:
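If you prefer the command line to the web console, a hedged equivalent is below; FlumeData is the HDFS sink's default file prefix, and the exact file name will differ in your run:

$ hadoop fs -ls /user/oracle/tweets
# print the first JSON record of one file (substitute a real file name from the listing)
$ hadoop fs -cat /user/oracle/tweets/FlumeData.<timestamp> | head -1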

The data is in JSON format. In Hive you can check the number of records fetched:
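The query itself is shown only as a screenshot in the source; assuming the external table created above is named tweets, it is simply:

hive> SELECT COUNT(*) FROM tweets;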


After only a few minutes of Flume data extraction we already have 16,161 records. This is the power of data generation in the big data age.
