Spark processes Twitter data stored in Hive

This article describes some practical tips for using Spark batch jobs to process Twitter data stored in Hive.

First we need to pull in some dependencies; the sbt build file looks like this:
Name: = "sentiment" Version: = "1.0"

Scalaversion: = "2.10.6"

Assemblyjarname in assembly: = "Sentiment.jar"

Librarydependencies + + "Org.apache.spark"% "spark-core_2.10"% "1.6.0"% "provided"
Librarydependencies + + "Org.apache.spark"% "spark-sql_2.10"% "1.6.0"% "provided"
Librarydependencies + + "Org.apache.spark"% "spark-hive"% "1.6.0"% "provided"
Librarydependencies + + "EDU.STANFORD.NLP"% "STANFORD-CORENLP"% "3.5.1"
Librarydependencies + + "EDU.STANFORD.NLP"% "STANFORD-CORENLP"% "3.5.1" classifier "models"

resolvers + = "Akka Repository" at "http://repo.akka.io/releases/"

Assemblymergestrategy in assembly: = {
Case PathList ("Meta-inf", XS @ _*) => Mergestrategy.discard
Case x => Mergestrategy.first}
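The assembly settings above rely on the sbt-assembly plugin, which the article does not show being enabled. A minimal project/plugins.sbt, assuming a plugin version compatible with this build, might look like:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")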

Write a Scala case class to store the parsed Twitter JSON data:
case class Tweet(coordinates: String, geo: String, handle: String,
                 hashtags: String, language: String,
                 location: String, msg: String, time: String,
                 tweet_id: String, unixtime: String,
                 user_name: String, tag: String,
                 profile_image_url: String,
                 source: String, place: String, friends_count: String,
                 followers_count: String, retweet_count: String,
                 time_zone: String, sentiment: String,
                 stanfordsentiment: String)

The following packages are imported:
import java.util.Properties
import com.vader.SentimentAnalyzer
import edu.stanford.nlp.ling.CoreAnnotations
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations
import edu.stanford.nlp.pipeline.StanfordCoreNLP
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.serializer.KryoSerializer

import org.apache.spark.sql._
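The Stanford CoreNLP and VADER classes imported above imply a helper that scores each tweet's sentiment, but the article never shows it. Below is a minimal sketch of a Stanford-based scorer; the method name detectSentiment and its structure are our own assumptions, not the article's code:

def detectSentiment(message: String): Int = {
  // Build a CoreNLP pipeline that runs the sentiment annotator.
  // In real use the pipeline would be created once (e.g. per partition), not per call.
  val props = new Properties()
  props.setProperty("annotators", "tokenize, ssplit, parse, sentiment")
  val pipeline = new StanfordCoreNLP(props)
  val annotation = pipeline.process(message)
  // Keep the sentiment class (0 = very negative .. 4 = very positive)
  // predicted for the longest sentence in the tweet.
  var mainSentiment = 0
  var longest = 0
  val sentences = annotation.get(classOf[CoreAnnotations.SentencesAnnotation]).iterator()
  while (sentences.hasNext) {
    val sentence = sentences.next()
    val tree = sentence.get(classOf[SentimentCoreAnnotations.AnnotatedTree])
    val sentiment = RNNCoreAnnotations.getPredictedClass(tree)
    val text = sentence.toString
    if (text.length > longest) {
      longest = text.length
      mainSentiment = sentiment
    }
  }
  mainSentiment
}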

A Spark code fragment, written in Scala, for reading data from Hive:
def main(args: Array[String]) {
  Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
  Logger.getLogger("org.apache.spark.storage.BlockManager").setLevel(Level.ERROR)
  val logger: Logger = Logger.getLogger("com.iteblog.sentiment.TwitterSentimentAnalysis")
  val sparkConf = new SparkConf().setAppName("TwitterSentimentAnalysis")
  sparkConf.set("spark.streaming.backpressure.enabled", "true")
  sparkConf.set("spark.cores.max", "32")
  sparkConf.set("spark.serializer", classOf[KryoSerializer].getName)
  sparkConf.set("spark.sql.tungsten.enabled", "true")
  sparkConf.set("spark.eventLog.enabled", "true")
  sparkConf.set("spark.app.id", "sentiment")
  sparkConf.set("spark.io.compression.codec", "snappy")
  sparkConf.set("spark.rdd.compress", "true")
  val sc = new SparkContext(sparkConf)
  val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
  import sqlContext.implicits._
  val tweets = sqlContext.read.json("hdfs://www.iteblog.com:8020/social/twitter")
  sqlContext.setConf("spark.sql.orc.filterPushdown", "true")
  tweets.printSchema()
  tweets.count
  tweets.take(5).foreach(println)

One thing to be aware of is that we need to create a HiveContext rather than the standard SQLContext.
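With the HiveContext in place, the job can also query existing Hive tables directly, which the plain SQLContext cannot do. A small illustrative query (the table and column names follow the DDL shown at the end of this article):

val sample = sqlContext.sql("SELECT handle, msg, time_zone FROM rawtwitter LIMIT 10")
sample.show()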

Before running the code, verify in Hive that the table storing the raw Twitter JSON data and the table that will hold the result data both exist; the result table is stored in ORC format:
beeline
!connect jdbc:hive2://localhost:10000/default;
!set showheader true;
set hive.vectorized.execution.enabled=true;
set hive.execution.engine=tez;
set hive.vectorized.execution.enabled=true;
set hive.vectorized.execution.reduce.enabled=true;
set hive.compute.query.using.stats=true;
set hive.cbo.enable=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
show tables;
describe sparktwitterorc;
describe twitterraw;
describe sparktwitterorc;
analyze table sparktwitterorc compute statistics;
analyze table sparktwitterorc compute statistics for columns;

The table named twitterraw stores the raw Twitter JSON data, and the table named sparktwitterorc stores the Spark processing results.
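The Beeline check above assumes the ORC result table already exists. A minimal sketch of its DDL, assuming its columns mirror the Tweet case class (abbreviated here), might be:

CREATE TABLE sparktwitterorc
(
  handle STRING,
  msg STRING,
  sentiment STRING,
  stanfordsentiment STRING
  -- ...remaining columns mirror the Tweet case class...
)
STORED AS ORC;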

How do you write data from an RDD or DataFrame to the Hive ORC table? The operation is as follows:
outputTweets.toDF().write.format("orc").mode(SaveMode.Overwrite).saveAsTable("default.sparktwitterorc")
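The outputTweets value used above is not defined in the article; presumably it is an RDD of Tweet built from the tweets DataFrame before the write. A hedged sketch of how it might be derived (the s helper, the field extraction, and the sentiment placeholders are our own assumptions):

val outputTweets = tweets.map { row =>
  // Assumes every JSON field arrives as a string, matching the Hive DDL.
  def s(name: String): String = Option(row.getAs[String](name)).getOrElse("")
  val msg = s("msg")
  Tweet(
    coordinates = s("coordinates"), geo = s("geo"), handle = s("handle"),
    hashtags = s("hashtags"), language = s("language"),
    location = s("location"), msg = msg, time = s("time"),
    tweet_id = s("tweet_id"), unixtime = s("unixtime"),
    user_name = s("user_name"), tag = s("tag"),
    profile_image_url = s("profile_image_url"),
    source = s("source"), place = s("place"), friends_count = s("friends_count"),
    followers_count = s("followers_count"), retweet_count = s("retweet_count"),
    time_zone = s("time_zone"),
    sentiment = "",                                    // placeholder for the VADER-based score
    stanfordsentiment = detectSentiment(msg).toString  // see the Stanford sketch above
  )
}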

Set the JVM-related parameters when compiling the program:
export SBT_OPTS="-Xmx2G -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=2G -Xss2M -Duser.timezone=GMT"
sbt -J-Xmx4G -J-Xms4G assembly

To submit the Spark job to the YARN cluster:
spark-submit --class com.iteblog.sentiment.TwitterSentimentAnalysis --master yarn-client Sentiment.jar

Finally, here is the CREATE TABLE statement for our rawtwitter table:
CREATE TABLE rawtwitter
(
  handle STRING,
  hashtags STRING,
  msg STRING,
  language STRING,
  time STRING,
  tweet_id STRING,
  unixtime STRING,
  user_name STRING,
  geo STRING,
  coordinates STRING,
  `location` STRING,
  time_zone STRING,
  retweet_count STRING,
  followers_count STRING,
  friends_count STRING,
  place STRING,
  source STRING,
  profile_image_url STRING,
  tag STRING,
  sentiment STRING,
  stanfordsentiment STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 'hdfs://www.iteblog.com:8020/social/twitter'
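Note that the org.apache.hive.hcatalog.data.JsonSerDe class ships in the hive-hcatalog-core jar, which is not on Hive's classpath in every installation. If the CREATE TABLE or subsequent queries fail with a ClassNotFoundException, adding the jar explicitly usually resolves it; the path below is only an example for an HDP-style layout:

ADD JAR /usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar;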
