A collection of articles, news, trends, analysis, and practical advice about Spark Streaming and Kafka examples, gathered from alibabacloud.com.
Contents of this issue:
1. Online dynamic computation of the hottest product rankings by category: case review and demonstration
2. Using the case to walk through Spark Streaming's runtime source code

First, the case code. We dynamically compute the hottest product rankings within each e-commerce category, for example the three hottest phones in the phone category, the three hottest TVs in the TV category, and so on.
package com.dt.sp
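As a rough sketch of the ranking logic described above, the plain-Java example below computes the top-N products per category for one window of click records. The class name, the click data, and the helper method are all hypothetical stand-ins for what reduceByKeyAndWindow plus a transform would produce per batch in the actual Spark Streaming job.

```java
import java.util.*;
import java.util.stream.*;

public class TopNPerCategory {
    // Count clicks per (category, product) and keep the top N per category,
    // mirroring what reduceByKeyAndWindow + transform would do per batch.
    static Map<String, List<Map.Entry<String, Long>>> topN(List<String[]> clicks, int n) {
        Map<String, Map<String, Long>> counts = new HashMap<>();
        for (String[] c : clicks) {
            counts.computeIfAbsent(c[0], k -> new HashMap<>())
                  .merge(c[1], 1L, Long::sum);
        }
        Map<String, List<Map.Entry<String, Long>>> result = new HashMap<>();
        counts.forEach((cat, prods) -> result.put(cat,
            prods.entrySet().stream()
                 .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                 .limit(n)
                 .collect(Collectors.toList())));
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical click records (category, product); in the real job these
        // would arrive on a DStream, here a list stands in for one window.
        List<String[]> clicks = Arrays.asList(
            new String[]{"phone", "Huawei"}, new String[]{"phone", "Huawei"},
            new String[]{"phone", "Huawei"}, new String[]{"phone", "iPhone"},
            new String[]{"phone", "iPhone"}, new String[]{"phone", "Mi"},
            new String[]{"tv", "Sony"}, new String[]{"tv", "Sony"},
            new String[]{"tv", "Samsung"});
        // Prints "Huawei": the most-clicked product in the phone category.
        System.out.println(topN(clicks, 3).get("phone").get(0).getKey());
    }
}
```

In the real job the same per-category top-N would be recomputed on every sliding window rather than once over a static list.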
1. Background overview
The business requires that data arriving from a message-queue middleware be inner-joined in real time with an existing dimension table, for subsequent statistics. The dimension table is huge, with nearly 30 million records (about 3 GB of data), and the cluster's resources are strained, so we want to squeeze as much performance and throughput out of Spark Streaming as possible.
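A minimal sketch of the join itself, assuming the dimension table can be held in an in-memory hash map (in the real job the ~30M-row table would be broadcast or loaded once per executor); the names, keys, and records below are hypothetical:

```java
import java.util.*;

public class MapSideJoin {
    // Inner join one micro-batch against an in-memory dimension map: keep only
    // records whose key exists, attaching the dimension value.
    static List<String> join(Map<Long, String> dim, List<Map.Entry<Long, String>> batch) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<Long, String> rec : batch) {
            String dimValue = dim.get(rec.getKey());   // O(1) hash probe
            if (dimValue != null) {                    // drop non-matching keys
                out.add(rec.getKey() + "," + rec.getValue() + "," + dimValue);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical dimension table keyed by user id.
        Map<Long, String> dim = new HashMap<>();
        dim.put(1L, "Beijing");
        dim.put(2L, "Shanghai");
        // One micro-batch of (userId, event) records from the middleware.
        List<Map.Entry<Long, String>> batch = Arrays.asList(
            Map.entry(1L, "click"), Map.entry(3L, "view"), Map.entry(2L, "buy"));
        System.out.println(join(dim, batch)); // the record for id 3 is dropped
    }
}
```

The point of the map-side approach is that each streaming record costs one hash probe instead of a shuffle of the 3 GB table on every batch.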
--class com.iwaimai.huatuo.QNetworkWordCount --master spark://doctorqdemacbook-pro.local:7077 /Users/doctorq/Documents/Developer/idea_workspace/streaming/target/scala-2.11/streaming-assembly-1.0.jar localhost 9999
Summary
This example mainly serves to walk through the workflow of developing a Scala project and packaging it for submission.
When sizing executors you need to assess the data scale and the available resources, including how much of the existing capacity is idle, in order to decide, for example, whether more resources are needed. Within each batchDuration the stream's data is split into blocks (shards); processing each shard requires cores, and if the cores are insufficient you should apply for more executors. Spark Streaming also provides an elasticity mechanism: compare the data ingestion rate with the processing rate to see whether batches can be processed in time.
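The sizing reasoning above can be sketched as simple arithmetic. The batch duration, block interval, and core counts below are hypothetical examples, not values from the original job:

```java
public class BatchSizing {
    // Blocks (tasks) generated per batch: Spark Streaming cuts the received
    // stream into one block every blockIntervalMs.
    static long tasksPerBatch(long batchDurationMs, long blockIntervalMs) {
        return batchDurationMs / blockIntervalMs;
    }

    // Executors needed so every block of a batch can get a core in parallel,
    // rounded up. Receiver cores are ignored for simplicity.
    static long executorsNeeded(long tasks, long coresPerExecutor) {
        return (tasks + coresPerExecutor - 1) / coresPerExecutor;
    }

    public static void main(String[] args) {
        // Hypothetical figures: 10s batchDuration, 200ms block interval,
        // 4-core executors.
        long tasks = tasksPerBatch(10_000, 200);   // 50 blocks per batch
        System.out.println(tasks + " tasks -> "
            + executorsNeeded(tasks, 4) + " executors"); // 50 tasks -> 13 executors
    }
}
```

If the estimate says more cores are needed than the cluster has idle, that is the signal to request more executors or lengthen the block interval.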
We must find a good balance between the two parameters: we do not want the data blocks to be too large, nor do we want to wait too long for data locality. The goal is for all tasks to finish within a few seconds.
Therefore, we changed the locality wait from 3 s to 1 s, and we also changed the block interval to 1.5 s.
--conf "spark.locality.wait=1s" --conf "spark.streaming.blockInterval=1500ms" \
2.6 Merge temporary files
On the ext4 file system, we recommend that you enable
The exception here occurs because Kafka is reading logs from a specified offset range (here 264245135 to 264251742); the messages are too large, so the total size of the fetch exceeds the fetch.message.max.bytes setting (default 1024*1024), which causes this error. The workaround is to increase the value of fetch.message.max.bytes in the Kafka client's parameters.
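A sketch of the configuration fix, assuming the old high-level consumer API that the article refers to; the connection values and the 10 MiB figure are hypothetical:

```java
import java.util.Properties;

public class KafkaFetchConfig {
    // Build a consumer config that raises fetch.message.max.bytes above the
    // 1024*1024 default, so fetching an oversized message no longer fails.
    static Properties consumerProps(int maxFetchBytes) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // hypothetical address
        props.put("group.id", "log-consumer");              // hypothetical group
        // Must be at least as large as the largest message the broker holds.
        props.put("fetch.message.max.bytes", Integer.toString(maxFetchBytes));
        return props;
    }

    public static void main(String[] args) {
        Properties props = consumerProps(10 * 1024 * 1024); // 10 MiB
        System.out.println(props.getProperty("fetch.message.max.bytes"));
    }
}
```

Note that the broker side must also allow messages of this size; raising only the consumer's fetch size cannot shrink the messages already in the log.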
First, development the Java way:
1. Pre-development preparation: assume that you have already set up the Spark cluster.
2. The development environment uses an Eclipse Maven project; you need to add the Spark Streaming dependency.
3. Spark Streaming computes based on
() }
Integration with Spark SQL and DataFrames
Example
This is similar to the control logic.
Cache
For window operations, the received data is persisted in memory by default.
For Flume and Kafka sources, the received data is replicated by default, i.e. saved in two copies.
Checkpoint
The result RDDs of stateful stream computations are checkpointed to HDFS.
Exactly once (exactly-once): no loss, no redundancy. This is the best case, although it is difficult to guarantee in all use cases.
Another aspect is state management: there are different policies for state storage. Spark Streaming writes data to a distributed file system (for example HDFS), Samza uses an embedded key-value store, and in Storm you either roll
They allow you to run your data-flow code in parallel across a set of fault-tolerant machines. In addition, they all provide a simple API that hides the complexity of the underlying implementation.
The three frameworks use different terms, but the concepts they represent are very similar.
Comparison chart
The following table summarizes some of the differences. Message-delivery guarantees fall into three main categories:
At most once (at-most-once): messages may be lost, which is
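To illustrate why at-least-once delivery combined with an idempotent sink yields effectively exactly-once results, here is a small plain-Java simulation; the message ids and values are invented:

```java
import java.util.*;

public class DeliverySemantics {
    // Apply an at-least-once delivery (possibly containing duplicates) to an
    // idempotent sink keyed by message id: replays overwrite rather than
    // double-count, so the final state looks exactly-once.
    static Map<Long, Integer> applyToSink(List<long[]> delivered) {
        Map<Long, Integer> sink = new HashMap<>();
        for (long[] msg : delivered) {
            sink.put(msg[0], (int) msg[1]);  // last write per id wins
        }
        return sink;
    }

    public static void main(String[] args) {
        // Simulated at-least-once stream: message id 2 is redelivered after a retry.
        List<long[]> delivered = Arrays.asList(
            new long[]{1, 10}, new long[]{2, 20}, new long[]{2, 20}, new long[]{3, 30});
        Map<Long, Integer> sink = applyToSink(delivered);
        int total = sink.values().stream().mapToInt(Integer::intValue).sum();
        System.out.println(total); // 60: the duplicate is not double-counted
    }
}
```

With at-most-once semantics the retry would never happen and message 2 could be lost; with a non-idempotent sink (e.g. an increment) the duplicate would be counted twice.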
Outline: background; Spark parameter optimization (increase executor-cores, resize executor-memory, set num-executors); decompress-first policy; working around a message-queue bug; rate limiting on the PHP side. Actions: processing speed increased from 1 to 10 (peak and off-peak status description); increased from 10 to 50 (peak and off-peak status description); use pipelining to raise the QPS of Redis; from 50 to full scale; peak-period state analysis.
Architecture
1. Join for different time slice data streams
After this first attempt, I looked at the Spark WebUI logs and found that because Spark Streaming needed to run every second to compute the data in real time, the program had to read HDFS every second to fetch the data for the inner join.
Spark Streaming would normally have cached the data it was processing to reduce I/O and increase performance.
To better understand the processing mechanism of the Spark Streaming sub-framework, you first have to be clear about its most basic concepts.
1. Discretized stream (DStream): Spark Streaming's abstract description of a continuous, real-time internal data stream, i.e. a real-time data stream that we are processing, in
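The discretization idea can be mimicked without Spark by bucketing timestamped events into batch-interval windows, where each bucket plays the role of one RDD in the DStream. The timestamps and payloads below are hypothetical:

```java
import java.util.*;

public class Discretize {
    // Bucket timestamped events into batch-interval windows: each bucket
    // stands in for one RDD of the DStream. Timestamps are in milliseconds.
    static Map<Long, List<String>> toBatches(List<Object[]> events, long batchMs) {
        Map<Long, List<String>> batches = new TreeMap<>();
        for (Object[] e : events) {
            long bucket = (Long) e[0] / batchMs;   // which batch this event falls in
            batches.computeIfAbsent(bucket, k -> new ArrayList<>()).add((String) e[1]);
        }
        return batches;
    }

    public static void main(String[] args) {
        // Hypothetical (timestampMs, payload) events and a 1000ms batch interval.
        List<Object[]> events = Arrays.asList(
            new Object[]{120L, "a"}, new Object[]{950L, "b"},
            new Object[]{1001L, "c"}, new Object[]{2500L, "d"});
        System.out.println(toBatches(events, 1000)); // {0=[a, b], 1=[c], 2=[d]}
    }
}
```

In real Spark Streaming the bucketing happens continuously and each bucket is materialized as an RDD on which the job's transformations run.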
= simpleHBaseClient.bulk(iter) } }
Why must you make sure to put this inside functions like foreachRDD/map? Spark's mechanism is to first run the user's program as a single machine (the runner is the driver); the driver then ships the functions specified by each operator to the executors through the serialization mechanism. Functions such as foreachRDD/map are sent to the executors for execution, and the driver side does not execute them.
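A plain-Java sketch of the rule just described: the (hypothetical) non-serializable client is constructed inside the per-partition function, mimicking rdd.foreachPartition, rather than captured from the driver:

```java
import java.util.*;

public class PerPartitionClient {
    // Stand-in for a non-serializable client (e.g. an HBase connection).
    static class FakeClient {
        int bulk(Iterator<String> rows) {
            int n = 0;
            while (rows.hasNext()) { rows.next(); n++; }
            return n;
        }
    }

    // Mimics rdd.foreachPartition(iter -> { Client c = new Client(); c.bulk(iter); }):
    // one client per partition, created where the work actually runs, so nothing
    // non-serializable has to travel from the driver to the executors.
    static int writeAll(List<List<String>> partitions) {
        int written = 0;
        for (List<String> part : partitions) {
            FakeClient client = new FakeClient();   // created "on the executor"
            written += client.bulk(part.iterator());
        }
        return written;
    }

    public static void main(String[] args) {
        List<List<String>> partitions = Arrays.asList(
            Arrays.asList("r1", "r2"), Arrays.asList("r3"));
        System.out.println(writeAll(partitions)); // 3 rows written
    }
}
```

If the client were instead created once outside the loop (i.e. on the driver) and captured by the closure, Spark would try to serialize it and fail at submission time.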
Contents of this issue:
BatchDuration and processing time
Dynamic Batch Size
Spark Streaming has many operators. Are there operators whose time consumption is expected to follow an approximately linear law? For example: do join operations and ordinary map operations show a consistent linear pattern in processing time?
=channel1
# Other properties are specific to each type of
# source, channel, or sink. In this case, we
# specify the capacity of the memory channel.
tier1.channels.channel1.capacity = 100
The Spark start command is as follows:
spark-submit --driver-memory 512m --executor-memory 512m --executor-cores 1 --num-executors 3 --class com.hark.SparkStreamingFlumeTest --deploy-mode cluster --master yarn /opt/spark
Without further ado, let's start with an example to get a feel for things. This example comes from Spark's own examples; the basic steps are as follows:
(1) Use the following command to enter a stream of messages:
$ nc -lk 9999
(2) In a new terminal, run NetworkWordCount to count the words and print the output:
$ bin/run-
Kafka Consumer API Example
1. Auto-commit offsets
Description reference: http://blog.csdn.net/xianzhen376/article/details/51167333
Properties props = new Properties();
/* The address of the Kafka service; not all brokers need to be specified */
props.put("bootstrap.servers", "localhost:9092");
/* Specify the consumer group */
props.put("group.id", "test");
/* Whether to automatically commit t