[Spark Base] Spark Streaming data reception optimization


Thanks to the original author; original article: https://www.jianshu.com/p/a1526fbb2be4

Before reading this article, please first read "Spark Streaming data generation and import-related memory analysis". This article focuses on one path: from consuming data out of Kafka to landing that data in the BlockManager.

This article reflects personal experience. When applying it, I suggest first building a solid understanding of the internal principles rather than blindly copying the trick of spreading receivers evenly across your executors.

In "Spark Streaming data generation and import-related memory analysis" I made this point:

I found that under heavy data volumes, the component most likely to crash is the executor hosting the receiver. I suggest the Spark Streaming team make it possible to write received data to multiple BlockManagers.

The current API does not provide this directly. However, Spark Streaming can read multiple topics at the same time, with one InputDStream per topic. We can repurpose this feature with the following code:

val kafkaDStreams = (1 to kafkaDStreamsNum).map { _ =>
  KafkaUtils.createStream(
    ssc,
    zookeeper,
    groupId,
    Map("your-topic" -> 1),
    if (memoryOnly) StorageLevel.MEMORY_ONLY else StorageLevel.MEMORY_AND_DISK_SER_2)
}
val unionDStream = ssc.union(kafkaDStreams)
unionDStream

kafkaDStreamsNum is a value you define yourself: how many receivers (and therefore how many executors running receivers) you want to start to consume Kafka data. My rule of thumb is 1/4 of the number of executors. Because the received data is generally replicated, this means received blocks can occupy at most about 1/2 of the storage memory.
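As a sketch of that heuristic (the names numExecutors and kafkaDStreamsNum are illustrative variables, not part of any Spark API):

```scala
// Illustrative sizing: numExecutors is an assumed value for your job;
// the 1/4 heuristic comes from the text above.
val numExecutors     = 16
val kafkaDStreamsNum = math.max(1, numExecutors / 4)  // receivers on ~1/4 of executors
```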

Also, be sure to set spark.streaming.receiver.maxRate to suit your system. If you start N receivers, the system will ingest at most N * maxRate records per second; in other words, maxRate is applied per receiver.
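As a hedged sketch of that setting (the rate of 1000 records per second per receiver is an illustrative value, not a recommendation):

```scala
import org.apache.spark.SparkConf

// Cap each receiver at maxRate records/second; with N receivers the
// aggregate intake is bounded by N * maxRate. 1000 is illustrative.
val conf = new SparkConf()
  .setAppName("streaming-receiver-tuning")
  .set("spark.streaming.receiver.maxRate", "1000")
```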

Reduce the use of non-storage memory

That is, we try to keep received data inside Spark's storage memory. The method is to turn spark.streaming.blockInterval down a bit. This does have a side effect: more input blocks are produced. Each receiver produces batchInterval * 1000 / blockInterval input blocks per batch (assuming batchInterval is in seconds and blockInterval in milliseconds). Honestly, I do not know everything blockInterval affects; the real point is to relieve GC pressure, and GC is a big problem in real-time computing.
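The blocks-per-batch arithmetic can be checked directly (a 10-second batch and the 200 ms default blockInterval are assumed example values):

```scala
// Each receiver produces batchInterval(ms) / blockInterval(ms) blocks per batch.
val batchIntervalSec = 10    // illustrative batch interval, in seconds
val blockIntervalMs  = 200   // spark.streaming.blockInterval default is 200 ms
val blocksPerBatch   = batchIntervalSec * 1000 / blockIntervalMs
// 10 * 1000 / 200 = 50 blocks per receiver per batch
```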

Reduce the memory of a single executor

It is generally not recommended to give executors too much memory in Spark Streaming. GC is a real pressure, and with a large heap a full GC is even more frightening: it can easily drag down the entire computation. Having more, smaller executors is more fault-tolerant.
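Following that advice, a sketch preferring more, smaller executors (all numbers are illustrative; spark.executor.instances and spark.executor.memory are standard Spark configuration keys):

```scala
import org.apache.spark.SparkConf

// Several modest executors suffer shorter GC pauses than a few huge ones,
// and losing one hurts less. All numbers here are illustrative.
val conf = new SparkConf()
  .set("spark.executor.instances", "16") // more executors...
  .set("spark.executor.memory", "4g")    // ...each with a modest heap
```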

Author: William Zhu
Link: https://www.jianshu.com/p/a1526fbb2be4
Source: Jianshu
Copyright belongs to the author. For commercial reprinting, please contact the author for authorization; for non-commercial reprinting, please credit the source.
