How Spark Writes to HBase/Redis/MySQL/Kafka


Some concepts

A partition corresponds to a task, a task runs inside an executor, and an executor corresponds to a JVM.

    • A partition is an iterable collection of data
    • A task is essentially a thread that operates on a partition
Problem

How does a task use a Kafka producer to send data to Kafka? The same question applies to other systems such as HBase/Redis/MySQL.

Solution

The intuitive solution is to keep a producer pool (or share a single producer instance) inside each executor (JVM). The catch is that our code runs on the driver side, and only the functions we pass to operators are serialized and shipped to the executors. This raises a serialization problem: objects such as pools and connections are normally not serializable.
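For illustration, here is a minimal sketch of the naive approach and why it fails (the broker address and topic name are assumptions, and dstream stands for any DStream of strings): the producer is created on the driver, the closure captures it, and Spark cannot serialize it.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // assumed broker address
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

// Created on the driver ...
val producer = new KafkaProducer[String, String](props)

dstream.foreachRDD { rdd =>
  rdd.foreachPartition { iter =>
    // ... but captured by this closure: Spark must serialize `producer`
    // to ship the function to the executors, and KafkaProducer is not
    // serializable, so the job fails with a NotSerializableException.
    iter.foreach(msg => producer.send(new ProducerRecord[String, String]("events", msg)))
  }
}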

A simple solution is to define a Scala object, such as:

object SimpleHBaseClient {
  private val DEFAULT_ZOOKEEPER_QUORUM = "127.0.0.1:2181"

  // lazy, so the connection is created on first use (on the executor)
  private lazy val (table, conn) = createConnection

  def bulk(items: Iterator[Put]) = {
    items.foreach(conn.put(_))
    conn.flush()
    ....
  }
  ......
}

Then make sure this object is only referenced inside functions such as map/foreachRDD, for example:

dstream.foreachRDD { rdd =>
  rdd.foreachPartition { iter =>
    SimpleHBaseClient.bulk(iter)
  }
}

Why must the object be referenced only inside functions like foreachRDD/map?
Spark's mechanism is to first run the user's program as if it were a single-machine program (the runner is the driver), and then ship the functions given to each operator to the executors through the serialization mechanism. Functions passed to foreachRDD/map are sent to the executors for execution; they are not executed on the driver side. An object referenced inside such a function is serialized only as a stub, and the initialization of the object's internal fields actually happens on the executor side, which avoids the serialization problem.
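The same trick works for Kafka. As a minimal sketch (the broker address and topic name are assumptions, not part of the original article): wrap the producer in an object behind a lazy val, so it is constructed once per executor JVM the first time a task touches it.

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object SimpleKafkaClient {
  // lazy: initialized on first use, i.e. on the executor, not the driver
  private lazy val producer = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumed broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    val p = new KafkaProducer[String, String](props)
    // Flush and release buffered records when the executor JVM exits.
    sys.addShutdownHook(p.close())
    p
  }

  def send(topic: String, items: Iterator[String]): Unit =
    items.foreach(msg => producer.send(new ProducerRecord[String, String](topic, msg)))

  def flush(): Unit = producer.flush()
}

dstream.foreachRDD { rdd =>
  rdd.foreachPartition { iter =>
    // Only a stub of the object is serialized; the producer itself
    // is built lazily on the executor.
    SimpleKafkaClient.send("events", iter)
  }
}

One connection per executor falls out naturally from this pattern: the object is a JVM singleton, so every task running in the same executor reuses the same producer.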

A connection pool is a similar practice. However, we do not recommend using a pool, because Spark itself is already distributed: if there are, say, 100 executors and each executor keeps a pool of 10 connections, that is 100 * 10 = 1000 connections, which Kafka cannot withstand. It is enough for each executor to keep a single connection.

As for losing data when an executor crashes: it really depends on when you flush, and that is a performance trade-off.
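For example (continuing the hypothetical SimpleKafkaClient sketch above), flushing at the end of every partition bounds the loss window to the records of the partition currently in flight, at the cost of one synchronous flush per task:

dstream.foreachRDD { rdd =>
  rdd.foreachPartition { iter =>
    SimpleKafkaClient.send("events", iter)
    // A flush here trades throughput for a smaller loss window:
    // fewer records sit in the producer's buffer if the executor dies.
    SimpleKafkaClient.flush()
  }
}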
