How Spark writes to HBase/Redis/MySQL/Kafka

Some concepts
A partition corresponds to a task, a task runs inside an executor, and an executor corresponds to a JVM.
- A partition is a collection of data that can be iterated over
- A task is, in essence, a thread that operates on a partition (see the sketch right after this list)
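
To make the mapping concrete, here is a minimal sketch (the local SparkSession setup and the partition count of 4 are illustrative assumptions, not part of the original): the function passed to foreachPartition runs once per partition, and each run is one task executed by a thread inside an executor JVM.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("partition-task-demo")
      .master("local[2]")
      .getOrCreate()

    // 4 partitions => Spark schedules 4 tasks, each run by a thread in some executor JVM
    val rdd = spark.sparkContext.parallelize(1 to 100, numSlices = 4)

    rdd.foreachPartition { iter =>
      // 'iter' is the iterable collection of data belonging to this one partition
      println(s"this task processed ${iter.size} records of its partition")
    }

    spark.stop()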
Problem
How does a task use a Kafka producer to send data to Kafka? The same question applies to HBase/Redis/MySQL and the like.
Solution
The intuitive solution is to keep a producer pool (or share a single producer instance) inside the executor (JVM). The problem is that our code is written and runs on the driver side, and the functions it passes to operators are serialized and shipped to the executor side. This raises a serialization issue: objects such as pools and connections are normally not serializable.
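
To see the problem concretely, here is a sketch of the naive approach that fails (it assumes the Kafka Java client and a DStream[String] called dstream; the broker address and topic name are placeholders): the producer is created on the driver and captured by the closure, so Spark tries to serialize it and fails, because KafkaProducer is not serializable.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    val props = new Properties()
    props.put("bootstrap.servers", "127.0.0.1:9092") // placeholder broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    // Created on the driver...
    val producer = new KafkaProducer[String, String](props)

    dstream.foreachRDD { rdd =>
      rdd.foreachPartition { iter =>
        // ...but referenced inside the closure, so Spark must serialize it to ship it
        // to the executors -- this is where the serialization error occurs.
        iter.foreach(msg => producer.send(new ProducerRecord("some-topic", msg)))
      }
    }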
A simple solution is to define a singleton object, such as:
    object SimpleHBaseClient {
      private val DEFAULT_ZOOKEEPER_QUORUM = "127.0.0.1:2181"

      // lazy, so the actual connection is only created on first use (on the executor side)
      private lazy val (table, conn) = createConnection

      def bulk(items: Iterator[Put]) = {
        items.foreach(conn.put(_))
        conn.flush....
      }
      ......
    }
This object is then used only inside functions such as map/foreachRDD, for example:
    dstream.foreachRDD { rdd =>
      rdd.foreachPartition { iter =>
        SimpleHBaseClient.bulk(iter)
      }
    }
Why must it be placed inside functions such as foreachRDD/map?
Spark's mechanism is to first run the user's program as a single-machine program (the runner is the driver), and then ship the functions specified for each operator to the executors through the serialization mechanism. Functions such as those passed to foreachRDD/map are sent to the executors for execution and are not executed on the driver side. The object referenced inside them is serialized only as a stub, and the initialization of the object's internal fields actually happens on the executor side, which is how the serialization problem is avoided.
A pool is a similar practice. However, we do not recommend using a pool, because Spark itself is already distributed: there may be, say, 100 executors, and if each executor keeps a pool of 10 connections, that makes 100*10 = 1000 connections, which Kafka cannot withstand. It is enough for each executor to keep a single connection.
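
For the Kafka case raised in the Problem section, the same singleton-object pattern might look roughly like the sketch below (an illustration only: the broker address, topic name, String key/value types and per-partition flush are assumptions, not the original author's code):

    object SimpleKafkaClient {
      import java.util.Properties
      import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

      private val DEFAULT_BROKERS = "127.0.0.1:9092" // placeholder broker list

      // lazy, so the producer is created on first use -- i.e. inside the executor JVM,
      // giving one producer instance per executor
      private lazy val producer = {
        val props = new Properties()
        props.put("bootstrap.servers", DEFAULT_BROKERS)
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        new KafkaProducer[String, String](props)
      }

      def bulk(topic: String, items: Iterator[String]): Unit = {
        items.foreach(msg => producer.send(new ProducerRecord(topic, msg)))
        producer.flush() // how often to flush is the performance tradeoff mentioned below
      }
    }

    dstream.foreachRDD { rdd =>
      rdd.foreachPartition { iter =>
        SimpleKafkaClient.bulk("some-topic", iter) // "some-topic" is a placeholder topic name
      }
    }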
As for losing data when an executor dies, that really depends on when you flush; this is a performance tradeoff.