"Winning the Cloud Computing and Big Data Era"
Spark Asia-Pacific Research Institute 100-Session Public Lecture Series [Session 15 Q&A Highlights]
Q1: What is the relationship between AppClient, Worker, and Master?
A: In Standalone mode, AppClient is the application's representative on the client machine when SparkContext.runJob is invoked; it performs tasks such as registerApplication.
Once the application has registered, the Master sends a message back to the client via Akka;
the Driver then manages Tasks and coordinates the Executors on the Workers to carry out the job.
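As a sketch, the launch that triggers this AppClient registration looks like the following spark-submit invocation; the master hostname, main class, and jar path are placeholders, not values from the lecture:

```shell
# Submit an application to a standalone Master (hostname/port are placeholders).
# spark-submit creates the SparkContext on the client; its AppClient then
# registers the application with the Master, which replies via the cluster's
# RPC layer (Akka in this era of Spark).
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode client \
  --class com.example.MyApp \
  /path/to/my-app.jar
```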
Q2: How big is the difference between Spark's shuffle and Hadoop's shuffle?
Q3: How does Spark handle HA?
A: In Standalone mode, Worker nodes are fault-tolerant automatically; for HA of the Master itself, ZooKeeper is generally used. As the Spark documentation puts it:
Utilizing ZooKeeper to provide leader election and some state storage, you can launch multiple Masters in your cluster connected to the same ZooKeeper instance. One will be elected "leader" and the others will remain in standby mode. If the current leader dies, another Master will be elected, recover the old Master's state, and then resume scheduling. The entire recovery process (from the time the first leader goes down) should take between 1 and 2 minutes. Note that this delay only affects scheduling new applications; applications that were already running during Master failover are unaffected.
In YARN and Mesos modes, the ResourceManager is likewise typically made highly available with ZooKeeper.
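A minimal configuration sketch of the ZooKeeper-based Master HA described above, placed in spark-env.sh on each Master node; the ZooKeeper host list and znode directory are assumed placeholders:

```shell
# spark-env.sh on every (active and standby) standalone Master node.
# recoveryMode=ZOOKEEPER enables leader election and state recovery;
# zookeeper.url points at the quorum (placeholder hosts);
# zookeeper.dir is the znode where recovery state is stored.
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```

With this in place, multiple Masters started against the same ZooKeeper quorum elect one leader, and applications can list all Master addresses (e.g. spark://host1:7077,host2:7077) so they can fail over.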