Spark's RPC module was refactored in the 1.x series. The earlier code used Akka classes extensively, and this abstraction layer was introduced so that Akka could eventually be removed from the project's dependencies. Let's look at the main classes of the module.
Adding all the classes of the module to a UML tool such as EA makes it easier to understand, as does knowing Akka: Akka has the two classes Actor and ActorRef, one for receiving messages and one for sending them, and RpcEndpoint and RpcEndpointRef are exactly the module's two corresponding classes. Here is a general introduction to the classes, together with a few Scala features they use.

1. RpcAddress: a case class representing a hostname and port number. (A case class can also define its own methods, which I previously thought was not possible.) Its companion object provides factory methods that construct an RpcAddress from a URI or from a String; a simplified sketch of this class follows the RpcTimeout code below.

2. RpcTimeout: represents a timeout. The responsibilities of this class feel a bit muddled; its main method is:
```scala
def awaitResult[T](awaitable: Awaitable[T]): T = {
  try {
    Await.result(awaitable, duration)
  } catch addMessageIfTimeout
}
```
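To make these two classes concrete, here is a minimal, self-contained sketch. The shapes are simplified (the real classes live in org.apache.spark.rpc, and the real awaitResult also rewrites timeout exceptions via addMessageIfTimeout, as shown above); fromURIString and the single-argument RpcTimeout constructor are assumptions for illustration:

```scala
import java.net.URI
import scala.concurrent.{Await, Awaitable, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Simplified RpcAddress: a case class that still defines extra members,
// plus a companion-object factory that parses a URI string.
case class RpcAddress(host: String, port: Int) {
  def hostPort: String = s"$host:$port"
}

object RpcAddress {
  def fromURIString(uri: String): RpcAddress = {
    val u = new URI(uri)
    RpcAddress(u.getHost, u.getPort)
  }
}

// Simplified RpcTimeout: wraps a duration and blocks on an Awaitable,
// the same Await.result call shown above.
class RpcTimeout(duration: FiniteDuration) {
  def awaitResult[T](awaitable: Awaitable[T]): T =
    Await.result(awaitable, duration)
}

object RpcSketch extends App {
  val addr = RpcAddress.fromURIString("spark://localhost:7077")
  println(addr.hostPort)                            // localhost:7077

  val timeout = new RpcTimeout(5.seconds)
  println(timeout.awaitResult(Future(1 + 1)))       // 2
}
```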
 
It returns the awaitable's result within the specified time. Await is an object in the Scala concurrency library: Await.result returns the result the awaitable produces within the duration, while Await.ready waits until the awaitable's state becomes completed within the duration; both methods block (a short illustration follows the factory code below). Awaitable is roughly the counterpart of Java's Future, and Scala's own Future class extends it. RpcTimeout's companion object mainly reads timeout values from the configuration and uses them to construct RpcTimeout instances.

3. RpcEnvFactory: used to create an RpcEnv. In RpcEnv you can see how it is used:
```scala
private def getRpcEnvFactory(conf: SparkConf): RpcEnvFactory = {
  // Add more RpcEnv implementations here
  val rpcEnvNames = Map("akka" -> "org.apache.spark.rpc.akka.AkkaRpcEnvFactory")
  val rpcEnvName = conf.get("spark.rpc", "akka")
  val rpcEnvFactoryClassName = rpcEnvNames.getOrElse(rpcEnvName.toLowerCase, rpcEnvName)
  Utils.classForName(rpcEnvFactoryClassName).newInstance().asInstanceOf[RpcEnvFactory]
}
```
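As an aside, here is a short illustration of the Await.ready / Await.result distinction mentioned in the RpcTimeout discussion above; this is plain Scala standard library, nothing Spark-specific:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object AwaitDemo extends App {
  val f = Future { Thread.sleep(100); 42 }

  // Await.ready blocks until f completes (or the timeout expires) and
  // returns the completed Future itself, still wrapped.
  val completed: Future[Int] = Await.ready(f, 1.second)

  // Await.result also blocks, but unwraps the value (rethrowing failures).
  val value: Int = Await.result(f, 1.second)
  println(value)  // 42
}
```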
At present spark.rpc has only the Akka implementation; if you find Akka's performance lacking, you can implement another RPC framework behind the same interface.

4. RpcEnv: the RPC environment. Every RpcEndpoint must be registered with this object, under a mandatory name, in order to receive messages. RpcEnv handles messages sent from RpcEndpointRefs and from remote nodes (this logic is not visible in the interface) and dispatches them to the appropriate endpoint; exceptions raised while processing a received message are reported through RpcCallContext. RpcEnv looks much like Akka's ActorSystem: all endpoints and refs belong to it, and it has a root address. Every RpcEnv has a method for registering an RpcEndpoint (the name given at registration becomes part of the endpoint's address, see the uriOf method), a method returning the root address, several ways of obtaining an RpcEndpointRef, and methods for stopping endpoints and shutting the environment down. I don't fully understand the specific usage of RpcEnv.deserialize: an RpcEndpointRef can only be deserialized by an RpcEnv, so code that deserializes objects containing RpcEndpointRefs must be wrapped by this method.

5. RpcEnvConfig: the configuration object used to build an RpcEnv. An RpcEnv needs a host, a port and a name, together with a SparkConf and a SecurityManager; host, port and name are roughly composed into an address of the form akka://host:port/name.

6. RpcEndpoint: one endpoint of an inter-process call. Its life cycle is constructor -> onStart -> receive* -> onStop, so when messages arrive the methods are invoked in the order onStart, receive (any number of times), onStop. There are of course other methods as well, which are trigger callbacks.

7. RpcEndpointRef: a reference to a remote RpcEndpoint, through which you can send messages to that endpoint, either synchronously or asynchronously; it maps to an address. A minimal sketch tying these pieces together follows below.
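To make the life cycle and the registration flow concrete, here is a minimal, self-contained sketch. These are toy stand-ins written for illustration, not the actual private[spark] interfaces; RpcEnv, RpcEndpointRef and EchoEndpoint below are simplified or invented shapes:

```scala
import scala.collection.mutable

// Simplified stand-in for the endpoint interface described above.
trait RpcEndpoint {
  def onStart(): Unit = {}                 // life cycle: after construction
  def receive: PartialFunction[Any, Unit]  // called once per incoming message
  def onStop(): Unit = {}                  // life cycle: on shutdown
}

class RpcEndpointRef(endpoint: RpcEndpoint) {
  // Fire-and-forget send; the real ref also supports synchronous ask-style calls.
  def send(message: Any): Unit = endpoint.receive(message)
}

class RpcEnv {
  private val endpoints = mutable.Map.empty[String, RpcEndpoint]

  // Registering an endpoint under a name yields a ref, mirroring the
  // registration method described above; the name becomes part of the address.
  def setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef = {
    endpoints(name) = endpoint
    endpoint.onStart()
    new RpcEndpointRef(endpoint)
  }

  def shutdown(): Unit = endpoints.values.foreach(_.onStop())
}

object EndpointDemo extends App {
  class EchoEndpoint extends RpcEndpoint {
    override def receive: PartialFunction[Any, Unit] = {
      case msg => println(s"echo: $msg")
    }
  }

  val env = new RpcEnv
  val ref = env.setupEndpoint("echo", new EchoEndpoint)
  ref.send("hello")  // receive* : any number of messages
  env.shutdown()
}
```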
Pasted from: http://www.cnblogs.com/gaoxing/p/4805943.html
Learning Spark's RPC module