After introducing org.apache.hadoop.io, we now begin to analyze org.apache.hadoop.rpc. RPC follows a client/server model: the requester is the client and the service provider is the server. When we discuss HDFS, this communication can occur:
between Client and NameNode, where the NameNode is the server;
between Client and DataNode, where the DataNode is the server;
between DataNode and NameNode, where the NameNode is the server;
between DataNode and DataNode, where one DataNode is the server and the other is the client.
If we also consider Hadoop's MapReduce, the communication among these components becomes even more complicated. To handle all of these client/server interactions, Hadoop introduced an RPC framework. The framework leverages Java's reflection capabilities, which avoids the stub and skeleton generation problems of RPC solutions built on an interface definition language (such as CORBA IDL). In exchange, the framework requires that the parameters and return values of a call be Java primitive types, String, classes implementing the Writable interface, or arrays of elements of those types. Furthermore, interface methods may only throw IOException.
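To make these constraints concrete, here is a minimal sketch of a protocol interface. EchoProtocol, its methods, and VERSION_ID are hypothetical, invented only for illustration; the imports assume the org.apache.hadoop.ipc and org.apache.hadoop.io package layout of the released source tree, where RPC protocol interfaces also extend VersionedProtocol so client and server can check version compatibility.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.ipc.VersionedProtocol;

// Hypothetical protocol interface used only for illustration.
// Every parameter and return value is a primitive, a String, a Writable,
// or an array of those, and every method declares only IOException.
public interface EchoProtocol extends VersionedProtocol {
    long VERSION_ID = 1L;

    // primitives and String are permitted parameter/return types
    String echo(String message) throws IOException;

    // Writable implementations such as LongWritable and Text are permitted
    Text describe(LongWritable id) throws IOException;

    // arrays of the permitted element types are also allowed
    long[] stats(int count) throws IOException;
}
```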
Since this is RPC, there are naturally clients and servers, and org.apache.hadoop.rpc accordingly provides a class Client and a class Server. Class Server, however, is an abstract class; class RPC wraps the server side and, using reflection, exposes an ordinary object's methods so that the object becomes the server in an RPC exchange.
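The following sketch shows roughly how class RPC can be used to publish an ordinary object as a server and to obtain a client-side proxy for it. It assumes the hypothetical EchoProtocol above plus a hypothetical implementation class; the getServer/getProxy signatures follow the 0.20-era API under org.apache.hadoop.ipc and differ in other Hadoop versions, and the port 9000 is arbitrary.

```java
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;

public class EchoRpcDemo {

    // Hypothetical implementation object whose methods the server exposes.
    static class EchoProtocolImpl implements EchoProtocol {
        public String echo(String message) { return message; }
        public Text describe(LongWritable id) { return new Text("id " + id.get()); }
        public long[] stats(int count) { return new long[count]; }
        public long getProtocolVersion(String protocol, long clientVersion) {
            return VERSION_ID;
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Server side: RPC.getServer wraps the implementation object;
        // incoming calls are dispatched to its methods via reflection.
        Server server = RPC.getServer(new EchoProtocolImpl(), "0.0.0.0", 9000, conf);
        server.start();

        // Client side: RPC.getProxy hands back a dynamic proxy; invoking a
        // method on the proxy ships the method name and parameters to the
        // server and returns the deserialized result.
        InetSocketAddress addr = new InetSocketAddress("localhost", 9000);
        EchoProtocol proxy = (EchoProtocol) RPC.getProxy(
                EchoProtocol.class, EchoProtocol.VERSION_ID, addr, conf);

        System.out.println(proxy.echo("hello"));  // result computed on the server

        RPC.stopProxy(proxy);
        server.stop();
    }
}
```

The point of the sketch is that the implementation object itself knows nothing about RPC: reflection inside the server and the dynamic proxy on the client are what turn ordinary method calls into network messages.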
Below is the class diagram of org.apache.hadoop.rpc:
[Figure: class diagram of org.apache.hadoop.rpc]