First, the basic concept of RPC

1.1 RPC
RPC stands for Remote Procedure Call.
(1) It allows a program on one computer to invoke a subroutine on another computer without caring about the underlying network communication details, which are transparent to the caller. It is therefore widely used in distributed network communication.
The RPC protocol assumes the existence of some transport protocol, such as TCP or UDP, to carry the message data between communicating programs. In the OSI network model, RPC spans the transport and application layers. RPC makes it easier to develop applications that span multiple programs in a distributed network.
(2) Interaction within Hadoop happens through RPC, for example between NameNode and DataNode, and between JobTracker and TaskTracker.
Therefore, it can be said that the operation of Hadoop is built on the basis of RPC.
1.2 Notable features of RPC
(1) Transparency: invoking a program on a remote machine feels to the user like calling a local method;
(2) High performance: an RPC server can handle multiple requests from clients concurrently;
(3) Controllability: the JDK already ships an RPC framework, RMI, but that framework is too heavyweight and offers little control, so Hadoop implements its own custom RPC framework.
1.3 Basic process of RPC
(1) RPC adopts the client/server (C/S) model;
(2) The client sends a request message with parameters to the server;
(3) After receiving the request, the server calls the corresponding routine according to the parameters, then sends the computed result back to the client;
(4) The client continues its work after receiving the result;
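The request/response cycle above can be sketched with plain TCP sockets. The class and method names below are made up for illustration; a real RPC framework adds serialization, dynamic proxies, and connection management on top of this basic exchange:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical sketch of one RPC request/response cycle over a plain TCP socket.
public class MiniRpcDemo {

    // Server side: waits for one request, calls the matching routine, replies.
    static int serve(ServerSocket serverSocket) throws IOException {
        try (Socket socket = serverSocket.accept();
             DataInputStream in = new DataInputStream(socket.getInputStream());
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            String method = in.readUTF();                  // step (3): request received
            int a = in.readInt();
            int b = in.readInt();
            int result = "add".equals(method) ? a + b : 0; // call the corresponding routine
            out.writeInt(result);                          // step (3): send result back
            out.flush();
            return result;
        }
    }

    // Client side: sends a request message with parameters, waits for the result.
    static int call(String host, int port, int a, int b) throws IOException {
        try (Socket socket = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream());
             DataInputStream in = new DataInputStream(socket.getInputStream())) {
            out.writeUTF("add");                           // step (2): request + parameters
            out.writeInt(a);
            out.writeInt(b);
            out.flush();
            return in.readInt();                           // step (4): resume with the result
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(0);   // step (1): C/S mode
        Thread server = new Thread(() -> {
            try { serve(serverSocket); } catch (IOException ignored) { }
        });
        server.start();
        System.out.println("10+25=" + call("127.0.0.1", serverSocket.getLocalPort(), 10, 25));
        server.join();
        serverSocket.close();
    }
}
```

Running the demo prints `10+25=35`: the client never performs the addition itself, it only ships the parameters across the wire and reads the result back.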
1.4 The RPC mechanism in Hadoop
Like other RPC frameworks, Hadoop RPC is divided into four parts:
(1) Serialization layer: the client and server communicate by transferring information using the serialization classes provided by Hadoop or custom Writable types;
(2) Function call layer: Hadoop RPC implements function calls through dynamic proxies and Java reflection;
(3) Network transport layer: Hadoop RPC uses a socket mechanism based on TCP/IP;
(4) Server-side framework layer: the RPC server uses Java NIO and an event-driven I/O model to improve its concurrent processing capability;
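To make the serialization layer concrete, here is a minimal sketch of the Writable-style contract. The interface is redeclared here for illustration only (Hadoop's real interface is org.apache.hadoop.io.Writable), and the pair type is hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustration of the Writable-style contract used at the serialization layer.
// Hadoop's real interface is org.apache.hadoop.io.Writable.
interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

public class WritableDemo {
    // A hypothetical custom type that knows how to put itself on the wire.
    static class IntPairWritable implements Writable {
        int first, second;

        public void write(DataOutput out) throws IOException {
            out.writeInt(first);
            out.writeInt(second);
        }

        public void readFields(DataInput in) throws IOException {
            first = in.readInt();
            second = in.readInt();
        }
    }

    public static void main(String[] args) throws IOException {
        IntPairWritable sent = new IntPairWritable();
        sent.first = 10;
        sent.second = 25;

        // Serialize to bytes, as if sending an RPC request over the network.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        sent.write(new DataOutputStream(buf));

        // Deserialize on the "other side" of the connection.
        IntPairWritable received = new IntPairWritable();
        received.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(received.first + "+" + received.second);
    }
}
```

Because both sides implement the same write/readFields pair, the RPC layer can move method parameters and return values across the socket without caring what the types actually contain.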
Hadoop RPC is used widely throughout Hadoop; communication between the client, DataNode, and NameNode depends on it. For example, when we operate on HDFS we normally use the FileSystem class, which holds a DFSClient object responsible for dealing with the NameNode. At run time, the DFSClient creates a NameNode proxy locally and operates on that proxy; the proxy forwards the method calls to the NameNode over the network and can also return a value.
1.5 Techniques used in the Hadoop RPC design
(1) Dynamic proxy
About: a dynamic proxy provides access to another object while hiding the details of that actual object; the proxy object hides the real object from the client. The Java development kit includes support for dynamic proxies, but it currently only supports proxying interfaces.
(2) Reflection: dynamic loading of classes
(3) Serialization
(4) Non-blocking asynchronous I/O (NIO)
For the principles of Java NIO, see: http://weixiaolu.iteye.com/blog/1479656
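A minimal sketch of how the first two techniques, dynamic proxies and reflection, work together. All names below are hypothetical; Hadoop's real client proxy additionally serializes the intercepted call and sends it over the network instead of invoking the object locally:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical sketch: a dynamic proxy hides the real object from the caller,
// and reflection loads and invokes the implementation at run time.
public class ProxyReflectionDemo {

    public interface AddProtocol {
        int add(int a, int b);
    }

    // The "remote" implementation, located via reflection below.
    public static class AddImpl implements AddProtocol {
        public int add(int a, int b) {
            return a + b;
        }
    }

    public static AddProtocol getProxy() throws Exception {
        // Reflection: dynamically load and instantiate the implementation class by name.
        Class<?> implClass = Class.forName(ProxyReflectionDemo.class.getName() + "$AddImpl");
        Object impl = implClass.getDeclaredConstructor().newInstance();

        // Dynamic proxy: the caller only ever sees the AddProtocol interface.
        // An RPC framework would ship the intercepted call over the network here.
        return (AddProtocol) Proxy.newProxyInstance(
                AddProtocol.class.getClassLoader(),
                new Class<?>[] { AddProtocol.class },
                (Object proxyObj, Method method, Object[] args) -> method.invoke(impl, args));
    }

    public static void main(String[] args) throws Exception {
        AddProtocol proxy = getProxy();
        System.out.println("10+25=" + proxy.add(10, 25));
    }
}
```

The caller holds only an interface reference; whether the call is served by a local object or relayed to a remote server is entirely up to the invocation handler, which is exactly the transparency RPC aims for.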
Second, how to use Hadoop RPC

2.1 External interfaces provided by Hadoop RPC
Hadoop RPC provides two main external interfaces (see the class org.apache.hadoop.ipc.RPC), respectively:
(1) public static <T> ProtocolProxy<T> getProxy/waitForProxy(...)
Constructs a client proxy object that implements some protocol and is used to send RPC requests to the server.
(2) public static Server RPC.Builder(Configuration).build()
Constructs a server object for a protocol instance (actually a Java interface) that handles requests sent by clients.
2.2 Four steps to using Hadoop RPC
(1) Define RPC protocol
The RPC protocol is the communication interface between client and server; it defines the service interface provided by the server side.
(2) Implementing RPC protocol
The Hadoop RPC protocol is typically a Java interface that users need to implement.
(3) Constructing and starting the RPC server
Construct an RPC server directly using the static class Builder, and call the function start() to launch the server.
(4) Construct the RPC client and send the request
Use the static method getProxy to construct the client proxy object, and invoke the remote methods directly through the proxy object.
Third, an RPC application example

3.1 Defining the RPC protocol
As shown below, we define an IProxyProtocol communication interface that declares an add() method.
public interface IProxyProtocol extends VersionedProtocol {
    static final long VERSION = 23234L; // version number; by default, an RPC client and server with different version numbers cannot communicate
    int add(int number1, int number2);
}
It is important to note that:
(1) All custom RPC interfaces in Hadoop need to extend the VersionedProtocol interface, which describes the protocol's version information.
(2) By default, an RPC client and server with different version numbers cannot communicate with each other, so client and server identify each other by version number.
3.2 Implementing the RPC protocol
The Hadoop RPC protocol is typically a Java interface that users need to implement. A simple implementation of the IProxyProtocol interface is as follows:
public class MyProxy implements IProxyProtocol {
    public int add(int number1, int number2) {
        System.out.println("I have been called!");
        int result = number1 + number2;
        return result;
    }

    public long getProtocolVersion(String protocol, long clientVersion) throws IOException {
        System.out.println("MyProxy.ProtocolVersion=" + IProxyProtocol.VERSION);
        // Note: the version number returned here must match the one supplied by the client
        return IProxyProtocol.VERSION;
    }
}
The add method implemented here is simple: an addition operation. To make the effect visible, it prints "I have been called!" to the console.
3.3 Constructing the RPC server and starting the service
Here you obtain the server object using RPC's static method getServer, as shown in the following code:
public class MyServer {
    public static int PORT = 5432;
    public static String IPAddress = "127.0.0.1";

    public static void main(String[] args) throws Exception {
        MyProxy proxy = new MyProxy();
        final Server server = RPC.getServer(proxy, IPAddress, PORT, new Configuration());
        server.start();
    }
}
The core of this code is the RPC.getServer method. It takes four parameters: the instance object whose methods will be called, the server's IP address, the server's port, and a Configuration object. After obtaining the server object, start the server. The server then listens on the specified port and waits for client requests to arrive.
3.4 Constructing the RPC client and making a request
Here, use the static method getProxy or waitForProxy to construct the client proxy object, and invoke the remote methods directly through the proxy object, as follows:
public class MyClient {
    public static void main(String[] args) {
        InetSocketAddress inetSocketAddress = new InetSocketAddress(MyServer.IPAddress, MyServer.PORT);
        try {
            // Note: the version number passed in must match the server's
            IProxyProtocol proxy = (IProxyProtocol) RPC.waitForProxy(
                    IProxyProtocol.class,
                    IProxyProtocol.VERSION,
                    inetSocketAddress,
                    new Configuration());
            int result = proxy.add(10, 25);
            System.out.println("10+25=" + result);
            RPC.stopProxy(proxy);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The core of the above code is RPC.waitForProxy(). It takes four parameters: the interface class being called, the client's version number, the server's address, and a Configuration object. The returned proxy object is a proxy for the server-side object, implemented internally using java.lang.reflect.Proxy.
After these four steps, we have built a working client-server network model using Hadoop RPC.
3.5 Viewing Run Results
(1) Start the server and start listening for client requests
(2) Start the client and start sending requests to the server
(3) Check the server's console output to see whether it was called
SUMMARY: From the RPC call above, we can see that the business-class methods invoked on the client are defined in the business class's interface, and that interface extends the VersionedProtocol interface.
(4) Now execute the jps command at the command line and view the output.
You can see a Java process named "MyServer", which is the server-side class MyServer of the RPC example we just ran. This also suggests why, after setting up a Hadoop environment, we execute the same command to determine whether the relevant Hadoop processes have started.
SUMMARY: We can therefore conclude that the 5 Java processes started by Hadoop should also be RPC servers.
Below we look at the NameNode source code; as shown, NameNode does indeed create an RPC server.
private void initialize(Configuration conf) throws IOException {
    ......
    // create rpc server
    InetSocketAddress dnSocketAddr = getServiceRpcServerAddress(conf);
    if (dnSocketAddr != null) {
        int serviceHandlerCount = conf.getInt(
                DFSConfigKeys.DFS_NAMENODE_SERVICE_HANDLER_COUNT_KEY,
                DFSConfigKeys.DFS_NAMENODE_SERVICE_HANDLER_COUNT_DEFAULT);
        this.serviceRpcServer = RPC.getServer(this, dnSocketAddr.getHostName(),
                dnSocketAddr.getPort(), serviceHandlerCount,
                false, conf, namesystem.getDelegationTokenSecretManager());
        this.serviceRpcAddress = this.serviceRpcServer.getListenerAddress();
        setRpcServiceServerAddress(conf);
    }
    this.server = RPC.getServer(this, socAddr.getHostName(), socAddr.getPort(),
            handlerCount, false, conf,
            namesystem.getDelegationTokenSecretManager());
    ......
}