Hadoop Technology Insider: HDFS Notes 6, RPC

1.1 Hadoop remote procedure call (RPC)

1. Define the remote interface (it must extend the VersionedProtocol interface)

VersionedProtocol declares a single method, getProtocolVersion. During IPC communication the version numbers of the client and server interfaces are compared, and they must be consistent.

package rpc;

import org.apache.hadoop.ipc.VersionedProtocol;

public interface MyBizable extends VersionedProtocol {

    // Must be present: the protocol version that client and server compare.
    public static final long VERSION = 100L;

    // The method that is actually called remotely.
    public abstract String hello(String name);
}

2. Define the implementation class of the remote object

package rpc;

import java.io.IOException;

public class MyBiz implements MyBizable {

    // Return the version number so the server can check it against the client's.
    @Override
    public long getProtocolVersion(String protocol, long clientVersion) throws IOException {
        return VERSION;
    }

    @Override
    public String hello(String name) {
        System.out.println("I was called");
        return "hello: " + name;
    }
}

3. Build the server

package rpc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RPC.Server;

public class MyServer {

    public static final String SERVER_ADDRESS = "localhost";
    public static final int SERVER_PORT = 12344;

    public static void main(String[] args) throws Exception {
        // Signature: public static Server getServer(final Object instance, final String bindAddress,
        //                                           final int port, Configuration conf)
        /* Construct an RPC server.
         * @param instance    the instance whose methods will be called (the remote object the client invokes)
         * @param bindAddress the address to bind on to listen for connections
         * @param port        the port to listen for connections on
         * @param conf        the configuration
         */
        final Server server = RPC.getServer(new MyBiz(), SERVER_ADDRESS, SERVER_PORT, new Configuration());
        server.start();
    }
}
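As a side note that is not part of the original example: assuming the Hadoop 1.x overload getServer(Object, String, int, int numHandlers, boolean verbose, Configuration), the server can also be given an explicit pool of handler threads to service incoming calls, which is how the real daemons are typically configured. A minimal sketch, as a drop-in replacement for the two lines in main above:

        // Hedged variant, assuming the six-argument Hadoop 1.x getServer overload.
        final Server server = RPC.getServer(new MyBiz(), SERVER_ADDRESS, SERVER_PORT,
                5,                      // number of handler threads servicing calls
                true,                   // verbose: log each incoming call
                new Configuration());
        server.start();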

4. Client implementation

package rpc;

import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;

public class MyClient {

    public static void main(String[] args) throws Exception {
        /* Construct a client-side proxy object that implements the named protocol,
         * talking to a server at the named address. */
        final MyBizable proxy = (MyBizable) RPC.waitForProxy(
                MyBizable.class,
                MyBizable.VERSION,
                new InetSocketAddress(MyServer.SERVER_ADDRESS, MyServer.SERVER_PORT),
                new Configuration());

        // Normal test: this call is executed remotely on the server.
        String result = proxy.hello("hello");
        System.out.println(result);

        RPC.stopProxy(proxy);
    }
}
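To illustrate the version check from step 1, here is a sketch that is not in the original notes, under the assumption that Hadoop 1.x refuses the proxy with an org.apache.hadoop.ipc.RPC.VersionMismatch when the client's version constant disagrees with what the server's getProtocolVersion returns:

        // Hypothetical illustration: deliberately request a version the server does not report.
        try {
            MyBizable badProxy = (MyBizable) RPC.waitForProxy(
                    MyBizable.class,
                    MyBizable.VERSION + 1,   // does not match MyBiz.getProtocolVersion()
                    new InetSocketAddress(MyServer.SERVER_ADDRESS, MyServer.SERVER_PORT),
                    new Configuration());
            RPC.stopProxy(badProxy);
        } catch (RPC.VersionMismatch e) {
            System.err.println("Client and server protocol versions differ: " + e.getMessage());
        }

With matching versions, running MyServer first and then MyClient should print "I was called" on the server console and "hello: hello" on the client.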

 

Principle: (not covered here for now; review it when needed.)

Summary: Hadoop's NameNode, SecondaryNameNode, DataNode, JobTracker, and other daemon processes all implement remote call interfaces. In other words, each of them runs an RPC server waiting for clients to call it, and they also act as clients when calling one another; they play both roles.

Starting Hadoop is therefore a matter of starting these RPC servers.
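For illustration only, a minimal sketch under the assumption of the Hadoop 1.x APIs (org.apache.hadoop.hdfs.protocol.ClientProtocol with its versionID constant, and the four-argument RPC.getProxy) and of a NameNode listening on localhost:9000: the same proxy mechanism used by MyClient above can talk to the NameNode's RPC server directly. The class name, port, and chosen method are assumptions for the sketch, not taken from the original notes.

package rpc;

import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.ipc.RPC;

public class NameNodeRpcDemo {   // hypothetical demo class

    public static void main(String[] args) throws Exception {
        // Address of a running NameNode; 9000 is only an assumed fs.default.name port.
        InetSocketAddress nnAddr = new InetSocketAddress("localhost", 9000);

        // Obtain a proxy for the NameNode's ClientProtocol, exactly as MyClient did for MyBizable.
        ClientProtocol namenode = (ClientProtocol) RPC.getProxy(
                ClientProtocol.class, ClientProtocol.versionID, nnAddr, new Configuration());

        // Every ClientProtocol method is now a remote call served by the NameNode process,
        // e.g. asking for the status of the HDFS root directory.
        System.out.println(namenode.getFileInfo("/"));

        RPC.stopProxy(namenode);
    }
}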
