Connecting to HDFS HA with the Java API
The code is as follows:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHaDemo {

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Logical nameservice URI; the client resolves the active NameNode itself
        conf.set("fs.defaultFS", "hdfs://hadoop2cluster");
        conf.set("dfs.nameservices", "hadoop2cluster");
        conf.set("dfs.ha.namenodes.hadoop2cluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn1", "10.0.1.165:8020");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn2", "10.0.1.166:8020");
        // Proxy provider that fails over between nn1 and nn2
        conf.set("dfs.client.failover.proxy.provider.hadoop2cluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        FileSystem fs = null;
        try {
            fs = FileSystem.get(conf);
            // List the contents of the root directory as a connectivity check
            FileStatus[] list = fs.listStatus(new Path("/"));
            for (FileStatus file : list) {
                System.out.println(file.getPath().getName());
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (fs != null) {
                try {
                    fs.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
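Because every HA setting is supplied on the Configuration object, the same client setup works for any FileSystem operation, not just listing the root directory. The following is a minimal sketch, not part of the original: the buildHaConf helper and the /tmp/demo.txt path are illustrative. It simply writes a small file through the failover-aware client:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHaWriteDemo {

    // Hypothetical helper bundling the same HA settings shown above
    static Configuration buildHaConf() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://hadoop2cluster");
        conf.set("dfs.nameservices", "hadoop2cluster");
        conf.set("dfs.ha.namenodes.hadoop2cluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn1", "10.0.1.165:8020");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn2", "10.0.1.166:8020");
        conf.set("dfs.client.failover.proxy.provider.hadoop2cluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        return conf;
    }

    public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(buildHaConf());
        try (FSDataOutputStream out = fs.create(new Path("/tmp/demo.txt"))) {
            // The write goes to whichever NameNode is currently active
            out.writeUTF("hello from the HA client");
        } finally {
            fs.close();
        }
    }
}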
Invoking a MapReduce Program with the Java API
The code is as follows:
import org.apache.hadoop.util.RunJar;

// Builds the argument list for RunJar, which unpacks the examples jar and
// invokes its main class (here: the bundled wordcount example).
public static void main(String[] cmdArgs) throws Throwable {
    String[] args = new String[24];
    // Jar to run and the example program name inside it
    args[0] = "/usr/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar";
    args[1] = "wordcount";
    // YARN ResourceManager addresses
    args[2] = "-D";
    args[3] = "yarn.resourcemanager.address=10.0.1.165:8032";
    args[4] = "-D";
    args[5] = "yarn.resourcemanager.scheduler.address=10.0.1.165:8030";
    // HDFS HA client settings, mirroring the configuration used above
    args[6] = "-D";
    args[7] = "fs.defaultFS=hdfs://hadoop2cluster/";
    args[8] = "-D";
    args[9] = "dfs.nameservices=hadoop2cluster";
    args[10] = "-D";
    args[11] = "dfs.ha.namenodes.hadoop2cluster=nn1,nn2";
    args[12] = "-D";
    args[13] = "dfs.namenode.rpc-address.hadoop2cluster.nn1=10.0.1.165:8020";
    args[14] = "-D";
    args[15] = "dfs.namenode.rpc-address.hadoop2cluster.nn2=10.0.1.166:8020";
    args[16] = "-D";
    args[17] = "dfs.client.failover.proxy.provider.hadoop2cluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider";
    args[18] = "-D";
    args[19] = "fs.hdfs.impl=org.apache.hadoop.hdfs.DistributedFileSystem";
    // Run on YARN rather than the local job runner
    args[20] = "-D";
    args[21] = "mapreduce.framework.name=yarn";
    // Input and output paths for wordcount
    args[22] = "/input";
    args[23] = "/out01";
    RunJar.main(args);
}
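The same job can also be submitted without RunJar by using the MapReduce Job API directly. The sketch below is an alternative approach, not the original author's method: the WordCountSubmit class name is invented here for illustration, while TokenizerMapper and IntSumReducer are the classes shipped inside the examples jar's WordCount program:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.examples.WordCount;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative class name, not from the original
public class WordCountSubmit {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same YARN and HDFS HA settings as the -D options above
        conf.set("yarn.resourcemanager.address", "10.0.1.165:8032");
        conf.set("yarn.resourcemanager.scheduler.address", "10.0.1.165:8030");
        conf.set("fs.defaultFS", "hdfs://hadoop2cluster/");
        conf.set("dfs.nameservices", "hadoop2cluster");
        conf.set("dfs.ha.namenodes.hadoop2cluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn1", "10.0.1.165:8020");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn2", "10.0.1.166:8020");
        conf.set("dfs.client.failover.proxy.provider.hadoop2cluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        conf.set("mapreduce.framework.name", "yarn");

        Job job = Job.getInstance(conf, "wordcount");
        // Reuse the mapper/reducer from the bundled WordCount example;
        // setJarByClass tells YARN which jar to ship to the cluster
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/input"));
        FileOutputFormat.setOutputPath(job, new Path("/out01"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

This avoids the string-indexed argument array, and job failures surface as ordinary exceptions or a nonzero exit status instead of being buried in RunJar's output.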