When connecting to a Hadoop cluster through the Java API, if the cluster runs in HA mode, the client can be configured to locate the active NameNode automatically, as shown below. Here, clustername (the logical nameservice name) can be chosen freely and is independent of the cluster's own configuration. The NameNode IDs listed under dfs.ha.namenodes.<clustername> can likewise be named arbitrarily: list one ID per NameNode, then add a matching dfs.namenode.rpc-address entry for each.
private static String clustername = "nsstargate";
private static final String HADOOP_URL = "hdfs://" + clustername;
public static Configuration conf;
static {
    conf = new Configuration();
    conf.set("fs.defaultFS", HADOOP_URL);
    conf.set("dfs.nameservices", clustername);
    conf.set("dfs.ha.namenodes." + clustername, "nn1,nn2");
    conf.set("dfs.namenode.rpc-address." + clustername + ".nn1", "172.16.50.24:8020");
    conf.set("dfs.namenode.rpc-address." + clustername + ".nn2", "172.16.50.21:8020");
    // conf.setBoolean(name, value);
    conf.set("dfs.client.failover.proxy.provider." + clustername,
            "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
}
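Instead of hard-coding these settings, the same properties can be placed in an hdfs-site.xml on the client's classpath, where Configuration picks them up automatically. A sketch using the same nameservice and addresses as above (the fs.defaultFS setting would normally go in core-site.xml):

```xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>nsstargate</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.nsstargate</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nsstargate.nn1</name>
    <value>172.16.50.24:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nsstargate.nn2</name>
    <value>172.16.50.21:8020</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.nsstargate</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```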
The code for uploading a file to HDFS is shown below; for other operations, such as reading, you can refer to other articles online.
/**
 * Upload a file to HDFS
 */
private static void uploadToHdfs() throws IOException {
    String localSrc = "E:\\test\\article01.txt";
    String dst = "/user/test/article04.txt";
    FileSystem fs = FileSystem.get(URI.create(HADOOP_URL), conf);
    long start = new Date().getTime();
    /*
    InputStream in = new FileInputStream(localSrc);
    InputStreamReader isr = new InputStreamReader(in, "GBK");
    OutputStream out = fs.create(new Path(HADOOP_URL + dst), true);
    IOUtils.copy(isr, out, "UTF8");
    */
    // This method is faster
    FSDataOutputStream outputStream = fs.create(new Path(dst));
    String fileContent = FileUtils.readFileToString(new File(localSrc), "GBK");
    outputStream.write(fileContent.getBytes());
    outputStream.close();
    long end = new Date().getTime();
    System.out.println("Used: " + (end - start));
}
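For completeness, a read can follow the same pattern. The sketch below assumes the HA-aware conf and HADOOP_URL defined earlier and a running cluster; the path matches the upload example above, and the method name readFromHdfs is just an illustrative choice.

```java
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsReadExample {
    private static void readFromHdfs() throws IOException {
        // Reuses the HA-aware conf built in the static block above;
        // the client resolves the active NameNode automatically.
        FileSystem fs = FileSystem.get(URI.create(HADOOP_URL), conf);
        FSDataInputStream in = fs.open(new Path("/user/test/article04.txt"));
        try {
            // Copy the stream to stdout; 4096 is the buffer size and
            // false leaves closing the stream to the finally block.
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
```

Because the client only knows the logical nameservice name, a failover between nn1 and nn2 is transparent to both this read and the upload above.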
Java API operation for Hadoop under HA mode