When accessing HDFS in an HA setup, the client needs to specify the nameservice (NS) name, the NameNode addresses, the ConfiguredFailoverProxyProvider, and related settings.
code example:
package cn.itcast.hadoop.hdfs;

import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HDFS_HA {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Logical nameservice instead of a single NameNode host
        conf.set("fs.defaultFS", "hdfs://ns1");
        conf.set("dfs.nameservices", "ns1");
        // The two NameNodes that back the ns1 nameservice
        conf.set("dfs.ha.namenodes.ns1", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.ns1.nn1", "itcast01:9000");
        conf.set("dfs.namenode.rpc-address.ns1.nn2", "itcast02:9000");
        // Proxy provider that handles failover between nn1 and nn2
        conf.set("dfs.client.failover.proxy.provider.ns1",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        FileSystem fs = FileSystem.get(new URI("hdfs://ns1"), conf, "hadoop");
        InputStream in = new FileInputStream("C://eclipse.rar");
        OutputStream out = fs.create(new Path("/eclipse"));
        // Copy the local file into HDFS and close both streams
        IOUtils.copyBytes(in, out, 4096, true);
    }
}
The above shows how to access HDFS through the Java API client in a Hadoop HA scenario.
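Instead of hardcoding these properties in Java, the same HA settings are more commonly placed in the client's hdfs-site.xml (and fs.defaultFS in core-site.xml) on the classpath, so that a plain new Configuration() picks them up. A minimal sketch, reusing the ns1 nameservice and the itcast01/itcast02 hosts from the example above:

```xml
<!-- hdfs-site.xml on the client classpath; values match the Java example above -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>itcast01:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>itcast02:9000</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
</configuration>
```

With this file in place, the Java client shrinks to FileSystem.get(new URI("hdfs://ns1"), new Configuration(), "hadoop"), and the same configuration is shared by all HDFS tools on that machine.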