1. Use java.net.URL directly. No extra conversion of the stream is needed. This is the most important approach, but it has a defect: URL.setURLStreamHandlerFactory can be called at most once per JVM, so it cannot be used if another part of the application has already registered a factory.
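The defect can be demonstrated without Hadoop at all: the JDK rejects a second call to URL.setURLStreamHandlerFactory. A minimal JDK-only sketch (the class name FactoryOnce and the no-op factory are illustrative, not part of the original code):

```java
import java.net.URL;
import java.net.URLStreamHandlerFactory;

public class FactoryOnce {
    public static void main(String[] args) {
        // A factory that handles no protocols; it only exists to occupy the slot.
        URLStreamHandlerFactory noop = protocol -> null;
        URL.setURLStreamHandlerFactory(noop); // first call succeeds
        try {
            URL.setURLStreamHandlerFactory(noop); // second call is rejected by the JVM
        } catch (Error e) {
            System.out.println("second call failed: " + e.getClass().getSimpleName());
        }
    }
}
```

This is why registering FsUrlStreamHandlerFactory in a static block is risky inside a larger application: if any other library has already set a factory, the registration throws an Error.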
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import org.apache.commons.io.IOUtils;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

public class URLCat {
    // Register Hadoop's stream handler factory so that java.net.URL can
    // recognize hdfs:// URLs; this may run at most once per JVM.
    static {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws MalformedURLException, IOException {
        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            IOUtils.copy(in, System.out);
        } finally {
            IOUtils.closeQuietly(in);
        }
    }
}

2. Use the FileSystem API. The FileSystem API provided by Hadoop reads HDFS (or local) files directly:
// cc FileSystemCat: displays files from a Hadoop filesystem on standard output by using the FileSystem directly
import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// vv FileSystemCat
public class FileSystemCat {
    public static void main(String[] args) throws Exception {
        String uri = args[0];
        Configuration conf = new Configuration();
        // Passing the URI selects the matching filesystem, e.g. hdfs://localhost:9000.
        // The no-argument FileSystem.get(conf) would fall back to the default
        // filesystem (file:///), which is not what we want here.
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        System.out.println("URI.create(uri) = " + URI.create(uri));
        // You can see that FileSystem.get extracts the URI of the whole filesystem
        System.out.println(fs.getUri().toString());
        InputStream in = null;
        try {
            in = fs.open(new Path(args[0])); // an hdfs:// URI can be read directly
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in); // close the stream
        }
    }
}
// ^^ FileSystemCat
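The code above prints both URI.create(uri) and fs.getUri() to show that FileSystem.get uses only the scheme and authority of the URI to pick a filesystem, while the path part names the file inside it. A JDK-only sketch of that split (the hdfs:// path here is a hypothetical example, not from the original listing):

```java
import java.net.URI;

public class UriParts {
    public static void main(String[] args) {
        // Hypothetical HDFS file URI; only java.net.URI parsing is exercised here
        URI uri = URI.create("hdfs://localhost:9000/user/tom/quangle.txt");
        System.out.println(uri.getScheme());    // hdfs          -> selects the FileSystem type
        System.out.println(uri.getAuthority()); // localhost:9000 -> selects the namenode
        System.out.println(uri.getPath());      // /user/tom/quangle.txt -> the file to open
    }
}
```

Scheme plus authority is what fs.getUri() would report back; the path is what gets wrapped in new Path(...) and passed to fs.open.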