Hadoop version: 1.2.1
JDK version: 1.7.0
Example 3-1: Using a URLStreamHandler instance to display files from the Hadoop file system on standard output
First create an input directory in HDFS:

hadoop fs -mkdir input

Then create two files, file1 and file2, where file1 contains "hello world" and file2 contains "hello hadoop", and upload them to the input directory. For the detailed steps, see the preparation work in section 2.1 of "Hadoop cluster (Phase 1): WordCount running details".
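The preparation step above can be sketched as the following shell commands. The local file creation runs anywhere; the upload lines require a running Hadoop 1.x cluster, so they are shown commented out:

```shell
# create the two sample files locally
echo "hello world" > file1
echo "hello hadoop" > file2

# upload them to the input directory in HDFS
# (requires a running Hadoop cluster; uncomment to use)
# hadoop fs -put file1 input/
# hadoop fs -put file2 input/
```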
The complete code is as follows:
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

import java.io.InputStream;
import java.net.URL;

public class URLCat {
    // Register Hadoop's URL stream handler factory so that java.net.URL
    // understands hdfs:// URIs. This can be called at most once per JVM,
    // hence the static initializer.
    static {
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
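For readers curious about the call IOUtils.copyBytes(in, System.out, 4096, false): it is essentially a buffered copy loop, where 4096 is the buffer size and false means the streams are not closed when the copy finishes (we close them ourselves in the finally block). A rough pure-Java sketch of that behavior, with class and method names of my own choosing rather than Hadoop's:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Copy everything from in to out using a fixed-size buffer,
    // leaving both streams open (like copyBytes with close=false).
    public static void copy(InputStream in, OutputStream out, int bufferSize)
            throws IOException {
        byte[] buffer = new byte[bufferSize];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream("hello world".getBytes()), out, 4096);
        System.out.println(out.toString()); // prints "hello world"
    }
}
```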
Compile the source into a class file and package it into a jar (see "Compiling and running the Hadoop example WordCount from the command line" for the details of this step).
Then run it with the following command:

hadoop jar URLCat.jar URLCat hdfs://localhost:9000/user/hadoop/input/file1
hdfs://localhost:9000 is the URI of the HDFS file system; it is configured in conf/core-site.xml.
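In Hadoop 1.x the default file system URI is set by the fs.default.name property in conf/core-site.xml. A typical single-node configuration looks like the fragment below (the value must match the URI used on the command line):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```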
Running result
[email protected] ~/hadoop-1.2.1/classes $ hadoop jar URLCat.jar URLCat hdfs://localhost:9000/user/hadoop/input/file1
hello world
Example 3-2: Using the FileSystem API directly to display files from the Hadoop file system on standard output
The complete code is as follows:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import java.io.InputStream;
import java.net.URI;

public class FileSystemCat {
    public static void main(String[] args) throws Exception {
        String uri = args[0];
        Configuration conf = new Configuration();
        // Obtain a FileSystem instance for the scheme in the URI (hdfs here)
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        InputStream in = null;
        try {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
Compile, package, and run it in the same way:
hadoop jar FileSystemCat.jar FileSystemCat hdfs://localhost:9000/user/hadoop/input/file2
Running result:

[email protected] ~/hadoop-1.2.1/classes $ hadoop jar FileSystemCat.jar FileSystemCat hdfs://localhost:9000/user/hadoop/input/file2
hello hadoop
This article is released under the Creative Commons Attribution-NonCommercial 3.0 License. You are welcome to repost and build on it, but you must keep the author's name, Lin Yufei. If you have questions, please send me an email.