* File operations
* List the files in a directory
* $ hadoop dfs -ls /user/cl
*
* Create a directory
* $ hadoop dfs -mkdir /user/cl/temp
*
* Delete a file
* $ hadoop dfs -rm /user/cl/temp/a.txt
*
* Delete a directory and all files in it
* $ hadoop dfs -rmr /user/cl/temp
*
* Upload a file
* Upload the local file /home/cl/local.txt to the /user/cl/temp directory in HDFS
* $ hadoop dfs -put /home/cl/local.txt /user/cl/temp
*
* Download a file
* Download hdfs.txt from the /user/cl/temp directory in HDFS to the local directory /home/cl
* $ hadoop dfs -get /user/cl/temp/hdfs.txt /home/cl
*
* View a file's contents (the path is an HDFS path, not a local one)
* $ hadoop dfs -cat /user/cl/temp/hdfs.txt
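The file operations above are easy to script. A minimal sketch (the helper function `build_dfs_cmd` is hypothetical; the paths are the same examples used above) that assembles a `hadoop dfs` command line so a script can log or review it before executing:

```shell
# Sketch of a dry-run helper for the file operations above.
# It only builds the command string; the caller decides whether to run it.
build_dfs_cmd() {
    # $1 = dfs subcommand (e.g. -put); remaining args = paths
    sub="$1"; shift
    printf 'hadoop dfs %s %s' "$sub" "$*"
}

build_dfs_cmd -put /home/cl/local.txt /user/cl/temp
# prints: hadoop dfs -put /home/cl/local.txt /user/cl/temp
```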
*
* Job operations
* Submit a MapReduce job; every Hadoop MapReduce job is packaged as a jar
* $ hadoop jar <local-jar-file> <main-class> [args...]
* $ hadoop jar sandbox-mapred-0.0.20.jar sandbox.mapred.WordCountJob /user/cl/input.dat /user/cl/outputdir
*
* Kill a running job
* Suppose the job id is job_201207121738_0001
* $ hadoop job -kill job_201207121738_0001
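To kill a job from a script you first need its id. A sketch, assuming the classic JobClient submission log format (`... Running job: job_...`); the sample log line below is illustrative, not captured from a real run:

```shell
# Sketch: pull the job id out of the submission log so it can be passed
# to "hadoop job -kill". The log line format is assumed (old JobClient style).
extract_job_id() {
    sed -n 's/.*Running job: \(job_[0-9_]*\).*/\1/p'
}

sample='12/07/12 17:38:00 INFO mapred.JobClient: Running job: job_201207121738_0001'
job_id=$(printf '%s\n' "$sample" | extract_job_id)
echo "hadoop job -kill $job_id"
# prints: hadoop job -kill job_201207121738_0001
```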