hadoop fs: the widest scope; it can operate on any file system. hadoop dfs and hdfs dfs: can only operate on HDFS-related file systems (including interactions with the local FS); the former is deprecated, so the latter is generally used. The following reference is from StackOverflow. Following are the three commands which
FS Shell
File system (FS) shell commands are invoked in the form bin/hadoop fs <args>.
cat
Usage: hadoop fs -cat URI [URI ...]
Outputs the content of the file at the specified path to stdout.
Example:
FS Shell
The file system (FS) shell is invoked in the form bin/hadoop fs <args>. All FS shell commands take URI paths as arguments. The URI format is scheme://authority/path. For the HDFS file system, scheme is hdfs; for the local file system, scheme is file. The scheme and authority parameters are optional: if they are not specified, the default scheme given in the configuration is used. An HDFS file or directory such as /parent/child can be written as hdfs://namenode:namenodeport/parent/child, or more simply as /parent/child (assuming the default va
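A quick illustration using the paths above (namenode:namenodeport is the placeholder host/port from the text; the ls subcommand and the local /tmp path are chosen only for the example): the first two invocations refer to the same HDFS directory, while the file scheme reaches the local file system.
hadoop fs -ls hdfs://namenode:namenodeport/parent/child
hadoop fs -ls /parent/child
hadoop fs -ls file:///tmp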
hadoop fs: the widest scope; it can operate on any file system.
hadoop dfs and hdfs dfs: can only operate on HDFS-related file systems (including interactions with the local FS); the former is deprecated, so the latter is generally used.
The following is quoted from StackOverflow.
Following are the three
http://blog.csdn.net/pipisorry/article/details/51340838
The difference between 'hadoop dfs' and 'hadoop fs'
While exploring HDFS, I came across these two syntaxes for querying HDFS:
> hadoop dfs
> hadoop fs
Why do we have two different syntaxes for a common purpose? Why are there
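As a rough side-by-side sketch (it assumes a configured cluster and uses /user/hadoop, a path that appears later on this page, purely for illustration):
hadoop fs -ls /user/hadoop     # works against whatever file system the URI or configuration points to
hdfs dfs -ls /user/hadoop      # HDFS-specific entry point; the preferred replacement for hadoop dfs
hadoop dfs -ls /user/hadoop    # deprecated; newer releases print a warning and delegate to hdfs dfs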
FS Shell
Usage: bin/hadoop fs <args>
cat
Usage:
hadoop fs -cat URI [URI …]
Outputs the content of the file at the specified path to stdout.
Example:
hadoop fs -cat hdfs://host1:port1/file1 hdfs
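A hedged, complete variant of the example (host1:port1 and host2:port2 are placeholders, and the second line, which mixes a local file:// path with a default-FS path, is an illustrative assumption):
hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/file2
hadoop fs -cat file:///etc/hosts /user/hadoop/file1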
Hadoop uses HDFS to store HBase's data, and we can check the size of the HDFS data with the following commands: hadoop fsck, hadoop fs -dus, hadoop fs -count -q.
These commands may run into permission problems on HDFS; you can run the abov
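A minimal sketch of the three checks (the /hbase path is an assumption, and the last line assumes the permission issue can be worked around by running as the hdfs superuser):
hadoop fsck /hbase
hadoop fs -dus /hbase                      # older alias of hadoop fs -du -s
hadoop fs -count -q /hbase
sudo -u hdfs hadoop fs -count -q /hbase    # rerun as the hdfs user if permission is denied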
Suppose you have a/user/hadoop/output directory on your HDFS cluster
It contains the result of a job execution, made up of multiple files: part-000000, part-000001, part-000002.
If you then want to merge all of these files into one, you can use the command: hadoop fs -getmerge /user/hadoop/output local_file
Then you can u
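A short sketch of the merge plus a local sanity check (the wc -l check is only an illustration):
hadoop fs -getmerge /user/hadoop/output local_file
wc -l local_file    # count the lines of the merged local copy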
org.apache.hadoop.fs - Seekable, org.apache.commons
I meant to read BufferedFSInputStream first, but it implements the Seekable and PositionedReadable interfaces, so let's look at those two interfaces first; the rest will then be easier to understand.
package org.apache.hadoop.fs;

import java.io.*;

/**
Recently, the data stored in HDFS ended up in the wrong format because it contains \r\n characters that were not handled during processing. About a year of historical data is affected, and the wrong or duplicate records need to be deleted so that only the correct data is kept. The project uses Pig for data processing, so I wrote a UDF Java class to filter the bad data, saving the bad records and the correct records separately, and then wrote the schema and number of the following script stat
Running hadoop fs -ls displays the local directory.
Cause: the default HDFS path is not specified in the Hadoop configuration file.
Solution: there are two ways:
1. Access HDFS with its full path, e.g. hadoop fs -ls hdfs://192.168.1.1:9000/
2. Modify the c
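A quick check of which default file system is in effect, plus the full-URI workaround from option 1 (the property name fs.defaultFS assumes Hadoop 2 or later; older releases call it fs.default.name):
hdfs getconf -confKey fs.defaultFS        # show the configured default file system
hadoop fs -ls hdfs://192.168.1.1:9000/    # option 1: address HDFS with the full URI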
There was a problem with hadoop fs -ls after a forced shutdown. The cause was the item highlighted in the red box below. I thought downloading a conf.cloudera.yarn file from another node would solve the problem, but it didn't, so I deleted it and copied the file over from another node with scp.
Workaround:
scp -r /etc/hadoop/conf.cloudera.yarn [email protected]:/et
Recently we needed to add an alarm for HDFS space usage and file-node usage: when the quota is exceeded, an alarm notification needs to be sent so we can prepare in advance.
[sunwg]$ hadoop fs -count /sunwg
2 1 108 hdfs://sunwg:9000/sunwg
The first value, 2, is the number of folders under /sunwg;
the second value, 1, is the number of files in the current folder;
the third value, 108, is the total size, in bytes, of the files under the folder.
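For the quota alarm, hadoop fs -count -q prepends the quota columns to the same counts; a sketch against the same path (the column list reflects the command's standard output format):
hadoop fs -count -q /sunwg
# columns: QUOTA REMAINING_QUOTA SPACE_QUOTA REMAINING_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME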
To modify the delimiter of a file in bulk, you can use awk's FS and OFS variables.
FS: field separator, the input field delimiter.
OFS: output field separator, the output field delimiter.
Suppose there is a file file1.txt; as its content shows, the delimiter in file1 is quite long and consists of more than one space character, so we first need to unify the delimiter. Enter the command:
awk -F " " '{if ($1~/^16/) print $1,$2,$3,$4}' file1.txt > file2.txt
Genera
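A hedged follow-up sketch: to unify the output delimiter explicitly, OFS can be set (choosing a tab here is an assumption; the fields and the ^16 filter come from the command above):
awk 'BEGIN { OFS="\t" } $1~/^16/ { print $1,$2,$3,$4 }' file1.txt > file2.txt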
Hadoop Shell commands
Usage: bin/hadoop fs <args>
1. cat
Description: outputs the content of the specified file in the path to stdout.
Usage: hadoop fs -cat URI [URI ...]
Example:
hadoop fs -cat hdfs://host1:port1/file1 hdfs://host2:port2/fi
[jobMainClass] [jobArgs]
Killing a running job
hadoop job -kill job_20100531_37_0053
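A hedged note: the job id can be found by listing running jobs first (on newer Hadoop versions the same subcommands are also exposed as mapred job):
hadoop job -list                         # list running jobs and their ids
hadoop job -kill job_20100531_37_0053    # kill by id, as in the line above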
More Hadoop commands
Running hadoop with no arguments prints the descriptions of more commands:
namenode -format     format the DFS filesystem
secondarynamenode    run the DFS secondary namenode
namenode             run the DFS namenode
da