What are Hadoop Archives?
A Hadoop Archive is a special file format. A Hadoop archive maps to a file system directory, and always has the extension *.har. A Hadoop archive contains metadata (in the form of _index and _masterindex files) and data (part-*) files. The _index file records the names and locations of the files inside the archive.
How do I create an archive?
Usage: hadoop archive -archiveName name <src>* <dest>
The -archiveName option specifies the name of the archive you want to create, for example foo.har; the name must end in *.har. The inputs are file system pathnames, written the same way as usual. The created archive is saved to the destination directory. Note that creating an archive is a map/reduce job, so you should run this command on a map reduce cluster. Here is an example:
hadoop archive -archiveName foo.har /user/hadoop/dir1 /user/hadoop/dir2 /user/zoo/
In the above example, /user/hadoop/dir1 and /user/hadoop/dir2 are archived into the file system directory /user/zoo/foo.har. Creating an archive does not change or delete the source files.
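The example above can be put into a short script. The guard around the hadoop calls is an assumption added here so the sketch is a no-op on machines without a cluster; the paths are the example's own:

```shell
#!/bin/sh
# Sketch: archive two directories into /user/zoo/foo.har.
# Assumes a running Hadoop map/reduce cluster with "hadoop" on PATH;
# the guard below makes this a harmless no-op otherwise.
SRC1=/user/hadoop/dir1
SRC2=/user/hadoop/dir2
DEST=/user/zoo/
if command -v hadoop >/dev/null 2>&1; then
    # Creating the archive runs as a map/reduce job; sources stay unchanged.
    hadoop archive -archiveName foo.har "$SRC1" "$SRC2" "$DEST"
    # The result is an ordinary directory on the target file system;
    # listing it shows the _index, _masterindex and part-* files.
    hadoop dfs -ls "${DEST}foo.har"
fi
```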
How do I view the files in an archive?
An archive is exposed to the outside world as a file system layer, so all FS shell commands work on archives, just with a different URI. Note also that archives are immutable, so rename, delete and create operations return an error. The URI of a Hadoop archive is
har://scheme-hostname:port/archivepath/fileinarchive
If scheme-hostname is not provided, the default file system is used. In that case the URI takes the form
har:///archivepath/fileinarchive
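The URI translation can be sketched as a tiny helper. Note that har_uri is a hypothetical function invented for illustration, not part of Hadoop; it only builds the default-file-system form of the URI from the archive's path and a path inside it:

```shell
#!/bin/sh
# Hypothetical helper (not part of Hadoop): print the har:/// URI for a
# file inside an archive on the default file system.
har_uri() {
    archive_path=$1      # e.g. /user/hadoop/foo.har (leading slash expected)
    file_in_archive=$2   # e.g. dir/filea
    printf 'har://%s/%s\n' "$archive_path" "$file_in_archive"
}

har_uri /user/hadoop/foo.har dir/filea   # -> har:///user/hadoop/foo.har/dir/filea
```

The resulting URI can be passed to any FS shell command, e.g. hadoop dfs -cat.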
Here is an example of an archive. The archive input is /dir, a directory containing two files, filea and fileb. The command to archive /dir into /user/hadoop/foo.har is
hadoop archive -archiveName foo.har /dir /user/hadoop
To get the list of files in the created archive, use the command
hadoop dfs -lsr har:///user/hadoop/foo.har
To view the file filea inside the archive:
hadoop dfs -cat har:///user/hadoop/foo.har/dir/filea
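The two lookup commands can be combined into one sketch. It assumes the example archive /user/hadoop/foo.har exists and that a hadoop client is on PATH; the guard makes it a no-op otherwise:

```shell
#!/bin/sh
# Sketch: list and read files from the example archive.
# Assumes /user/hadoop/foo.har (created above) and a reachable cluster.
ARCHIVE=har:///user/hadoop/foo.har
if command -v hadoop >/dev/null 2>&1; then
    # Recursive listing of the archive's contents (dir/filea, dir/fileb):
    hadoop dfs -lsr "$ARCHIVE"
    # Read a single file through the har:/// layer:
    hadoop dfs -cat "$ARCHIVE/dir/filea"
fi
```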