Hadoop issues and workarounds for handling large numbers of small files

Small files are files significantly smaller than the HDFS block size (64 MB by default). If you store small files in HDFS, you will usually have a great many of them (otherwise you would not be using Hadoop in the first place), and the problem is that HDFS cannot handle large numbers of small files efficiently.

Every file, directory, and block in HDFS is represented as an object in the namenode's memory, and each of these objects occupies on the order of 150 bytes. So if there are 10 million files, each of which occupies one block, the namenode needs roughly 3 GB of memory just to hold that metadata. Scale up much further and you run into the limits of what current hardware can provide.
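
As a rough back-of-the-envelope check of that figure: 10,000,000 files, each in its own block, means about 10,000,000 file objects plus 10,000,000 block objects, and 20,000,000 x 150 bytes is about 3 GB of namenode heap, before directories are even counted.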

Beyond that, HDFS was simply not designed to handle large numbers of small files efficiently. It is built primarily for streaming access to large files; reading many small files means a large number of seeks and a lot of hopping from datanode to datanode to fetch each file, which is a very inefficient access pattern.


Problems with a large number of small files in MapReduce

Map tasks usually process one block of input at a time (with the default FileInputFormat). If the files are very small and there are a lot of them, each map task processes only a tiny amount of input, and the job spawns a huge number of map tasks, each of which carries some bookkeeping overhead. Compare a 1 GB file split into sixteen 64 MB blocks with 10,000 or so 100 KB files: the latter needs one map task per small file, and the job can end up tens or even hundreds of times slower than the former.

Hadoop has a couple of features that help mitigate this problem. One is JVM reuse for tasks: several map tasks can run in one JVM, which avoids part of the JVM startup cost (set the mapred.job.reuse.jvm.num.tasks property; the default is 1, and -1 means no limit). Another is MultiFileInputSplit, which lets a single map task process more than one split.
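
As a minimal sketch of the JVM-reuse setting, using the old mapred API from the Hadoop versions this article targets (the job class and the rest of the job setup are hypothetical; only the property name comes from the text above):

    import org.apache.hadoop.mapred.JobConf;

    public class JvmReuseExample {
        public static void main(String[] args) {
            // Hypothetical job configuration; only the JVM-reuse property matters here.
            JobConf conf = new JobConf(JvmReuseExample.class);
            // mapred.job.reuse.jvm.num.tasks: 1 = no reuse (the default),
            // N > 1 = up to N tasks per JVM, -1 = no limit.
            conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);
            // Equivalently: conf.setNumTasksToExecutePerJvm(-1);
            // ... then set the input format, mapper, reducer, and submit as usual.
        }
    }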


Why large numbers of small files are produced

There are at least two common situations in which large numbers of small files are produced:

1. The small files are pieces of a larger logical file. Because HDFS has only recently gained support for appending to files, the usual way to save an unbounded file such as a log file was to write it into HDFS in many chunks.
2. The files are inherently small, for example a large collection of small image files. Each image is a separate file, and there is no natural way to combine them into one larger file.

These two cases call for different solutions. In the first case, the file consists of records, so the problem can be avoided by calling HDFS's sync() method every so often while writing (in combination with append). Alternatively, you can write a program that periodically merges the small files (see Nathan Marz's post about a tool called the Consolidator, which does exactly this).
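
A minimal sketch of the sync-while-appending approach, assuming a Hadoop version with append support enabled; the path and the record format here are made up for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendWithSync {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path log = new Path("/logs/events.log");  // hypothetical unbounded log file

            // Keep appending records to one large file instead of writing each
            // batch of records out as a new small file.
            FSDataOutputStream out = fs.exists(log) ? fs.append(log) : fs.create(log);
            for (int i = 0; i < 1000; i++) {
                out.writeBytes("record " + i + "\n");
                if (i % 100 == 0) {
                    out.sync();  // flush what has been written so far to HDFS
                }
            }
            out.close();
        }
    }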

For the second case, some kind of container is needed to group the files in some way. Hadoop offers a few options:

HAR Files

Hadoop Archives (HAR files) were introduced in version 0.18.0 to alleviate the problem of many small files consuming namenode memory. A HAR file works by building a layered filesystem on top of HDFS. It is created with Hadoop's archive command, which actually runs a MapReduce job to pack the small files into the archive. To a client nothing changes: all of the original files remain visible and accessible (through a har:// URL). On the HDFS side, however, the number of files is reduced.
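
For example, assuming the small files live under /user/data in the subdirectories logs and images (these paths are made up), creating an archive and then listing its contents looks roughly like this:

    hadoop archive -archiveName files.har -p /user/data logs images /user/archives
    hadoop fs -ls har:///user/archives/files.har/logs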

Reading a file through a HAR is no more efficient than reading it directly from HDFS, and may in fact be slightly slower, because each access to a file in a HAR requires reading two index files as well as the file's own data. And although HAR files can be used as MapReduce input, there is no special way for maps to treat the files packed inside a HAR as individual HDFS files. One could imagine an input format that exploits the HAR layout to improve MapReduce efficiency, but nobody has written one yet. Note that MultiFileInputSplit, even with the HADOOP-4565 improvement (choosing files for a split that are node-local), still needs one seek per small file.

Sequence Files

Usually the "Small files problem" response would be: use Sequencefile. This approach is to say, use filename as key, and file contents as value. This approach works well in practice. Back to 10,000 100KB files, you can write a program to write these small files to a separate sequencefile, then you can be in a streaming fashion (directly or using MapReduce) To use this sequencefile. Not only that, Sequencefiles is also splittable, so mapreduce can break the them into chunks, and be independently handled separately. Unlike Har, this approach also supports compression. Block compression is the best choice in many cases, as it compresses multiple records together rather than a record one compression.

Converting a large body of existing small files into SequenceFiles can be slow. However, it is entirely possible to create a collection of SequenceFiles in parallel. (Stuart Sierra has written a very useful post about converting a tar file into a SequenceFile; tools like this are very useful.) Better yet, if possible, design your data pipeline to write the data directly into a SequenceFile at the source.


A MapFile is a sorted SequenceFile with an added index that allows lookups by key.
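
As a minimal illustration of such a lookup, assuming a MapFile whose entries were written in sorted key order with small-file names as Text keys and their contents as BytesWritable values (the directory argument and the key name are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class MapFileLookup {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            String dir = args[0];  // MapFile directory (holds the data and index files)

            MapFile.Reader reader = new MapFile.Reader(fs, dir, conf);
            try {
                BytesWritable value = new BytesWritable();
                // Look up one small file's contents by its name.
                if (reader.get(new Text("photo-0001.jpg"), value) != null) {
                    System.out.println("found " + value.getLength() + " bytes");
                }
            } finally {
                reader.close();
            }
        }
    }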


