Small file solutions for Hadoop

Source: Internet
Author: User
Tags: hadoop fs

Small files are files significantly smaller than the HDFS block size (64 MB by default). Every file, directory, and block in HDFS is represented as an object in the NameNode's memory, and each object occupies roughly 150 bytes. So with 10 million files, each occupying one block, the NameNode needs about 3 GB of memory just to hold this metadata; scale up much further and the requirement exceeds what current hardware can provide.
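As a rough check of that figure (assuming the commonly cited 150 bytes per namespace object): 10,000,000 files x (1 file object + 1 block object) x 150 bytes = 3,000,000,000 bytes, or about 3 GB of NameNode heap.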

For the same total volume of data, the more small files there are, the greater the memory pressure on the NameNode, which is why HDFS is poorly suited to storing large numbers of small files.

Solutions:

Let the application handle it itself, for example by merging files as they are written:

    // Merge every file under a local directory into a single file on HDFS.
    final FileSystem fs = FileSystem.get(new Configuration());
    final Path path = new Path("/combinedfile");
    final FSDataOutputStream create = fs.create(path);
    final File dir = new File("C:\\Windows\\System32\\drivers\\etc");
    for (File fileName : dir.listFiles()) {
        System.out.println(fileName.getAbsolutePath());
        final FileInputStream fileInputStream = new FileInputStream(fileName.getAbsolutePath());
        // org.apache.commons.io.IOUtils: read the whole file as a list of lines
        final List<String> readLines = IOUtils.readLines(fileInputStream);
        for (String line : readLines) {
            // re-append the newline that readLines() strips
            create.write((line + "\n").getBytes());
        }
        fileInputStream.close();
    }
    create.close();

Hadoop Archive

Hadoop Archives (HAR files) were introduced in Hadoop 0.18.0 to alleviate the problem of large numbers of small files consuming NameNode memory. A HAR file works by building a layered filesystem on top of HDFS. It is created with Hadoop's archive command, which actually runs a MapReduce job to pack the small files into the archive. To the client, a HAR file changes nothing: all of the original files remain visible and accessible (through a har:// URL), but on the HDFS side the number of files is reduced. Reading a file through a HAR is no more efficient than reading it directly from HDFS, and may in fact be slightly slower, because each access to a file in a HAR requires reading the two-layer index files as well as the file's data itself. And although HAR files can be used as MapReduce input, there is no archive-aware InputFormat that lets maps treat the files packed in a HAR as a single HDFS file, so each small file still becomes its own map input.

Create an archive:
    hadoop archive -archiveName xxx.har -p /src /dest
View the archive's internal structure:
    hadoop fs -lsr /dest/xxx.har
View the archived contents:
    hadoop fs -lsr har:///dest/xxx.har

  

SequenceFile / MapFile
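The original gives no details for this option. The usual technique is to pack the small files into one SequenceFile, storing each file's name as the key and its bytes as the value. A minimal sketch, assuming the Hadoop 2.x API (the local directory smallfiles and the output path /packedfiles.seq are illustrative, not from the original):

    import java.io.File;
    import java.nio.file.Files;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SmallFilesToSequenceFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // One SequenceFile on HDFS holds the contents of many small files.
            Path output = new Path("/packedfiles.seq");
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(output),
                    SequenceFile.Writer.keyClass(Text.class),
                    SequenceFile.Writer.valueClass(BytesWritable.class))) {
                // Illustrative local source directory of small files.
                for (File f : new File("smallfiles").listFiles()) {
                    byte[] data = Files.readAllBytes(f.toPath());
                    // Key = original file name, value = raw file bytes.
                    writer.append(new Text(f.getName()), new BytesWritable(data));
                }
            }
        }
    }

A MapReduce job can then read the single SequenceFile (e.g., with SequenceFileInputFormat) instead of millions of individual files; a MapFile additionally keeps the keys sorted and adds an index, allowing random lookup by file name.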

Merge small files on an ongoing basis, in the same spirit as HBase's compaction, which periodically rewrites many small store files into fewer large ones.

CombineFileInputFormat
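Again the original gives no details. CombineFileInputFormat packs many small files into each input split, so one map task processes several files instead of one map per file. A minimal sketch using its built-in text subclass, assuming the Hadoop 2.x MapReduce API (the paths and the 128 MB split cap are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CombineSmallFilesJob {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "combine-small-files");
            // Pack many small files into each split instead of one split per file.
            job.setInputFormatClass(CombineTextInputFormat.class);
            // Cap each combined split at 128 MB (an illustrative value).
            CombineTextInputFormat.setMaxInputSplitSize(job, 128 * 1024 * 1024);
            FileInputFormat.addInputPath(job, new Path("/input/smallfiles"));
            FileOutputFormat.setOutputPath(job, new Path("/output/combined"));
            // Mapper/reducer classes omitted; the defaults pass records through.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The split size cap controls how many small files end up in one map task; without it, CombineFileInputFormat would try to combine everything on a node into a single split.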
