A small file is one significantly smaller than the HDFS block size (64 MB by default). Every file, directory, and block in HDFS is represented as an object in the NameNode's memory, and each object occupies roughly 150 bytes. So with 10 million files, each occupying one block, the NameNode needs about 3 GB of memory just to hold the metadata (10 million file objects plus 10 million block objects, at ~150 bytes each). Scale up much further and the requirement exceeds what current hardware can provide.
For the same total data size, the more small files there are, the greater the memory pressure on the NameNode, so HDFS is not well suited to storing large numbers of small files.
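The estimate above can be checked with simple arithmetic (assuming ~150 bytes per NameNode object and one block per file, as stated):

```java
public class NameNodeMemoryEstimate {
    public static void main(String[] args) {
        long files = 10_000_000L;      // 10 million small files
        long blocksPerFile = 1;        // each file fits in a single block
        long bytesPerObject = 150;     // approximate NameNode heap cost per object
        // Each file contributes one file object and one block object.
        long objects = files + files * blocksPerFile;
        long totalBytes = objects * bytesPerObject;
        System.out.println(totalBytes / 1_000_000_000.0 + " GB"); // prints "3.0 GB"
    }
}
```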
Solutions:
Have the application merge small files itself before writing them to HDFS. For example:
// Requires hadoop-common and commons-io on the classpath.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
final Path path = new Path("/combinedfile");
final FSDataOutputStream out = fs.create(path);
final File dir = new File("C:\\windows\\system32\\drivers\\etc");
for (File file : dir.listFiles()) {
    System.out.println(file.getAbsolutePath());
    final FileInputStream in = new FileInputStream(file.getAbsolutePath());
    final List<String> lines = IOUtils.readLines(in); // org.apache.commons.io.IOUtils
    for (String line : lines) {
        out.write(line.getBytes());
    }
    in.close();
}
out.close();
Hadoop Archive
Hadoop Archives (HAR files) were introduced in version 0.18.0 to alleviate the problem of large numbers of small files consuming NameNode memory. A HAR file works by building a layered file system on top of HDFS. It is created with Hadoop's archive command, which actually runs a MapReduce job to pack the small files into the archive. To the client, a HAR file is transparent: all the original files remain visible and accessible (via har:// URLs). On the HDFS side, however, the number of file objects is reduced.
Reading a file through a HAR is no more efficient than reading it directly from HDFS, and may in fact be slightly slower, because each access must read the two layers of index files as well as the file data itself. And although HAR files can be used as input to MapReduce jobs, there is no special InputFormat that lets maps treat the files packed in a HAR as a single HDFS file.
Create an archive:
hadoop archive -archiveName xxx.har -p /src /dest
View its internal structure:
hadoop fs -lsr /dest/xxx.har
View its contents:
hadoop fs -lsr har:///dest/xxx.har
SequenceFile / MapFile
Pack the small files into a SequenceFile (or a sorted MapFile), using the file name as the key and the file contents as the value.
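A minimal stand-in for the SequenceFile approach, using only java.io to illustrate the packing idea (filename as key, file bytes as value). This is a sketch of the record layout only; on a real cluster you would use org.apache.hadoop.io.SequenceFile.Writer with Text keys and BytesWritable values instead:

```java
import java.io.*;
import java.util.*;

// Illustrative only: packs many small "files" into one container file,
// keyed by name, the way a Hadoop SequenceFile packs key/value records.
public class PackSmallFiles {
    public static void main(String[] args) throws IOException {
        Map<String, byte[]> smallFiles = new LinkedHashMap<>();
        smallFiles.put("a.txt", "alpha".getBytes());
        smallFiles.put("b.txt", "bravo".getBytes());

        File container = File.createTempFile("packed", ".seq");
        // Write: one (name, contents) record per small file.
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(container))) {
            for (Map.Entry<String, byte[]> e : smallFiles.entrySet()) {
                out.writeUTF(e.getKey());          // key: original file name
                out.writeInt(e.getValue().length); // value length
                out.write(e.getValue());           // value: file contents
            }
        }
        // Read the records back to verify round-tripping.
        try (DataInputStream in = new DataInputStream(new FileInputStream(container))) {
            while (in.available() > 0) {
                String name = in.readUTF();
                byte[] data = new byte[in.readInt()];
                in.readFully(data);
                System.out.println(name + " -> " + new String(data));
            }
        }
        container.delete();
    }
}
```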
Merge small files periodically, as HBase does in its compaction step (many small store files are rewritten into fewer large ones).
CombineFileInputFormat: an InputFormat that packs multiple files into each input split, so that each map task processes more data and far fewer map tasks are launched.