Hadoop Reading Notes 2

Chapter 4 Hadoop I/O
1) Integrity
HDFS transparently checksums all data written to it and by default verifies checksums when reading data. A separate checksum is created for every io.bytes.per.checksum bytes of data; the default is 512 bytes, and because a CRC-32 checksum is 4 bytes long, the storage overhead is less than 1%. Datanodes are responsible for verifying the data they receive before storing the data and its checksum, and each datanode keeps a persistent log of checksum verifications. In addition, each datanode runs a DataBlockScanner in a background thread that periodically verifies all the blocks stored on the datanode.

FileSystem rawFs = new RawLocalFileSystem();           // use the raw file system: no checksums
FileSystem checksummedFs = new LocalFileSystem(rawFs); // LocalFileSystem is a ChecksumFileSystem: do checksum
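
Checksum verification can also be disabled on the client by calling setVerifyChecksum(false) on the FileSystem before opening a file. A minimal sketch (the URI and path are placeholders):

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create("hdfs://namenode/"), conf);
fs.setVerifyChecksum(false);                                // checksums are not verified when reading
FSDataInputStream in = fs.open(new Path("/user/example/part-00000")); // placeholder path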

2) Compression
File compression brings two major benefits: it reduces the space needed to store files, and it speeds up data transfer across the network, or to or from disk. A codec is the implementation of a compression-decompression algorithm.
To compress data being written to an output stream, use a CompressionCodec's createOutputStream(OutputStream out) method to create a CompressionOutputStream, to which you write your uncompressed data to have it written in compressed form to the underlying stream. Conversely, to decompress data being read from an input stream, call createInputStream(InputStream in) to obtain a CompressionInputStream, which allows you to read uncompressed data from the underlying stream.
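
As a sketch of this pattern, the following small program compresses standard input with gzip and writes the result to standard output (the class name is just for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class StreamCompressor {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Codecs are normally instantiated via ReflectionUtils so they pick up the configuration
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
    // Everything written to this stream is compressed before reaching System.out
    CompressionOutputStream out = codec.createOutputStream(System.out);
    IOUtils.copyBytes(System.in, out, 4096, false);
    out.finish(); // flush compressed data without closing the underlying stream
  }
}
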
For performance, it is preferable to use a native library for compression and decompression.

If you are using a native library and you are doing a lot of compression or decompression in your application, consider using CodecPool, which allows you to reuse compressors and decompressors, thereby amortizing the cost of creating these objects.
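
A minimal sketch of the pooling pattern, assuming codec was obtained as in the previous example (Compressor and CodecPool are in org.apache.hadoop.io.compress): borrow a Compressor, use it to create the output stream, and return it in a finally block.

Compressor compressor = null;
try {
  compressor = CodecPool.getCompressor(codec);        // borrow a compressor from the pool
  CompressionOutputStream out = codec.createOutputStream(System.out, compressor);
  IOUtils.copyBytes(System.in, out, 4096, false);
  out.finish();
} finally {
  CodecPool.returnCompressor(compressor);             // always return it, even on failure
}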

When considering how to compress data that will be processed by MapReduce, it is important to understand whether the compression format supports splitting.
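
One way to check splittability at runtime is to see whether the codec implements SplittableCompressionCodec (bzip2 does, gzip does not). A small sketch, assuming a Configuration named conf and a placeholder input path:

CompressionCodecFactory factory = new CompressionCodecFactory(conf);
CompressionCodec codec = factory.getCodec(new Path("/data/input.bz2")); // codec inferred from the file extension
boolean splittable = codec instanceof SplittableCompressionCodec;       // true for bzip2, false for gzip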

3) Serialization
Serialization is the process of turning structured objects into a byte stream for transmission over a network or for writing to persistent storage. Deserialization is the reverse process of turning a byte stream back into a series of structured objects.
Serialization appears in two quite distinct areas of distributed data processing: for interprocess communication and for persistent storage.

Hadoop uses its own serialization format, Writables, which is certainly compact and fast, but not so easy to extend or use from languages other than Java.
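
A minimal sketch of the Writable mechanics, serializing an IntWritable to a byte array and reading it back (the helper names serialize/deserialize are illustrative, not part of the Hadoop API):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

public class WritableRoundTrip {
  public static byte[] serialize(Writable writable) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    DataOutputStream dataOut = new DataOutputStream(out);
    writable.write(dataOut);       // a Writable writes its fields to a DataOutput
    dataOut.close();
    return out.toByteArray();
  }

  public static void deserialize(Writable writable, byte[] bytes) throws IOException {
    ByteArrayInputStream in = new ByteArrayInputStream(bytes);
    DataInputStream dataIn = new DataInputStream(in);
    writable.readFields(dataIn);   // and reads them back from a DataInput
    dataIn.close();
  }

  public static void main(String[] args) throws IOException {
    IntWritable value = new IntWritable(163);
    byte[] bytes = serialize(value);   // an IntWritable serializes to exactly 4 bytes
    IntWritable copy = new IntWritable();
    deserialize(copy, bytes);
    System.out.println(copy.get());    // prints 163
  }
}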

4) Serialization Frameworks
Although most MapReduce programs use Writable key and value types, this isn’t mandated by the MapReduce API.

Apache Avro is a language-neutral data serialization system. The project was created to address the major downside of Hadoop Writables: lack of language portability. Having a data format that can be processed by many languages (currently C, C++, C#, Java, PHP, Python, and Ruby) makes it easier to share datasets with a wider audience than one tied to a single language.

