Hadoop Reading Notes 2

Chapter 4 Hadoop I/O
1) Integrity
HDFS transparently checksums all data written to it and by default verifies checksums when reading data. A separate checksum is created for each chunk of data; the default chunk size is 512 bytes, and because a CRC-32 checksum is 4 bytes long, the storage overhead is less than 1%. Datanodes are responsible for verifying the data they receive before storing the data and its checksum, and each datanode keeps a persistent log of checksum verifications. In addition, each datanode runs a DataBlockScanner in a background thread that periodically verifies all the blocks stored on that datanode.

FileSystem rawFs = new RawLocalFileSystem();               // no checksumming
FileSystem checksummedFs = new ChecksumFileSystem(rawFs);  // checksumming layered on top of the raw filesystem
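
On the client side it is also possible to switch off checksum verification for a read, for example to salvage what is left of a corrupt file. A minimal sketch, assuming a hypothetical HDFS URI and file path:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadWithoutChecksum {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost/"), conf); // hypothetical URI
        fs.setVerifyChecksum(false); // skip client-side checksum verification for this FileSystem instance
        IOUtils.copyBytes(fs.open(new Path("/user/tom/part-00000")), // hypothetical path
                          System.out, 4096, false);
    }
}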

2) Compression
File compression brings two major benefits: it reduces the space needed to store files, and it speeds up data transfer across the network or to or from disk. A codec is the implementation of a compression-decompression algorithm.
To compress data being written to an output stream, use the createOutputStream(OutputStream out) method to create a CompressionOutputStream, to which you write your uncompressed data to have it written in compressed form to the underlying stream. Conversely, to decompress data being read from an input stream, call createInputStream(InputStream in) to obtain a CompressionInputStream, which allows you to read uncompressed data from the underlying stream.
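
As an illustration, the following sketch compresses standard input to standard output using a codec class named on the command line; the class name and argument handling are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.util.ReflectionUtils;

public class StreamCompressor {
    public static void main(String[] args) throws Exception {
        Class<?> codecClass = Class.forName(args[0]); // e.g. org.apache.hadoop.io.compress.GzipCodec
        Configuration conf = new Configuration();
        CompressionCodec codec =
            (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);

        // Wrap stdout in a CompressionOutputStream and copy stdin through it.
        CompressionOutputStream out = codec.createOutputStream(System.out);
        IOUtils.copyBytes(System.in, out, 4096, false);
        out.finish(); // flush compressed data without closing the underlying stream
    }
}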
For performance, it is preferable to use a native library for compression and decompression.

If you are using a native library and you are doing a lot of compression or decompression in your application, consider using CodecPool, which allows you to reuse compressors and decompressors, thereby amortizing the cost of creating these objects.
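
A sketch of the pooled variant of the same compressor program; again, the class name is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.util.ReflectionUtils;

public class PooledStreamCompressor {
    public static void main(String[] args) throws Exception {
        Class<?> codecClass = Class.forName(args[0]);
        Configuration conf = new Configuration();
        CompressionCodec codec =
            (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);

        Compressor compressor = null;
        try {
            compressor = CodecPool.getCompressor(codec); // borrow a compressor from the pool
            CompressionOutputStream out =
                codec.createOutputStream(System.out, compressor);
            IOUtils.copyBytes(System.in, out, 4096, false);
            out.finish();
        } finally {
            CodecPool.returnCompressor(compressor); // return it so it can be reused
        }
    }
}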

When considering how to compress data that will be processed by MapReduce, it is important to understand whether the compression format supports splitting.
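
For example, a single gzipped file cannot be split, so one common way to keep compressed MapReduce output splittable is to write it as a block-compressed SequenceFile. A sketch of the relevant job configuration, where the setup around the Job object is illustrative:

import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CompressedOutputConfig {
    public static void configure(Job job) {
        // Write block-compressed SequenceFiles: compressed, yet still splittable.
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK);
    }
}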

3) Serialization
Serialization is the process of turning structured objects into a byte stream for transmission over a network or for writing to persistent storage. Deserialization is the reverse process of turning a byte stream back into a series of structured objects.
Serialization appears in two quite distinct areas of distributed data processing: for interprocess communication and for persistent storage.

Hadoop uses its own serialization format, Writables, which is certainly compact and fast, but not so easy to extend or use from languages other than Java.
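
To make the Writable mechanics concrete, here is a small sketch that serializes an IntWritable into a byte array via its write() method; the helper method is only for illustration.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

public class WritableDemo {
    // Serialize any Writable into a byte array using its write(DataOutput) method.
    public static byte[] serialize(Writable writable) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream dataOut = new DataOutputStream(out);
        writable.write(dataOut);
        dataOut.close();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        IntWritable writable = new IntWritable(163);
        byte[] bytes = serialize(writable);
        System.out.println(bytes.length); // an IntWritable serializes to 4 bytes
    }
}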

4) Serialization Frameworks
Although most MapReduce programs use Writable key and value types, this isn't mandated by the MapReduce API.

Apache Avro is a language-neutral data serialization system. The project was created to address the major downside of Hadoop Writables: lack of language portability. Having a data format that can be processed by many languages (currently C, C++, C#, Java, PHP, Python, and Ruby) makes it easier to share datasets with a wider audience than one tied to a single language.
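
As a small taste of Avro from Java, the following sketch parses a record schema defined inline as JSON, builds a generic record, and serializes it to Avro's binary encoding; the schema and field names are made up for illustration.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;

public class AvroDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical record schema with two string fields.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"StringPair\",\"fields\":["
            + "{\"name\":\"left\",\"type\":\"string\"},"
            + "{\"name\":\"right\",\"type\":\"string\"}]}");

        GenericRecord datum = new GenericData.Record(schema);
        datum.put("left", "L");
        datum.put("right", "R");

        // Serialize the record to a language-neutral binary form.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(schema);
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(datum, encoder);
        encoder.flush();
        System.out.println(out.size() + " bytes");
    }
}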
