In Python, the zlib module is used for data compression.



Python's standard library provides several modules for data compression and decompression, such as zipfile, gzip, and bz2. I introduced the zipfile module last time; today I will talk about the zlib module. Its two core functions are:
zlib.compress(data[, level])
zlib.decompress(data[, wbits[, bufsize]])

zlib.compress compresses a block of data. The data parameter is the byte stream to compress, and the optional level parameter sets the compression level, from 1 to 9. Compression speed is inversely related to compression ratio: 1 gives the fastest compression but the lowest ratio, while 9 gives the slowest compression but the highest ratio. zlib.decompress decompresses data: the data parameter is the compressed byte stream, and the optional wbits and bufsize parameters set the size of the history buffer (the window) and the initial size of the output buffer, respectively. The following example shows how to use these two functions:
 

    # coding=gbk
    import zlib
    import urllib.request

    fp = urllib.request.urlopen('http://localhost/default.html')
    data = fp.read()
    fp.close()

    # ---- compress the data stream
    str1 = zlib.compress(data, zlib.Z_BEST_COMPRESSION)
    str2 = zlib.decompress(str1)

    print(len(data))
    print(len(str1))
    print(len(str2))

    # ---- result
    # 5783
    # 1531
    # 5783
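The level tradeoff is easy to observe directly. The sketch below (a minimal illustration, not from the original example; the payload is made up) compresses the same highly repetitive buffer at level 1 and at level 9 and compares the resulting sizes:

```python
import zlib

# A repetitive, highly compressible payload (hypothetical test data).
data = b'hello zlib ' * 1000

fast = zlib.compress(data, 1)                        # fastest, lowest ratio
best = zlib.compress(data, zlib.Z_BEST_COMPRESSION)  # slowest, highest ratio (level 9)

# Both compressed forms round-trip back to the original bytes.
print(len(data), len(fast), len(best))
```

Level 9 should never produce a larger result than level 1 on data like this, and both are far smaller than the input.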

We can also compress and decompress data with Compress/Decompress objects, created by zlib.compressobj([level]) and zlib.decompressobj([wbits]) respectively. Used this way, compression and decompression work much like zlib.compress and zlib.decompress described above; the difference shows up when operating on large amounts of data. To compress a very large file (hundreds of MB) with zlib.compress, you would first have to read the entire file into memory and then compress it, which uses far too much memory. With a Compress object there is no need to read all of the file's data at once: read one part, compress it, write the compressed result out, then read and compress the next part, and repeat this loop until the entire file has been compressed. The following example demonstrates the difference:
 

    # coding=gbk
    import zlib
    import urllib.request

    fp = urllib.request.urlopen('http://localhost/default.html')  # the URL to fetch
    data = fp.read()
    fp.close()

    # ---- compress the data stream with the module-level functions
    str1 = zlib.compress(data, zlib.Z_BEST_COMPRESSION)
    str2 = zlib.decompress(str1)
    print('raw data length:', len(data))
    print('-' * 30)
    print('zlib.compress:', len(str1))
    print('zlib.decompress:', len(str2))
    print('-' * 30)

    # ---- compress/decompress the stream with Compress/Decompress objects
    com_obj = zlib.compressobj(zlib.Z_BEST_COMPRESSION)
    decom_obj = zlib.decompressobj()
    str_obj = com_obj.compress(data)
    str_obj += com_obj.flush()
    print('Compress.compress:', len(str_obj))
    str_obj1 = decom_obj.decompress(str_obj)
    str_obj1 += decom_obj.flush()
    print('Decompress.decompress:', len(str_obj1))
    print('-' * 30)

    # ---- compress/decompress in chunks with Compress/Decompress objects
    com_obj1 = zlib.compressobj(zlib.Z_BEST_COMPRESSION)
    decom_obj1 = zlib.decompressobj()
    chunk_size = 30
    # split the raw data into chunks
    str_chunks = [data[i * chunk_size:(i + 1) * chunk_size]
                  for i in range((len(data) + chunk_size) // chunk_size)]
    str_obj2 = b''
    for chunk in str_chunks:
        str_obj2 += com_obj1.compress(chunk)
    str_obj2 += com_obj1.flush()
    print('after chunked compression:', len(str_obj2))
    # split the compressed data into chunks and decompress
    str_chunks = [str_obj2[i * chunk_size:(i + 1) * chunk_size]
                  for i in range((len(str_obj2) + chunk_size) // chunk_size)]
    str_obj2 = b''
    for chunk in str_chunks:
        str_obj2 += decom_obj1.decompress(chunk)
    str_obj2 += decom_obj1.flush()
    print('after chunked decompression:', len(str_obj2))

    # ---- result
    # raw data length: 5783
    # ------------------------------
    # zlib.compress: 1531
    # zlib.decompress: 5783
    # ------------------------------
    # Compress.compress: 1531
    # Decompress.decompress: 5783
    # ------------------------------
    # after chunked compression: 1531
    # after chunked decompression: 5783
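To actually compress a file that is too large for memory, the loop described above can be written against ordinary file objects. The following is an illustrative sketch (the helper functions and file names are my own, not part of the original article), feeding fixed-size blocks through a compressobj/decompressobj pair and writing each piece out as soon as it is produced, so memory use stays bounded by chunk_size:

```python
import zlib

def compress_file(src_path, dst_path, chunk_size=64 * 1024):
    """Compress src_path into dst_path one chunk at a time."""
    com = zlib.compressobj(zlib.Z_BEST_COMPRESSION)
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(com.compress(chunk))
        dst.write(com.flush())  # emit any data still buffered in the object

def decompress_file(src_path, dst_path, chunk_size=64 * 1024):
    """Decompress a zlib stream from src_path into dst_path chunk by chunk."""
    decom = zlib.decompressobj()
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(decom.decompress(chunk))
        dst.write(decom.flush())
```

Note that com.flush() must be called exactly once, after the last chunk; forgetting it truncates the stream.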

The Python manual describes the zlib module in detail; refer to it for more advanced applications.
