By default, Elasticsearch keeps the JSON body of each document in the _source stored field. Like other stored fields, _source is compressed before it is written to disk: it is stored as a binary blob that Lucene compresses with LZ4 or deflate, and multiple _source values are actually merged into one chunk for LZ4 compression.
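As a rough illustration of what stored-field compression buys, here is a minimal Python sketch using zlib's deflate (one of the two codecs mentioned above; LZ4 would require a third-party package). The document and field names are invented for the example:

```python
import json
import zlib

# An invented document body, as it would be handed to the _source field.
doc = {"user": "kimchy",
       "message": "trying out compressed stored fields",
       "tags": ["search", "lucene"] * 20}
raw = json.dumps(doc).encode("utf-8")

# Stored fields are compressed before being written to disk; deflate is
# one of the codecs mentioned above (LZ4 needs a third-party package).
compressed = zlib.compress(raw, 6)
restored = json.loads(zlib.decompress(compressed))

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
```

The repetitive JSON shrinks considerably, and the roundtrip restores the original document exactly.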
Using tar + lz4/pigz for faster data transfer over SSH
Category: Linux, MySQL
The previous article described how to maximize scp transfer speed. Building on that foundation, we can use compression to speed up the transfer further. With scp alone, the fastest transfer rate is about 90 MB/s; this article raises it to about 250 MB/s, including the decompression step.
1. Conclusion
Maximum transfer performance is achieved with the tar + lz4 + ssh method:
time tar -c sendlog/ | pv | lz4 -B4 | ssh -c arcfour128 -o "MACs umac-64@openssh.com" 10.xxx.xxx.36 "lz4 -d | tar -x -C /u01/backup_supu"
3.91GiB 0:00:16 [249MiB/s]
real 0m16.067s
user 0m15.553s
sys  0m16.821s
That is, about 249 MB/s.
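The pipeline never materializes a compressed file: tar produces a stream, lz4 compresses it on the fly, and the remote side decompresses as bytes arrive. The same streaming pattern can be sketched in Python with zlib (standing in for lz4, whose Python bindings are third-party):

```python
import zlib

def stream_compress(chunks, level=1):
    """Compress byte chunks on the fly, as `tar -c | lz4` does with a pipe."""
    co = zlib.compressobj(level)  # low level trades ratio for speed, like lz4
    for chunk in chunks:
        out = co.compress(chunk)
        if out:
            yield out
    yield co.flush()

def stream_decompress(chunks):
    """Inverse of stream_compress; mirrors the remote `lz4 -d | tar -x` side."""
    do = zlib.decompressobj()
    for chunk in chunks:
        out = do.decompress(chunk)
        if out:
            yield out

data = [b"log record %06d\n" % i for i in range(10_000)]
roundtrip = b"".join(stream_decompress(stream_compress(data)))
```

Because both sides work incrementally, memory stays flat no matter how large the transfer is.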
Directory
1. Conclusion
2. About lz4
3. Performance environment
lz4, pigz, and gzip
1. Compression
(1.1) Package with gzip:
# time tar -zcf tar1.tar binlog*
real 0m48.497s
user 0m38.371s
sys  0m2.571s
(1.2) Compress with pigz at the fastest setting (-1):
# time tar -cv binlog* | pigz -1 -p 24 -k > pigz1.tar.gz
real 0m10.715s
user 0m17.674s
sys  0m1.699s
(1.3) Compress with pigz at the default level:
# time tar -cv binlog* | pigz -p 24 -k > pigz2.tar.gz
real 0m22.351s
user 0m39.743s
sys  0m1.341s
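The speed/ratio tradeoff that the pigz flags expose (-1 versus the default level) exists in any deflate implementation. A small Python sketch makes it measurable; the sample data is invented to resemble repetitive binlog content:

```python
import time
import zlib

# Repetitive data in the spirit of MySQL binlogs; it compresses well.
data = b"UPDATE t SET c = c + 1 WHERE id = 12345;\n" * 50_000

results = {}
for level in (1, 6, 9):                     # fastest / default / best, like pigz
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    results[level] = (len(out), time.perf_counter() - t0)
    print(f"level {level}: {len(out)} bytes, {results[level][1]:.3f}s")
```

Level 1 finishes fastest while higher levels squeeze out a somewhat smaller file, mirroring the tar timings above.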
Backing up MySQL with xtrabackup, xbstream, and lz4
If you need to stand up a temporary MySQL instance, using xtrabackup is the easiest and quickest way.
On an existing data node:
/home/work/app/xtrabackup-2.2.3/innobackupex --ibbackup=/home/work/app/xtrabackup-2.2.3/xtrabackup --parallel=8 --defaults-file=${BACKUP_CNF} --socket=${BACKUP_SOCK} --user=${BACKUP_USER} --password=${BACKUP_PWD} ${BAK} --no-timestamp --stream=xbstream | lz4 -B4 | nc HOST PORT
Fast transfer of big data (tar + lz4 + pv)
Remote copying with traditional scp is relatively slow, so lz4 compression is used for the transfer instead. LZ4 is a very fast lossless compression algorithm, with a compression speed of about 300 MB/s per core.
(1.4) Compress with pigz at the maximum compression ratio (…)
On the temporarily requisitioned data node, receive the stream with:
nc -l PORT | …
The compression ratio and efficiency are very high; how to use it will be covered in tomorrow's article.
1. Download
Search for "lz4" at http://rpmfind.net/ and download, from Extras Testing Packages for Enterprise Linux 6 for x86_64: lz4-1.7.5-1.el6.x86_64.rpm
2. Installation
# rpm -ivh lz4-1.7.5-1.el6.x86_64.rpm
# rpm -qa | grep lz4
lz4-1.7.5-1.el6.x86_64
# lz4 --help
LZ4 command line interface 64-bits v1.7.5, by Yann Collet
Performance Comparison of Java Compression Algorithms
This article compares the performance of several common compression algorithms. The results show that some of them still work well under extremely tight CPU constraints.
The comparison is as follows:
JDK GZIP -- a slow algorithm with a high compression ratio; the compressed data is suitable for long-term storage. java.util.zip.GZIPInputStream/GZIPOutputStream in the JDK is the implementation of this algorithm.
JDK Deflate -- this is another algorithm in the JDK, implemented by java.util.zip.DeflaterOutputStream
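In Java as elsewhere, GZIP is simply a deflate stream wrapped in a fixed header and CRC trailer, which is why the two sit side by side in java.util.zip. A quick Python illustration of that framing overhead (a negative wbits value in zlib yields a raw deflate stream; the payload is invented):

```python
import gzip
import zlib

payload = b"The quick brown fox jumps over the lazy dog. " * 200

# Raw deflate stream: a negative wbits value drops the zlib header/checksum.
co = zlib.compressobj(6, zlib.DEFLATED, -15)
raw_deflate = co.compress(payload) + co.flush()

# gzip is the same deflate data plus a 10-byte header and 8-byte trailer.
gz = gzip.compress(payload, compresslevel=6)

print(f"raw deflate: {len(raw_deflate)} bytes, gzip: {len(gz)} bytes")
```

The gzip output is always slightly larger than the bare deflate stream; the extra bytes buy integrity checking and a self-describing file format.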
An AssetBundle in LZMA format has the smallest package size (highest compression ratio), but decompression time increases correspondingly.
2. LZ4 format: Unity 5.3 and later adds LZ4 compression. Because LZ4's compression ratio is moderate, the compressed AssetBundle is larger (the algorithm is chunk-based).
3. Do not …
I ran a test on two virtual machines. Remote copying with traditional scp is relatively slow, so lz4 compression is now used for the transfer. LZ4 is a very fast lossless compression algorithm, with a compression speed of about 300 MB/s per core, scalable across multiple cores.
…which corresponds to bundles one by one and is used only for incremental builds; it is not otherwise required.
2. Compression formats
(1) LZMA: the default compression format. The compression ratio is high and saves space, but the entire package must be decompressed before use.
(2) LZ4: added in Unity 5.3, with a 40%-60% compression ratio; enable it with the BuildAssetBundleOptions.ChunkBasedCompression build option.
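The practical payoff of chunk-based compression is random access: an LZMA bundle must be decompressed as a whole, while a chunk-based format can decompress only the chunk that holds the requested asset. A toy sketch of the idea in Python, with zlib standing in for LZ4 and an invented chunk size and index layout:

```python
import zlib

CHUNK = 4096  # invented chunk size; Unity's real layout differs

def compress_chunked(data, chunk_size=CHUNK):
    """Compress each chunk independently and keep an index of block sizes."""
    blocks, sizes = [], []
    for i in range(0, len(data), chunk_size):
        blocks.append(zlib.compress(data[i:i + chunk_size]))
        sizes.append(len(blocks[-1]))
    return b"".join(blocks), sizes

def read_chunk(blob, sizes, n):
    """Decompress only the n-th chunk -- the rest of the blob stays packed."""
    offset = sum(sizes[:n])
    return zlib.decompress(blob[offset:offset + sizes[n]])

data = bytes(range(256)) * 200           # ~50 KB of sample asset data
blob, sizes = compress_chunked(data)
chunk3 = read_chunk(blob, sizes, 3)
```

Independent chunks cost some compression ratio (each chunk restarts its dictionary), which is exactly the size penalty described above.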
/InflaterInputStream, LZ4, Snappy: checking the performance of various general-purpose Java compressors.
1. If you feel that compressing data is particularly slow, try the LZ4 (fast) implementation: it compresses a text file at around 300 MB/s, a speed most applications will not notice. If possible, set the compression buffer size of the …