First of all, special thanks to Changchong for spending his spare time running the stress tests and providing professional test data.
First, test environment preparation
CPU | Memory | HDD
Intel(R) Xeon(R) CPU E5520 @ 2.27GHz | 32 GB | 6 TB
Kafka cluster size: 3 servers
Garbage collector: CMS
JVM runtime parameters:
-Xmx1G -Xms1G -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/usr/local/kafka_2.10-0.8.2.2/bin/../logs/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/usr/local/kafka_2.10-0.8.2.2/bin/../logs -Dlog4j.configuration=file:/usr/local/kafka_2.10-0.8.2.2/bin/../config/log4j.properties
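For anyone trying to reproduce this environment, here is a minimal sketch (my assumption, not a step shown in the original test) of applying the same heap and GC settings through the environment variables that the stock Kafka 0.8.2.2 start scripts read when set:

# Assumed reproduction step: override heap and GC options before starting the broker
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
cd /usr/local/kafka_2.10-0.8.2.2
bin/kafka-server-start.sh config/server.properties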
Kafka server-side configuration (server.properties):
broker.id=165
port=9092
host.name=hadoop165.kuaiyong.in
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/download/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=hadoop165.XXX.in:2181,hadoop166.XXX.in:2181,hadoop167.XXX.in:2181
zookeeper.connection.timeout.ms=6000
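The commands below write to and read from topic s1. A minimal sketch of creating that topic beforehand (the partition count mirrors num.partitions=1 above; the replication factor of 1 is my assumption, since the post does not state it):

# Assumed setup step: create the test topic before running the perf scripts
bin/kafka-topics.sh --create --zookeeper hadoop165.XXX.in:2181 --topic s1 --partitions 1 --replication-factor 1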
The test command lines are as follows.
Producer:
bin/kafka-producer-perf-test.sh --broker-list=hadoop02:9092 --messages 100000 --topic s1 --threads 10 --message-size <message-size> --batch-size <batch-size> --compression-codec 1
Consumer:
bin/kafka-consumer-perf-test.sh --zookeeper hadoop03:2181 --messages 500000 --topic s1 --threads 1
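If you want to see how batch size affects producer throughput, one option (not part of the original test) is to wrap the producer script in a small loop; the flags are the same as above, while the batch sizes and the 1000-byte message size here are assumptions for illustration:

# Hypothetical sweep over batch sizes; the message size is assumed, not taken from the post
for batch in 100 500 1000; do
  bin/kafka-producer-perf-test.sh --broker-list=hadoop02:9092 --messages 100000 --topic s1 --threads 10 --message-size 1000 --batch-size $batch --compression-codec 1
done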
Second, normal request test
1. Producer
Data volume: 2.3 million records
Batch size: 1,000 records per batch
Data format: compressed
Test results:
Max throughput: 39.2501 MB/s
TPS: 41156.6817 records/s
2. Consumer
Time: 18 seconds
Total data size: 2193.45 MB
Max throughput: 163.6659 MB/s
TPS: 171616.1767 records/s
Third, stress test
1. Producer
Data volume: 10 million records
Batch size: 1,000 records per batch
Data format: compressed
Test results:
Time: 242 seconds
Total data size: 9536.74 MB
Max throughput: 39.2531 MB/s
TPS: 41159.8856 records/s
2. Consumer
Time: 70 seconds
Total data size: 9536.74 MB
Max throughput: 145.4193 MB/s
TPS: 152483.1887 records/s
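As a rough sanity check on the producer stress-test figures (treating the 242 seconds as a rounded elapsed time), dividing the total size and record count by the time lands close to the reported numbers:

# Approximate cross-check of the producer stress-test results
echo "scale=2; 9536.74 / 242" | bc   # about 39.4 MB/s vs. the reported 39.2531 MB/s
echo "10000000 / 242" | bc           # about 41322 records/s vs. the reported 41159.8856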
Conclusion: under the 10-million-record stress test, consumer throughput drops noticeably (145.42 MB/s versus 163.67 MB/s in the normal test), while producer throughput holds steady; the bottleneck is estimated to set in at around 5 million records.
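To narrow down where the drop begins, a possible follow-up (not run in the original test) would be to repeat the consumer test at the estimated 5-million-record mark, reusing the command from above:

# Hypothetical follow-up run at the estimated bottleneck point
bin/kafka-consumer-perf-test.sh --zookeeper hadoop03:2181 --messages 5000000 --topic s1 --threads 1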