Forwarded from http://xulingbo.net/?p=434; copyright belongs to Xu Lingbo. This is just a reprint.
The following sections describe several different approaches, each with test data; all results were collected in the same test environment:
The test machine is configured as follows:
64-bit, 5-core CPU (E5620 @ 2.40GHz), 8 GB memory
CDN Side Cache
Because the exact value of a counter rarely matters — for heavily accessed items, the ones or tens digit of the count is meaningless — the counters of these hot items do not need to be updated in real time. Their values can be cached directly on the CDN or in a back-end Nginx cache, and refreshed from the database server only when the cache expires. This greatly reduces the requests reaching the back-end server, and since counter values are tiny, the cache servers need very little space for them.
The improved structure diagram is as follows:
The values of popular counters are cached directly in Nginx using its cache policy, and the cached value is expired and refreshed using the HTTP Cache-Control: max-age mechanism.
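The max-age expiry logic can be sketched in plain Java (the real setup does this inside the CDN/Nginx, not in application code; the class and key names here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Minimal sketch of max-age style caching for counter values: a cached value
// is served until it is older than maxAgeMillis, then refreshed from the
// backing store (the database in the article's setup). All names are illustrative.
public class CounterCache {
    private static final class Entry {
        final long value;
        final long loadedAt;
        Entry(long value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long maxAgeMillis;

    public CounterCache(long maxAgeMillis) { this.maxAgeMillis = maxAgeMillis; }

    // loader stands in for the query against the real counter store.
    public long get(String key, LongSupplier loader) {
        long now = System.currentTimeMillis();
        Entry e = cache.get(key);
        if (e == null || now - e.loadedAt > maxAgeMillis) {
            e = new Entry(loader.getAsLong(), now);   // cache miss or stale: refresh
            cache.put(key, e);
        }
        return e.value;
    }

    public static void main(String[] args) {
        long[] dbValue = {41};
        CounterCache cache = new CounterCache(60_000); // 60s max-age
        long first = cache.get("counter:item1", () -> ++dbValue[0]);  // miss: loads 42
        long second = cache.get("counter:item1", () -> ++dbValue[0]); // fresh: served from cache
        System.out.println(first + " " + second);
    }
}
```

The second read never touches the loader, which is exactly why stale-but-cheap counter values relieve the back end.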
Advantages:
Simple to implement with small changes, and it can absorb read requests for hot-item counters. Query requests handled this way approach the performance of a static server — Nginx can reach about 20k QPS.
Disadvantage: it does not solve the problem of merging requests for the same item's counter, so the data volume doubles; update requests cannot be cached at all, so only the pressure from query requests is reduced.
Java-based storage approach
Because the current approach relies on Nginx module development, every change requires recompiling the Nginx server, so a Java-based approach was considered to make maintenance easier.
Ehcache is used as the data storage server; it is memory-based, supports scheduled persistence, and is well suited to storing small data such as counters. Tomcat is used as the container to handle HTTP requests; the structure diagram is as follows:
The processing logic is implemented as a servlet, which uses consistent hashing to locate the counter value in Ehcache.
In the actual deployment structure, Tomcat and Ehcache can be deployed on the same machine.
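The consistent hashing used to pick an Ehcache node could look like the following sketch (node names, replica count, and the MD5-based hash are illustrative choices, not the article's actual code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of a consistent-hash ring: each physical node gets several virtual
// points on the ring, and a key is served by the first node clockwise from
// its hash. Node names are illustrative.
public class ConsistentHash {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public ConsistentHash(List<String> nodes, int virtualNodes) throws Exception {
        for (String node : nodes)
            for (int i = 0; i < virtualNodes; i++)
                ring.put(hash(node + "#" + i), node);
    }

    public String nodeFor(String key) throws Exception {
        // First ring position at or after the key's hash; wrap around if none.
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // First 8 bytes of MD5 packed into a long, used as the ring position.
    private static long hash(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
        long h = 0;
        for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xff);
        return h;
    }

    public static void main(String[] args) throws Exception {
        ConsistentHash ch = new ConsistentHash(List.of("ehcache-1", "ehcache-2", "ehcache-3"), 100);
        // The same item ID always maps to the same node, so each counter
        // lives on exactly one Ehcache server.
        System.out.println(ch.nodeFor("item:1001").equals(ch.nodeFor("item:1001")));
    }
}
```

Virtual nodes keep keys evenly spread, and adding or removing an Ehcache server only remaps the keys adjacent to it on the ring.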
The test results based on this pattern are as follows:
QPS can reach around 13k; the performance bottleneck is Tomcat's handling of HTTP connections, which is weaker than Apache's and Nginx's.
Advantages: both the business logic and the storage server are developed in Java, which eases maintenance, and the approach can solve the data-merging problem.
Disadvantages: a Java server handles short HTTP connections somewhat worse than Nginx and similar servers, leaving a gap to the current 20k QPS target.
Based on Nginx + Redis
At the last Velocity conference I learned that companies such as Sina and Baidu use Redis as the storage server for their internal counters, with very good performance. My own tests confirm the official benchmark results: Redis performs well, and a single Redis machine can support nearly 60k QPS.
The test results are as follows:
Write:
Read:
Read and write performance both reach around 60k QPS (on the 8 GB, 5-core virtual machine); of the 5 CPU cores, only one is fully used. As shown below:
This fully in-memory operation beats Tair's 50k QPS (on a 24 GB, 8-core physical machine).
Redis currently supports a variety of clients; I tested an Apache-based PHP client and a C client embedded directly in Nginx.
Nginx-based C client test
A preliminary check showed that a single ab instance cannot push Nginx to its performance bottleneck, so two ab instances were used to load Nginx; the results are as follows:
The two together reach about 17k QPS; with an 8-core CPU in production, exceeding 20k QPS should be no problem.
Apache-based PHP client test results are as follows:
Two ab instances testing Apache reach about 11k QPS, worse than the Nginx-based C client.
In both cases, the structure is as shown below:
Nginx can use consistent hashing to map the same product ID or user ID to the same Redis server, and a C module developed in Nginx performs the counter operations directly.
Advantages: performance is very good. The front end handles requests with Nginx, which handles short connections very well; the back end uses Redis for storage, which outperforms Tair and supports data grouping, allowing merging by product ID and user ID.
Disadvantage: it still requires developing a C-based Nginx module, which makes later maintenance somewhat troublesome.
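Whatever the client, what goes over the wire to Redis is the Redis wire protocol (RESP). Sketched here in Java for illustration (the Nginx module itself would be C; the key name is hypothetical), an INCR command encodes as:

```java
// Sketch of encoding a Redis INCR command in the RESP wire protocol, the
// format an Nginx C module would write to the Redis socket. The key name
// is illustrative.
public class RespEncode {
    // RESP arrays: "*<n>\r\n" then each argument as "$<len>\r\n<bytes>\r\n".
    // <len> is the byte length; the arguments here are ASCII, so
    // String.length() suffices.
    static String encode(String... args) {
        StringBuilder sb = new StringBuilder("*").append(args.length).append("\r\n");
        for (String a : args)
            sb.append('$').append(a.length()).append("\r\n").append(a).append("\r\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        // INCR atomically increments the counter and returns the new value,
        // so no read-modify-write round trip is needed.
        String cmd = encode("INCR", "counter:item:1001");
        System.out.print(cmd.replace("\r\n", "\\r\\n"));
    }
}
```

The atomicity of INCR is what lets many front-end Nginx workers update the same counter without locking.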
Comparison test of Tair LDB and Redis
- Ran a comparison test of the LDB version of Tair against Redis with VM-based persistence; thanks to Zong Dai for the configuration parameters for the LDB version of Tair
- The test machine is the same 64-bit, 5-core CPU (E5620 @ 2.40GHz), 8 GB memory virtual machine
- Test data: created 100 million records with distinct keys (integers 0~100,000,000)
- Test method: Tair and Redis were each tested in two ways, via the incr and get read/write interfaces
- Read/write range: random reads and writes across the 100 million keys
- The test results are as follows:
- CPU Resource Comparison
Tair:
Redis:
- Tair uses basically all 5 cores, with load around 4
- Redis uses only one core; the other 4 are idle and the load stays below 1, so in multi-core scenarios multiple Redis instances can be deployed for better performance
- TPS comparison for writes:
- Tair's average write TPS is around 12k
- Redis's average write TPS is around 54k
- TPS comparison for reads:
- Tair's average read TPS is around 25k
- Redis's average read TPS is around 55k
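Since Redis is single-threaded, the multi-instance deployment suggested above usually means one instance per core, with keys sharded across them so each counter always lands on the same instance. A sketch (ports and instance count are illustrative, matching the 4 idle cores observed in the test):

```java
import java.util.List;

// Sketch: with single-threaded Redis, one instance per core is typical.
// Keys are sharded across instances by hash, so each counter always lands
// on the same instance. Ports and instance count are illustrative.
public class RedisSharding {
    private final List<Integer> ports;

    public RedisSharding(List<Integer> ports) { this.ports = ports; }

    // Stable, non-negative hash of the key selects an instance port.
    public int portFor(String key) {
        return ports.get(Math.floorMod(key.hashCode(), ports.size()));
    }

    public static void main(String[] args) {
        RedisSharding shard = new RedisSharding(List.of(6379, 6380, 6381, 6382));
        // The same counter key always resolves to the same instance.
        boolean stable = shard.portFor("counter:item:1001") == shard.portFor("counter:item:1001");
        System.out.println(stable);
    }
}
```

With 4 instances each near the single-instance throughput measured above, aggregate TPS scales close to linearly until the CPU or network saturates.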
Ehcache/Redis/Tair cache performance comparison [reprint]