In this business scenario, data is written back in batches (backflow), and queries usually key on user_id. The data set is about 500 million rows, so I ran the following tests:
Performance test machine environment:

Uptime       | 364 days, 4:02, 2 users, load average: 0.43, 0.19, 0.06
Platform     | Linux
Release      | Red Hat Enterprise Linux Server release 5.4 (Tikanga)
Kernel       | 2.6.18-164.el5
Architecture | CPU = 64-bit, OS = 64-bit
Threading    | NPTL 2.5
Compiler     | GNU CC version 4.1.2 20080704
Based on a number of tests, I organized the following plan:

1. Redis Performance
Some simple tests on Redis, for reference only:
Test environment: RedHat 6.2, Xeon E5520 (4-core) x 2, 8 GB RAM, 1000M NIC
Redis version: 2.6.9
The client machine uses redis-benchmark for simple GET and SET operations:

1.1 Single-instance test
1. Value size: 10 bytes ~ 1390 bytes
Processing speed: 7.5 w/s (about 75,000 ops/s); the speed is limited by Redis's single-threaded processing capability.
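As a sketch of how such a single-instance test is run (the host, port, request count, concurrency, and payload size below are illustrative, not the original test's parameters), the redis-benchmark invocation looks like:

```shell
# Benchmark SET and GET against a single Redis instance.
# -n: total requests, -c: parallel clients, -d: value size in bytes.
redis-benchmark -h 127.0.0.1 -p 6379 -t set,get -n 1000000 -c 50 -d 100
```

redis-benchmark prints the requests-per-second achieved for each operation, which is where figures like "7.5 w/s" come from.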
CPU and related configuration information can be obtained from the /proc/cpuinfo file; this section gives a brief summary of that file. The fields in /proc/cpuinfo differ depending on the CPU's instruction set architecture (ISA). For a CPU based on the x86 instruction set, the file contains entries such as:

processor  : 0
vendor_id  : GenuineIntel
cpu family : 6
model      : 26
model name : Intel(R) Xeon(R) CPU E5520
There are several aspects to the specific benchmark performance:

1. Bare PHP performance. Completes only the most basic functions.
2. Bare framework performance. Only the simplest route dispatch, passing through the core functions only.
3. Benchmark performance of the standard module. This refers to benchmark performance with the full service module functions in place.

3.1 Environment Description

Test environment:

uname -a
Linux db-forum-test17.db01.baidu.com 2.6.9_5-7-0-0 #1 SMP Wed Aug 12 17:35:51 CST 2009 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux AS release 4 (Nahant Update 3)
with the proxy server; (3) speed up website access and relieve the load on the web server. (a) Scheduling algorithms. The nginx upstream directive specifies the back-end servers used by proxy_pass and fastcgi_pass, so it can be combined with nginx's reverse proxy function to achieve load balancing. Nginx supports several scheduling algorithms:

1. Round-robin (the default): each request is assigned to a different back-end server in chronological order; if a back-end server goes down, it is automatically skipped.
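As a minimal sketch of the upstream mechanism described above (the pool name and server addresses are illustrative, not from the original article), a default round-robin configuration looks like:

```nginx
# Illustrative upstream pool; nginx distributes requests round-robin by default.
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        # Forward requests to the pool defined above.
        proxy_pass http://backend;
    }
}
```

With this configuration, successive requests alternate between the two servers, and a failed server is skipped until it recovers.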
processors lack VT-d instructions, the cache is small, and both have few CPU cores with no hyper-threading support. At this point, an upgrade sounds like a good idea, but what do you choose? The answer depends entirely on how many VMs you want to run simultaneously on your hypervisor and what their purposes will be. Tasks like SQL Server, Microsoft Exchange (at least above a certain number of users), or an application server require more horsepower than, say, a basic domain controller.
and PAE (Physical Address Extension).
More importantly, an additional decoding stage was added to the Pentium Pro pipeline to dynamically decode x86 instructions into a series of micro-operations (micro-ops). These micro-ops are lower-level micro-instructions (or microcode) that execute, in sequence, the more complex native x86 instructions. The micro-op sequences can be analyzed, reordered, and dispatched to the various execution units.
Http://apps.hi.baidu.com/share/detail/20307453
WebLogic Server summary. A username and password must be provided during download; you can register one or use a Metalink account.

WebLogic Server 10.x
WebLogic Server 10 for Microsoft Windows (2000, 2003, x86) - Download
WebLogic Server 10 for Microsoft Windows (2003, Itanium) - Download
WebLogic Server 10 for Microsoft Windows (2003, 64-bit Xeon/AMD64) - Download
WebLogic Server 10 for HP-UX (11i, 11iv2, 11iv3, PA-
How to identify the number of physical CPUs and cores, and whether hyper-threading is in use (Jun. 04, 2009, in Servers)

Judgment basis:
1. CPUs with the same core id are hyper-threads of the same core.
2. CPUs with the same physical id are threads or cores packaged in the same physical CPU.

English version:
1. Physical id and core id are not necessarily consecutive, but they are unique. Any CPUs with the same core id are hyperthreads in the same core.
2. Any CPUs with the same physical id are threads or cores in the same physical package.
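The rules above can be sketched as a small parser over /proc/cpuinfo-style text. The sample data below is illustrative (one package, two cores, four logical CPUs), not taken from the original article:

```python
# Count physical CPUs, cores, and logical CPUs from /proc/cpuinfo-style text.
# Per the rules above: packages are distinct "physical id" values, and a core
# is identified by the (physical id, core id) pair.

SAMPLE = """\
processor : 0
physical id : 0
core id : 0

processor : 1
physical id : 0
core id : 0

processor : 2
physical id : 0
core id : 1

processor : 3
physical id : 0
core id : 1
"""

def count_topology(cpuinfo_text):
    logical = 0
    packages = set()
    cores = set()
    phys = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":
            logical += 1          # one entry per logical CPU
        elif key == "physical id":
            phys = value
            packages.add(value)   # one entry per physical package
        elif key == "core id":
            cores.add((phys, value))  # core ids are unique per package
    return len(packages), len(cores), logical

physical_cpus, core_count, logical_cpus = count_topology(SAMPLE)
print(physical_cpus, core_count, logical_cpus)  # 1 2 4
# More logical CPUs than cores means hyper-threading is enabled.
print("hyper-threading:", logical_cpus > core_count)
```

On a real Linux machine, the same function can be fed `open("/proc/cpuinfo").read()`.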
I believe everyone has heard of Tianhe-2, which ranked first in the 2013 and 2014 TOP500 lists and is roughly twice as fast as the second-place Titan. What kind of architecture gives Tianhe-2 this capability? Let's take a look.

Tianhe-2's model number is TH-IVB-FEP, and it uses a computing architecture that combines central processors with co-processors:

Tianhe-2 has a total of 16,000 compute nodes, each equipped with two 12-core Xeon E5 central processors
Heterogeneous computing: heterogeneous computing uses different types of processors to handle different types of computing tasks. Common computing units include CPUs, GPGPUs, GPDSPs, ASICs, FPGAs, and other kinds of processors. Many accelerator cards or co-processors are used to increase system performance; the most common is the GPGPU, connected via PCI-E. The GPU was first used in graphics-processing cards (graphics cards) and then gradually developed