Server hardware performance testing under FreeBSD

FreeBSD impresses us with its coherent overall design and its lean kernel. An even bigger reason to use it is its outstanding package management: installing free software on UNIX/Linux, whether through the ports tree or pkg_add, is far more convenient than on Windows.

Back to the topic: you can only judge performance by measuring it. Server hardware performance, software resource usage, and overall throughput are all part of the complete picture.

UnixBench, together with FreeBSD's built-in diskinfo, makes a good toolkit for performance testing. The former can be installed from the ports tree (/usr/ports/benchmarks/unixbench).
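A typical install-and-run session might look like the following sketch; the install prefix under /usr/local is an assumption, since the port's layout has varied between FreeBSD releases:

```shell
# Build and install UnixBench from the FreeBSD ports collection
cd /usr/ports/benchmarks/unixbench
make install clean

# Run the full benchmark suite; the path below is an assumption --
# check where the port actually placed the Run script on your system
cd /usr/local/lib/unixbench
./Run
```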

======================================

Hardware:

Xeon E5320, 4 cores @ 1.86 GHz
4-6 GB RAM, SAS 15,000 RPM 73 GB disks, RAID 0

====================================

UnixBench 4.1

Single CPU, 4 GB memory, no RAID, single hard disk

TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 10871107.7 931.5
Double-Precision Whetstone 55.0 2334.0 424.4
Execl Throughput 43.0 1656.7 385.3
File Copy 1024 bufsize 2000 maxblocks 3960.0 53064.0 134.0
File Copy 256 bufsize 500 maxblocks 1655.0 52611.0 317.9
File Copy 4096 bufsize 8000 maxblocks 5800.0 75403.0 130.0
Pipe Throughput 12440.0 696700.6 560.0
Pipe-based Context Switching 4000.0 97562.5 243.9
Process Creation 126.0 5239.9 415.9
Shell Scripts (8 concurrent) 6.0 1074.4 1790.7
System Call Overhead 15000.0 418358.8 278.9
==========
Final score 399.1

(For comparison: FreeBSD in VirtualBox on an iMac scores 286.7.)

==================================

Dual CPU, no RAID, single hard disk

TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 10705009.7 917.3
Double-Precision Whetstone 55.0 2299.4 418.1
Execl Throughput 43.0 1461.9 340.0
File Copy 1024 bufsize 2000 maxblocks 3960.0 74216.0 187.4
File Copy 256 bufsize 500 maxblocks 1655.0 47267.0 285.6
File Copy 4096 bufsize 8000 maxblocks 5800.0 82344.0 142.0
Pipe Throughput 12440.0 694449.9 558.2
Pipe-based Context Switching 4000.0 49442.0 123.6
Process Creation 126.0 2502.7 198.6
Shell Scripts (8 concurrent) 6.0 639.5 1065.8
System Call Overhead 15000.0 417766.4 278.5
==========
Final score 323.3

==================

Dual CPU, RAID 0
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 10827829.1 927.8
Double-Precision Whetstone 55.0 2299.1 418.0
Execl Throughput 43.0 1396.8 324.8
File Copy 1024 bufsize 2000 maxblocks 3960.0 123192.0 311.1
File Copy 256 bufsize 500 maxblocks 1655.0 57031.0 344.6
File Copy 4096 bufsize 8000 maxblocks 5800.0 125188.0 215.8
Pipe Throughput 12440.0 696110.8 559.6
Pipe-based Context Switching 4000.0 49654.9 124.1
Process Creation 126.0 2446.5 194.2
Shell Scripts (8 concurrent) 6.0 663.2 1105.3
System Call Overhead 15000.0 418897.5 279.3
==========
Final score 357.4

======================

Single CPU, RAID 0

INDEX VALUES
TEST BASELINE RESULT INDEX

Dhrystone 2 using register variables 116700.0 10768717.2 922.8
Double-Precision Whetstone 55.0 2304.3 419.0
Execl Throughput 43.0 1659.2 385.9
File Copy 1024 bufsize 2000 maxblocks 3960.0 125472.0 316.8
File Copy 256 bufsize 500 maxblocks 1655.0 64471.0 389.6
File Copy 4096 bufsize 8000 maxblocks 5800.0 127750.0 220.3
Pipe Throughput 12440.0 695856.9 559.4
Pipe-based Context Switching 4000.0 89720.7 224.3
Process Creation 126.0 4944.5 392.4
Shell Scripts (8 concurrent) 6.0 1069.5 1782.5
System Call Overhead 15000.0 419510.4 279.7
==========
Final score 432.7
==================

Summary:

In general, the hard disk I/O scores are low. With RAID 0, the three file-copy indices improved by roughly 23%-136%, lifting the final score by about 8%.

The dual-CPU configuration brought a lower score, not a higher one. Apparently this setup does not bring out SMP performance; most UnixBench tests run a single process, so the extra CPU mainly adds scheduling overhead (note how the pipe-based context switching index drops by half).
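These observations can be cross-checked with a little arithmetic on the index values above (run 1 = single CPU without RAID, run 4 = single CPU with RAID 0; the baseline final score is 399.1):

```shell
# Recompute the summary claims from the UnixBench indices above.
awk 'BEGIN {
  # File-copy index improvements with RAID 0 (run 1 vs run 4)
  printf "File Copy 1024: %+.1f%%\n", (316.8 / 134.0 - 1) * 100
  printf "File Copy 256:  %+.1f%%\n", (389.6 / 317.9 - 1) * 100
  printf "File Copy 4096: %+.1f%%\n", (220.3 / 130.0 - 1) * 100
  # Final-score changes relative to run 1 (399.1)
  printf "RAID 0, single CPU: %+.1f%%\n", (432.7 / 399.1 - 1) * 100
  printf "Dual CPU, no RAID:  %+.1f%%\n", (323.3 / 399.1 - 1) * 100
}'
```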

====================

diskinfo -vt test
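The seek and transfer figures below were produced with FreeBSD's diskinfo(8). A typical invocation looks like this; the device name /dev/da0 is an assumption, so substitute your actual disk:

```shell
# -v: print detailed device information
# -t: run the seek and transfer benchmark that produced the numbers below
diskinfo -vt /dev/da0
```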

Single Disk
Seek times:
Full stroke: 250 iter in 2.284611 sec = 9.138 msec
Half stroke: 250 iter in 1.708564 sec = 6.834 msec
Quarter stroke: 500 iter in 2.904974 sec = 5.810 msec
Short forward: 400 iter in 0.999326 sec = 2.498 msec
Short backward: 400 iter in 1.422588 sec = 3.556 msec
Seq outer: 2048 iter in 0.644016 sec = 0.314 msec
Seq inner: 2048 iter in 0.646552 sec = 0.316 msec
Transfer rates:
Outside: 102400 kbytes in 1.121078 sec = 91341 kbytes/sec
Middle: 102400 kbytes in 1.256561 sec = 81492 kbytes/sec
Inside: 102400 kbytes in 1.718713 sec = 59579 kbytes/sec
RAID0
Seek times:
Full stroke: 250 iter in 0.757134 sec = 3.029 msec
Half stroke: 250 iter in 1.734370 sec = 6.937 msec
Quarter stroke: 500 iter in 2.851250 sec = 5.702 msec
Short forward: 400 iter in 1.181895 sec = 2.955 msec
Short backward: 400 iter in 1.533171 sec = 3.833 msec
Seq outer: 2048 iter in 0.637557 sec = 0.311 msec
Seq inner: 2048 iter in 0.646973 sec = 0.316 msec
Transfer rates:
Outside: 102400 kbytes in 0.928369 sec = 110301 kbytes/sec
Middle: 102400 kbytes in 0.914266 sec = 112002 kbytes/sec
Inside: 102400 kbytes in 0.914666 sec = 111953 kbytes/sec

Conclusion: RAID 0 improves transfer rates by roughly 20%-87% depending on the disk zone, with the largest gain on the inner tracks.
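The 20%-87% range can be reproduced from the single-disk and RAID 0 transfer rates above:

```shell
# RAID0 speedup over the single disk, per zone, from the
# diskinfo transfer rates above (kbytes/sec)
awk 'BEGIN {
  printf "Outside: %+.1f%%\n", (110301 / 91341 - 1) * 100
  printf "Middle:  %+.1f%%\n", (112002 / 81492 - 1) * 100
  printf "Inside:  %+.1f%%\n", (111953 / 59579 - 1) * 100
}'
```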

======================================

Summary:

You can only improve what you measure. Hard disk performance is the foundation, and you can use FreeBSD's built-in and open-source tools to establish baselines.

A server's RAID configuration can improve I/O performance and, with it, overall performance.
