Investigation report: MSSQL data read and write performance


MSSQL 2008

CPU: Xeon E5 2 GHz (12 cores) × 2

RAM: 128 GB

case 1:

declare @i int = 0
while @i < 1000000
begin
    insert into T1 (column1, column2, column3, ... column34)
    values (@i, getdate(), rand() * cast(getdate() as float), ...)
    set @i = @i + 1
end

Test result: 1,000,000 rows, time: 00:04:28, size: 503 MB
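
Much of the cost in case 1 is likely the per-row commit: with no explicit transaction, every INSERT commits on its own (compare case 8, where a single large transaction is fastest). Below is a minimal sketch of the same loop wrapped in one transaction; T1_demo is a hypothetical two-column stand-in for the 34-column T1, and the timing above does not apply to it.

-- Sketch only: T1_demo stands in for the report's 34-column T1.
create table T1_demo (id int, created datetime);

declare @i int = 0;
begin transaction;  -- one commit for the whole loop instead of one per row
while @i < 1000000
begin
    insert into T1_demo (id, created) values (@i, getdate());
    set @i = @i + 1;
end;
commit transaction;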


case 2:
insert into T1 select * from T1

Test result: 1,000,000 rows inserted into T1 (1,000,000 rows already present), time: 00:00:51


case 3:
select * into NewTableName from T1

Test result: 2,000,000 rows inserted into NewTableName, time: 00:00:22


insert into TableName select * from T1

Test result: 2,000,000 rows inserted into TableName (2,000,000 rows already present), time: 00:01:00. SELECT ... INTO a new table can be minimally logged, while inserting into an existing table is fully logged, which likely explains the gap.


case 4:
Tmp (20,000,000 rows), Target (20,000,000 rows)
Open 10 threads and run this SQL at the same time (one way to launch them is sketched after the result):
insert into Target select * from Tmp

Test result: 2,000,000 × 10 rows inserted into Target (20,000,000 rows already present), time: 04:29:00
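
The report does not say how the 10 threads were opened. One plausible way from a command prompt, assuming Windows authentication and a local server (the server name "." and the database name are placeholders):

cmd>for /L %i in (1,1,10) do start "" sqlcmd -S . -d database -E -Q "insert into Target select * from Tmp"

Each start spawns an independent sqlcmd session, so the ten INSERTs run concurrently and contend for locks and log throughput on Target.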


case 5:
cmd>bcp T1 out t1.dat -f bcp.Xml -T -d database

Test result: 20,000,000 rows exported to t1.dat, time: 00:00:20
cmd>bcp Target in t1.dat -f bcp.Xml -T -d database

Test result: 20,000,000 rows imported from t1.dat into Target (500,000,000 rows already present), time: 00:50:01
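
The bcp.Xml format file passed via -f is used throughout but never shown. For reference, a minimal XML format file looks roughly like this; the terminators, lengths, and column names below are placeholders, not the report's actual 34-column layout:

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <!-- how each field is laid out in the .dat file -->
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR="\t" MAX_LENGTH="12"/>
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="24"/>
  </RECORD>
  <ROW>
    <!-- how each field maps onto a table column -->
    <COLUMN SOURCE="1" NAME="column1" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="column2" xsi:type="SQLDATETIME"/>
  </ROW>
</BCPFORMAT>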

case 6:

bcp import 20,000,000 × 2 rows using 2 threads into the same blank table (one way to start the two imports is sketched after the result)

cmd>bcp Target in t1.dat -f bcp.Xml -T -d database

Test result: thread 1 time: 00:08:35, speed: 38,761 rows/sec; thread 2 time: 00:07:37, speed: 87,359 rows/sec
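
As in case 4, the report does not show how the two threads were started. A plausible way, with each start spawning an independent bcp process (here both load the same 20,000,000-row file, giving the 20,000,000 × 2 total):

cmd>start "" bcp Target in t1.dat -f bcp.Xml -T -d database
cmd>start "" bcp Target in t1.dat -f bcp.Xml -T -d database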

case 7:

bcp export 639,160,000 rows to a file 

cmd>bcp Target out target.dat -f bcp.Xml -T -d database

Test result: time: 01:24:17, speed: 126,380 rows/sec

case 8:

bcp import 639,160,000 rows into the target table

cmd>bcp Target in target.dat -f bcp.Xml -T -d database -b 5000

Test result: very slow; canceled. So it is better not to use the -b (batch size) option: it splits the job into many separate transactions, whereas without -b the whole import runs as a single transaction.
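
To make the difference concrete, the two invocations below differ only in -b; the batch arithmetic assumes the 639,160,000-row file from case 7:

cmd>rem -b 5000 commits every 5,000 rows: 639,160,000 / 5,000 = about 127,832 separate transactions
cmd>bcp Target in target.dat -f bcp.Xml -T -d database -b 5000

cmd>rem no -b: the entire file loads as one transaction (though any failure rolls back the whole load)
cmd>bcp Target in target.dat -f bcp.Xml -T -d database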


use bcp to import 639,160,000 rows into the target table (over 700,000,000 rows already present)

cmd>bcp Target in target.dat -f bcp.Xml -T -d database -h "TABLOCK"

Test result: time: 12:50:35, speed: 13,824 rows/sec
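
One factor the report does not mention: whether a TABLOCK bulk load is minimally or fully logged depends on the database recovery model, which is worth checking before a 639,160,000-row import. A sketch, keeping the report's placeholder database name:

-- BULK_LOGGED or SIMPLE recovery allows minimally logged bulk loads
-- when combined with the TABLOCK hint.
select name, recovery_model_desc from sys.databases where name = 'database';

-- Assumption: switching the recovery model is acceptable in this environment.
alter database [database] set recovery bulk_logged;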


case 9:

use the BULK INSERT statement to import 20,000,000 rows into the target table (640,000,000 rows already present)

bulk insert Target from 'tmp.dat' with (formatfile = 'bcp.Xml', tablock)

Test result: time: 01:33:06 (compare with case 5)



Does anyone have a faster approach to share?

