First, Preface: Loving You Isn't Easy

To upgrade our databases to SQL Server R2, I took an existing PC as a test machine and restored the databases from the production backups (three databases with an exaggerated combined size of 100 GB+). The machine had a meager 4 GB of RAM, yet had to serve as both the DB server and the web server. You can imagine its tragic fate: as soon as MS SQL Server started, memory usage soared to 99%. The only option was to add memory. I swapped in two 8 GB sticks for 16 GB total, with exactly the same result: memory was instantly exhausted (while CPU utilization hovered around 0%). Because it was a PC with limited memory slots, and the largest single stick on the market was 16 GB (costing 1,000+), even buying more memory would not have been enough (PCs really hurt you this way). It seemed there was no other method left: delete data!

"Delete data" is easy to say. Isn't it just a DELETE? If I actually did it that naively, I estimate I would get to know what Shanghai looks like at 4 a.m. (sorry, as a programmer I am absolutely better at that than you), and the operation would probably blow up the database anyway (insufficient disk space, because the generated log file would be far too large).

Second, Marshaling the Troops: Searching for Him a Thousand Times

To better explain the difficulties and problems I encountered, some tests and explanations are necessary; they are also a way of probing toward a solution. After all, the problem is how to operate on the data better and faster, which in the end comes down to combinations of DELETE, UPDATE, INSERT, TRUNCATE, DROP, and similar operations. Our goal is to find the best and fastest approach. For ease of testing, a test table Employee was prepared:

```sql
-- create table Employee
-- note: the nvarchar lengths were lost in the original formatting; 50 is assumed here
CREATE TABLE [dbo].[Employee] (
    [EmployeeNo]     INT PRIMARY KEY,
    [EmployeeName]   [nvarchar](50) NULL,
    [CreateUser]     [nvarchar](50) NULL,
    [CreateDatetime] [datetime] NULL
);
```

1. Insert performance PK

1.1.
Loop insert, execution time: 38,026 ms

```sql
-- loop insert
SET STATISTICS TIME ON;
DECLARE @Index INT = 1;
DECLARE @Timer DATETIME = GETDATE();
WHILE @Index <= 100000
BEGIN
    INSERT [dbo].[Employee] (EmployeeNo, EmployeeName, CreateUser, CreateDatetime)
    VALUES (@Index, 'Employee_' + CAST(@Index AS CHAR(6)), 'system', GETDATE());
    SET @Index = @Index + 1;
END
SELECT DATEDIFF(MS, @Timer, GETDATE()) AS [Execution Time (ms)];
SET STATISTICS TIME OFF;
```

1.2. Transactional loop insert, execution time: 6,640 ms

```sql
-- loop insert inside a single transaction
BEGIN TRAN;
SET STATISTICS TIME ON;
DECLARE @Index INT = 1;
DECLARE @Timer DATETIME = GETDATE();
WHILE @Index <= 100000
BEGIN
    INSERT [dbo].[Employee] (EmployeeNo, EmployeeName, CreateUser, CreateDatetime)
    VALUES (@Index, 'Employee_' + CAST(@Index AS CHAR(6)), 'system', GETDATE());
    SET @Index = @Index + 1;
END
SELECT DATEDIFF(MS, @Timer, GETDATE()) AS [Execution Time (ms)];
SET STATISTICS TIME OFF;
COMMIT;
```

1.3. Bulk insert, execution time: 220 ms

```sql
SET STATISTICS TIME ON;
DECLARE @Timer DATETIME = GETDATE();
INSERT [dbo].[Employee] (EmployeeNo, EmployeeName, CreateUser, CreateDatetime)
SELECT TOP (100000)
       EmployeeNo = ROW_NUMBER() OVER (ORDER BY C1.[object_id]),
       'Employee_',
       'system',
       GETDATE()
FROM sys.columns AS C1
CROSS JOIN sys.columns AS C2
ORDER BY C1.[object_id];
SELECT DATEDIFF(MS, @Timer, GETDATE()) AS [Execution Time (ms)];
SET STATISTICS TIME OFF;
```

1.4. CTE insert, execution time: 220 ms

```sql
SET STATISTICS TIME ON;
DECLARE @Timer DATETIME = GETDATE();
WITH CTE (EmployeeNo, EmployeeName, CreateUser, CreateDatetime) AS (
    SELECT TOP (100000)
           EmployeeNo = ROW_NUMBER() OVER (ORDER BY C1.[object_id]),
           'Employee_',
           'system',
           GETDATE()
    FROM sys.columns AS C1
    CROSS JOIN sys.columns AS C2
    ORDER BY C1.[object_id]
)
INSERT [dbo].[Employee]
SELECT EmployeeNo, EmployeeName, CreateUser, CreateDatetime FROM CTE;
SELECT DATEDIFF(MS, @Timer, GETDATE()) AS [Execution Time (ms)];
SET STATISTICS TIME OFF;
```

Summary: the CTE insert is as efficient as the bulk insert, and the two are the fastest; the transactional insert comes next, and the single-row loop insert is the slowest. The loop is slowest because every single INSERT writes its own log record; wrapping the loop in a transaction greatly reduces the number of log writes, and the bulk insert logs only once. The CTE, being based on the CLR, is the fastest in use.

2. Delete performance PK

2.1. Plain delete, execution time: 1,240 ms

```sql
SET STATISTICS TIME ON;
DECLARE @Timer DATETIME = GETDATE();
DELETE FROM [dbo].[Employee];
SELECT DATEDIFF(MS, @Timer, GETDATE()) AS [Execution Time (ms)];
SET STATISTICS TIME OFF;
```

2.2. Batch delete, execution time: 106 ms

```sql
SET STATISTICS TIME ON;
DECLARE @Timer DATETIME = GETDATE();
SET ROWCOUNT 100000;
WHILE 1 = 1
BEGIN
    BEGIN TRAN
    DELETE FROM [dbo].[Employee];
    COMMIT
    IF @@ROWCOUNT = 0
        BREAK;
END
SET ROWCOUNT 0;
SELECT DATEDIFF(MS, @Timer, GETDATE()) AS [Execution Time (ms)];
SET STATISTICS TIME OFF;
```

2.3. TRUNCATE, execution time: 0 ms

```sql
SET STATISTICS TIME ON;
DECLARE @Timer DATETIME = GETDATE();
TRUNCATE TABLE [dbo].[Employee];
SELECT DATEDIFF(MS, @Timer, GETDATE()) AS [Execution Time (ms)];
SET STATISTICS TIME OFF;
```

Summary: TRUNCATE is astonishingly fast, clearing 100,000 rows with no pressure at all; the batch delete comes second, and the plain DELETE is far too slow. TRUNCATE is fast because it is a DDL statement and produces very little log, whereas an ordinary DELETE not only generates log records but also locks the rows.
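One hedged side note on the batch-delete test: Microsoft has deprecated relying on SET ROWCOUNT to limit DELETE, INSERT, and UPDATE statements and recommends the TOP clause instead, so on SQL Server 2005 and later the same batching idea can be sketched like this (batch size and table name follow the tests above):

```sql
-- Batched delete using DELETE TOP (n) instead of SET ROWCOUNT (sketch only).
WHILE 1 = 1
BEGIN
    DELETE TOP (100000) FROM [dbo].[Employee];
    IF @@ROWCOUNT = 0
        BREAK;
END
```

Each pass here is its own implicit transaction, so the transactions stay small and, under the SIMPLE recovery model, the log can be reused between batches instead of growing without bound.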
Third, Sharpening the Knife: Still Half-Hidden Behind the Pipa

From section two we know that the fastest way to insert is the bulk insert and the fastest way to delete is TRUNCATE, so to delete big data we combine the two. The main idea: copy the data that must be kept into a new table, TRUNCATE the data in the original table, and finally bulk insert the kept data back. Of course, the exact implementation can be adapted flexibly.

1. Save the data to keep into a new table -> TRUNCATE the original table -> restore the kept data. The script resembles the following:

```sql
SELECT * INTO #keep FROM Original WHERE CreateDate > '2011-12-'  -- the cutoff date is truncated in the original post
TRUNCATE TABLE Original
INSERT Original SELECT * FROM #keep
```

The first statement stores all the data to be kept in the table #keep (#keep does not need to be created manually; it is created by the SELECT INTO, which copies the column structure of the original table Original).

PS: if you only want to create the table structure without copying any data, the corresponding script is:

```sql
SELECT * INTO #keep FROM Original WHERE 1 = 2
```

The second statement clears the entire table, and the log it produces is negligible; the third statement restores the kept data.

A few notes: you can create #keep by writing a script yourself (or by copying an existing table) instead of using SELECT INTO, but the latter approach has a drawback: no SQL script will give you a table-generation script exactly identical to the original table (that is, the basic columns, properties, indexes, constraints, and so on), and when the table you need to operate on is a wide one, you will probably freak out. Given that first defect, should you consider creating a new database instead?
You can use the existing scripts, and the resulting database will be basically identical, but I advise you not to do this: first, it means working across databases, and second, you have to prepare enough disk space.

2. Create the new table structure -> bulk insert the data to keep -> DROP the original table -> rename the new table to the original name:

```sql
CREATE TABLE #keep (...)  -- generate this from the original table's creation script, as discussed above; full consistency is not guaranteed
INSERT #keep SELECT * FROM Original WHERE <condition>
DROP TABLE Original
EXEC sp_rename '#keep', 'Original'
```

This way is slightly faster than the first method, because it saves the final data-restore step, but it is trickier: you need to create a table structure exactly the same as before, including the basic columns, attributes, constraints, indexes, and so on.

Fourth, Data Shrinking: Sweeping the Autumn Leaves

After the data is deleted, you will find that the size of the database on disk has not changed. At this point we bring out the powerful data-shrinking feature. The script is below; the running time is uncertain and depends on the size of your database: it may take more than ten minutes, or finish in an instant.

```sql
DBCC SHRINKDATABASE (db_name)
```
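To check how much space the shrink actually reclaimed, a before-and-after comparison can be sketched with the standard sp_spaceused procedure; DBCC SHRINKFILE is the finer-grained variant that targets a single data or log file. The names YourDatabase and YourDatabase_log below are placeholders, not names from this post:

```sql
-- Sketch: measuring the effect of a shrink. "YourDatabase" is a placeholder.
USE YourDatabase;
EXEC sp_spaceused;                    -- database_size and unallocated space, before

DBCC SHRINKDATABASE (YourDatabase);

EXEC sp_spaceused;                    -- compare the figures after the shrink

-- To shrink only one file (for example the log), target it directly:
-- DBCC SHRINKFILE (YourDatabase_log, 100);  -- 100 = target size in MB
```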