Why You Should Not Shrink Your Data Files (translation of a foreign article)
Preface: Over the past few days I have read many articles about shrinking SQL Server data files. I was planning to write a post about shrinking logs, but I suddenly felt the urge to translate this classic article instead. What follows is my translation of Paul Randal's "Why You Should Not Shrink Your Data Files". In places that are hard to translate clearly, I will paste in the original text as well. Now, on to the translation.
One of my biggest hot buttons is shrinking data files. Although I owned the shrink code while I was at Microsoft, I never had a chance to rewrite it to make it less painful to operate. I really do not like shrink.
Now, do not confuse shrinking transaction log files with shrinking data files. Shrinking the log may be necessary when the transaction log has grown out of control, or to remove excessive VLF fragmentation (see Kimberly's excellent articles here and here). However, shrinking the transaction log file should be a rare operation and should not be part of your regular maintenance plan.
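[Translator's note: for readers who do need that rare one-off log shrink, here is a minimal sketch. The database and logical log file names below are placeholders you must replace with your own.]
USE [MyDatabase];
GO
-- Find the logical file names first
SELECT name, type_desc FROM sys.database_files;
GO
-- One-off shrink of the log file down to 1024 MB (the size argument is in MB)
DBCC SHRINKFILE (N'MyDatabase_log', 1024);
GO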
Shrinking data files should be performed even more rarely, if at all. Here is why: a data file shrink causes massive index fragmentation. Let me demonstrate with a simple script you can run yourself. The script below creates a database, creates a 10MB "filler" table and a 10MB "production" table with a clustered index, and then analyzes the fragmentation of the new clustered index.
USE [master];
GO
IF DATABASEPROPERTYEX(N'DBMaint2008', N'Version') IS NOT NULL
DROP DATABASE [DBMaint2008];
GO
CREATE DATABASE DBMaint2008;
GO
USE [DBMaint2008];
GO
SET NOCOUNT ON;
GO
-- Create the 10MB filler table at the 'front' of the data file
CREATE TABLE [FillerTable] (
    [c1] INT IDENTITY,
    [c2] CHAR (8000) DEFAULT 'filler');
GO
-- Fill up the filler table
INSERT INTO [FillerTable] DEFAULT VALUES;
GO 1280
-- Create the production table, which will be 'after' the filler table in the data file
CREATE TABLE [ProdTable] (
    [c1] INT IDENTITY,
    [c2] CHAR (8000) DEFAULT 'production');
CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable]([c1]);
GO
INSERT INTO [ProdTable] DEFAULT VALUES;
GO 1280
-- Check the fragmentation of the production table
SELECT
    [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats(
    DB_ID(N'DBMaint2008'), OBJECT_ID(N'ProdTable'), 1, NULL, 'LIMITED');
GO
The execution result is as follows:
Before the data file is shrunk, the logical fragmentation of the clustered index is approximately 0.4%. [Translator's note: my test result was 0.54%, which is close to 0.4%.]
Now let's drop the filler table, run the shrink data file command, and re-analyze the fragmentation of the clustered index.
-- Drop the filler table, creating 10MB of free space at the 'front' of the data file
DROP TABLE [FillerTable];
GO
-- Shrink the database
DBCC SHRINKDATABASE([DBMaint2008]);
GO
-- Check the index fragmentation again
SELECT
    [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats(
    DB_ID(N'DBMaint2008'), OBJECT_ID(N'ProdTable'), 1, NULL, 'LIMITED');
GO
Here is my execution result (compare it with the one in the original article):
Original article:
Wow! After the shrink, the logical fragmentation is almost 100%. The shrink operation *completely* fragmented the index, removing any chance of efficient range scans on it by ensuring that all range-scan readahead I/Os will be single-page I/Os.
Translation:
Wow, this is terrible! After the data file shrink, the logical fragmentation of the index is almost 100%. The shrink operation completely fragmented the index, removing any chance of efficient range scans on it, because all range-scan read-ahead I/O operations will now be single-page I/Os.
Why does this happen? A data file shrink operation works on a single file at a time, and uses the GAM bitmaps to find the highest page allocated in the file. It then moves that page as far toward the front of the file as it can, and repeats, over and over. In the example above, this completely reversed the order of the clustered index, taking it from perfectly defragmented to perfectly fragmented.
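[Translator's note: if you want to see this reversal at the page level, the undocumented DBCC IND command lists each page of an index along with its next-page and previous-page pointers. Treat this as a diagnostic sketch on the demo database above, not a supported API.]
-- List the pages of the clustered index (index_id = 1) on ProdTable;
-- after the shrink, the next-page pointers run 'backwards' through the file
DBCC IND (N'DBMaint2008', N'ProdTable', 1);
GO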
The same code underlies DBCC SHRINKFILE, DBCC SHRINKDATABASE, and auto-shrink; they are all equally bad. As well as introducing index fragmentation, a data file shrink generates a great deal of I/O, consumes a lot of CPU, and generates *loads* of transaction log, because everything it does is fully logged.
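[Translator's note: you can roughly see how much transaction log a shrink generates with the undocumented fn_dblog function. This is a sketch for the test database only; do not run it on a production system.]
USE [DBMaint2008];
GO
CHECKPOINT; -- lets the engine truncate the inactive log here, so the count below starts small
SELECT COUNT(*) AS [log_records_before] FROM fn_dblog(NULL, NULL);
GO
DBCC SHRINKDATABASE ([DBMaint2008]);
GO
SELECT COUNT(*) AS [log_records_after] FROM fn_dblog(NULL, NULL);
GO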
Data file shrink should never be part of regular maintenance, and you should never enable the auto-shrink option. I tried to have it removed from the product for SQL 2005 and SQL 2008; the only reason it still exists is backwards compatibility. Do not fall into the trap of creating a maintenance plan that rebuilds all indexes and then tries to reclaim the space the rebuilds required by shrinking the data files. That is a zero-sum game in which all you do is generate a large amount of transaction log for no real gain in performance.
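[Translator's note: spelled out in T-SQL against the demo objects above, that trap looks like this. Do NOT put this pattern in a maintenance plan.]
ALTER INDEX ALL ON [ProdTable] REBUILD; -- grows the data file to build the new index copy
GO
DBCC SHRINKDATABASE ([DBMaint2008]);    -- reclaims that space but re-fragments every index
GO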
So what if you do need to run a shrink? For instance, if you have deleted a large proportion of a very large database and the database is unlikely to grow, or you need to empty a data file before removing it?
Translation:
The method I recommend is as follows:
Create a new filegroup.
Move all affected tables and indexes into the new filegroup using the CREATE INDEX ... WITH (DROP_EXISTING = ON) ON syntax, which moves the tables and removes their fragmentation at the same time.
Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it is the primary filegroup).
Basically, you need to provision some more space before you can shrink the old files, but it is a much cleaner mechanism (see the T-SQL sketch after the original text below).
Original article:
The method I like to recommend is as follows:
Create a new filegroup
Move all affected tables and indexes into the new filegroup using the CREATE INDEX ... WITH (DROP_EXISTING = ON) ON syntax, to move the tables and remove fragmentation from them at the same time
Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it's the primary filegroup)
Basically you need to provision some more space before you can shrink the old files, but it's a much cleaner mechanic.
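[Translator's note: here is a sketch of this approach on the demo database above; the file path and sizes are placeholder assumptions, so adjust them for your system.]
USE [master];
GO
ALTER DATABASE [DBMaint2008] ADD FILEGROUP [NewFG];
ALTER DATABASE [DBMaint2008] ADD FILE (
    NAME = N'DBMaint2008_NewFG',
    FILENAME = N'C:\SQLData\DBMaint2008_NewFG.ndf',
    SIZE = 20MB, FILEGROWTH = 10MB) TO FILEGROUP [NewFG];
GO
USE [DBMaint2008];
GO
-- Rebuild the clustered index onto the new filegroup: this moves the table
-- and removes its fragmentation in a single operation
CREATE CLUSTERED INDEX [prod_cl] ON [ProdTable] ([c1])
WITH (DROP_EXISTING = ON) ON [NewFG];
GO
-- Once the old filegroup is empty it can be dropped
-- (the PRIMARY filegroup cannot be dropped, only shrunk down)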
If you absolutely have no choice and must run a data file shrink, be aware that it will cause index fragmentation, and you should take steps afterwards to remove the fragmentation if it will cause performance problems. The only way to remove index fragmentation without causing the data files to grow again is to use DBCC INDEXDEFRAG or ALTER INDEX ... REORGANIZE. These commands require only a single extra 8KB page, instead of building a whole new index as an index rebuild operation does.
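[Translator's note: on the demo database above, removing the shrink-induced fragmentation without regrowing the file looks like this.]
USE [DBMaint2008];
GO
-- REORGANIZE compacts and reorders leaf pages in place, needing only one
-- spare 8KB page rather than space for a whole new index
ALTER INDEX [prod_cl] ON [ProdTable] REORGANIZE;
GO
-- Verify that the fragmentation has dropped
SELECT
    [avg_fragmentation_in_percent]
FROM sys.dm_db_index_physical_stats(
    DB_ID(N'DBMaint2008'), OBJECT_ID(N'ProdTable'), 1, NULL, 'LIMITED');
GO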
Bottom line: avoid shrinking data files at all costs!
So, if you are still running a regular job to shrink your data files, or still have the auto-shrink option enabled on your databases, fix that mistake now!
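[Translator's note: checking for and turning off auto-shrink is straightforward; the database name below is a placeholder.]
-- Find databases with auto-shrink enabled
SELECT name, is_auto_shrink_on FROM sys.databases WHERE is_auto_shrink_on = 1;
GO
-- Turn it off
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
GO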
Please support the original author, and I hope everyone can also support my hard translation work by including a link to Xiaoxiang's blog when reposting.