A database administrator sometimes needs to control the size of database files: shrinking a file, or emptying a data file so that it can be removed from the database. For this we use the DBCC SHRINKFILE command, whose syntax is:
DBCC SHRINKFILE ( { file_name | file_id } { [ , EMPTYFILE ] | [ [ , target_size ] [ , { NOTRUNCATE | TRUNCATEONLY } ] ] } ) [ WITH NO_INFOMSGS ]
A single DBCC SHRINKDATABASE run, by contrast, affects all files of the database at once (both data files and log files), does not let you specify a target size for each file, and may not produce the result you expect. The recommendation is to plan a desired target size for each file and then use DBCC SHRINKFILE to shrink each file individually to a size that suits it.
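For example, each file can be shrunk to its own planned target. A minimal sketch, assuming a database named TestDB with a data file TestDB_Data and a log file TestDB_Log (hypothetical logical file names) and example target sizes in MB:

USE TestDB;   -- hypothetical database name
GO
-- Shrink the data file to its own planned target of 2048 MB (example value).
DBCC SHRINKFILE (TestDB_Data, 2048);
GO
-- Shrink the log file to a separate planned target of 512 MB (example value).
DBCC SHRINKFILE (TestDB_Log, 512);
GO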
Some points to note:
1. First, understand how the space in the data files is currently being used.
The amount that can be shrunk cannot exceed the file's current free space. If you want to reduce the size of the database, you must first confirm that the data file actually has free space. If the space is still in use, you need to identify the objects that occupy large amounts of it (such as tables or indexes) and release the space first, for example by archiving historical data. A query sketch for checking file usage appears after this list.
2. The primary data file cannot be emptied; only secondary data files can be completely emptied.
3. If you want to empty a filegroup, you must first delete the objects (tables or indexes) assigned to that filegroup or move them to another filegroup; DBCC SHRINKFILE will not do this work for you.
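As promised above, here is a minimal sketch for checking how much of each file is actually in use, assuming SQL Server 2005 or later and that it is run inside the database being examined:

-- Space used and free space per file of the current database.
-- size and FILEPROPERTY(..., 'SpaceUsed') are both reported in 8 KB pages.
SELECT
    name                                             AS logical_file_name,
    size / 128.0                                     AS file_size_mb,
    FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS space_used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_space_mb
FROM sys.database_files;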
After the unwanted data and objects have been purged from the data file (or filegroup), the administrator can reduce or empty the specified file with DBCC SHRINKFILE: to reduce the file, supply target_size; to empty the file, specify EMPTYFILE. While SQL Server runs DBCC SHRINKFILE it scans the data file and locks the pages it is reading, so database performance is affected. However, the operation is not exclusive: while the shrink is running, other users can still read from and write to the database. You therefore do not need to schedule dedicated downtime; the shrink can generally be run during a normal maintenance window. You can also stop a DBCC SHRINKFILE operation at any point, and any work already completed is preserved, so if it does not finish within the planned window it can simply be stopped.
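As a sketch of the two modes, assuming file_id 3 is a secondary data file of the current database and TestDB_Data2 is its hypothetical logical name:

-- Reduce the file with file_id 3 to a target size of 1024 MB (example value).
DBCC SHRINKFILE (3, 1024);
GO
-- Or move all data out of the file so that it can be removed afterwards.
DBCC SHRINKFILE (3, EMPTYFILE);
GO
-- Once EMPTYFILE succeeds, the now-empty secondary file can be dropped.
ALTER DATABASE TestDB REMOVE FILE TestDB_Data2;
GO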
However, sometimes the data file clearly has free space, yet it cannot be shrunk. Why? This is usually because there are many empty pages in the data file, but they are scattered across the extents, so the file as a whole contains little space that is completely free.
It is important to note that DBCC SHRINKFILE works at the extent level. It moves used extents toward the front of the file and removes unused extents from the end of the file. It does not, however, remove empty pages within an extent, merge extents, or remove or merge free space inside pages. Therefore, if a database has many extents that use only one or two pages, the effect of DBCC SHRINKFILE will not be obvious.
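To put numbers on this (a back-of-the-envelope illustration, not from the original text): an extent is 8 pages of 8 KB, or 64 KB. If a table occupies 1,000 extents but each extent holds only one used page, the table pins down roughly 1,000 * 64 KB = 62.5 MB of the file while containing only about 1,000 * 8 KB = 7.8 MB of data, and DBCC SHRINKFILE cannot reclaim the difference because every one of those extents counts as in use.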
The following example shows the process:
We create a new table in which every row occupies a page. The table has no clustered index, so it is a heap. We insert 8,000 rows:
CREATE TABLE show_extent (a int, b nvarchar(3900))
GO
DECLARE @i int
SET @i = 1
WHILE @i <= 1000
BEGIN
    INSERT INTO show_extent VALUES (1, REPLICATE(N'a', 3900))
    INSERT INTO show_extent VALUES (2, REPLICATE(N'b', 3900))
    INSERT INTO show_extent VALUES (3, REPLICATE(N'c', 3900))
    INSERT INTO show_extent VALUES (4, REPLICATE(N'd', 3900))
    INSERT INTO show_extent VALUES (5, REPLICATE(N'e', 3900))
    INSERT INTO show_extent VALUES (6, REPLICATE(N'f', 3900))
    INSERT INTO show_extent VALUES (7, REPLICATE(N'g', 3900))
    INSERT INTO show_extent VALUES (8, REPLICATE(N'h', 3900))
    SET @i = @i + 1
END
DBCC SHOWCONTIG ('show_extent')
GO
You can see that the table has 1,003 extents averaging about 8 pages each, for a total of roughly 8,000 pages. Of course, there is some fragmentation within the extents, which is why there are 3 extra extents, about 24 redundant pages.
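As an aside, DBCC SHOWCONTIG is deprecated in newer versions of SQL Server; the same kind of information can be read from sys.dm_db_index_physical_stats. A sketch, assuming the table above lives in the current database:

-- Page counts, page fullness, and fragmentation for show_extent (index_id 0 = heap).
SELECT
    index_id,
    page_count,
    avg_page_space_used_in_percent,
    avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('show_extent'), NULL, NULL, 'DETAILED');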
Next we delete most of the data, keeping only the rows where a = 5, so that we can compare the space usage before and after:
DELETE FROM show_extent WHERE a <> 5
GO
EXEC sp_spaceused show_extent
GO
DBCC SHOWCONTIG ('show_extent')
GO
You can see that the page space is not released after the deletion; it only shrinks slightly.
The number of extents has not changed, while the number of pages has dropped by about half, so each extent now averages only about 4.1 pages. In this situation, shrinking the database file accomplishes little.
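As an aside, this is expected behavior for a heap: pages emptied by a DELETE on a heap are often not deallocated unless the delete takes a table-level lock. A hypothetical variant of the delete above (not part of the original example) that allows the emptied pages to be deallocated:

-- Deleting with a table lock lets SQL Server deallocate heap pages emptied by the delete.
DELETE FROM show_extent WITH (TABLOCK)
WHERE a <> 5;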
Now let's try to shrink the file:
DBCC SHRINKFILE (1, <target size>)
You can see that the number of pages has not decreased; in fact it appears slightly larger. The reason is that the space is counted in whole extents: 1,002 extents * 8 pages per extent gives about 8,016 pages.
You can see that very little space can be shrunk. A good solution is to reorganize the pages so that they are filled more densely, although that work affects performance while it runs.
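On SQL Server 2008 and later, a heap can also be compacted in place with ALTER TABLE ... REBUILD, which rewrites the pages densely without creating an index; a minimal sketch of that alternative (not used in the original walkthrough):

-- Rebuild the heap to compact its pages (SQL Server 2008+).
ALTER TABLE show_extent REBUILD;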
Of course, there is another solution: rearrange the pages by rebuilding an index. Since the table currently has no clustered index, we create one:
CREATE CLUSTERED INDEX show_i ON show_extent (a)
GO
DBCC SHOWCONTIG ('show_extent')
You can see that the table is now down to 1,000 pages in 125 extents. Now let's check how much space can be reclaimed.
You can now see the percentage of the file that can be shrunk, so let's run the shrink operation.
You can see that the file has been shrunk to 5,120, which shows that the shrink is now effective.
If you do not want to create a clustered index, you can copy the data out of the table, truncate the table, and then insert the data back in. This is more troublesome, of course; keeping a clustered index makes the table easier to manage.
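A minimal sketch of that copy-out approach, using a hypothetical staging table name:

-- Copy the remaining rows out, empty the heap, and load the rows back in.
SELECT * INTO show_extent_staging FROM show_extent;
TRUNCATE TABLE show_extent;
INSERT INTO show_extent (a, b)
SELECT a, b FROM show_extent_staging;
DROP TABLE show_extent_staging;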
The situation just described is what makes DBCC SHRINKFILE ineffective. On a table with a clustered index, you can resolve it by rebuilding the clustered index. However, if the extents contain data of types such as text or image, SQL Server stores that data on separate pages. If those storage pages suffer from the same problem, rebuilding the index will not touch them, just as it does not touch a heap.
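For such LOB pages on an indexed table, one option on SQL Server 2005 and later is ALTER INDEX ... REORGANIZE WITH (LOB_COMPACTION = ON), which compacts the separately stored text/image/LOB pages that a plain rebuild can leave behind. The demo table above has no LOB columns, so the statement below only illustrates the syntax:

-- Compact the LOB pages of all indexes on the table.
ALTER INDEX ALL ON show_extent REORGANIZE WITH (LOB_COMPACTION = ON);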
The way to handle such objects is to find the ones that might be problematic and rebuild them. You can use the DBCC EXTENTINFO command to output the allocation information for all the extents in a data file, and then calculate, for each object, the theoretical number of extents and the actual number. If the actual number is much larger than the theoretical number, the object is heavily fragmented and is a candidate for rebuilding.
Let's look at an example below:
IF EXISTS (SELECT name FROM sysobjects WHERE name = 'extentinfo' AND type = 'U')
    DROP TABLE extentinfo
GO
CREATE TABLE extentinfo (
    [file_id] smallint,
    page_id int,
    pg_alloc int,
    ext_size int,
    obj_id int,
    index_id int,
    partition_number int,
    partition_id bigint,
    iam_chain_type varchar(50),
    pfs_bytes varbinary(10)
)
GO
IF EXISTS (SELECT name FROM sysobjects WHERE name = 'import_extentinfo' AND type = 'P')
    DROP PROCEDURE import_extentinfo
GO
CREATE PROCEDURE import_extentinfo
AS
    DBCC EXTENTINFO ('TestDB')
GO
INSERT extentinfo EXEC import_extentinfo
GO
SELECT name AS table_name, [file_id], obj_id, index_id, partition_id, ext_size,
    'actual page count'     = SUM(pg_alloc),
    'actual extent count'   = COUNT(*),
    -- theoretical number of extents needed for all the pages of a single object
    'expected extent count' = CEILING(SUM(pg_alloc) * 1.0 / ext_size),
    'expected extents / actual extents' = (CEILING(SUM(pg_alloc) * 1.00 / ext_size) * 100.00) / COUNT(*)
FROM extentinfo
INNER JOIN sysobjects ON obj_id = id
GROUP BY [file_id], obj_id, index_id, partition_id, ext_size, name
HAVING COUNT(*) - CEILING(SUM(pg_alloc) * 1.0 / ext_size) > 0
ORDER BY partition_id, obj_id, index_id, [file_id]
Here we use the DBCC EXTENTINFO command to collect the details of the extents in the database and then compare the theoretical and actual extent counts; objects with a lot of fragmentation need to be rebuilt to clean it up.
The DBCC EXTENTINFO command queries the extent allocation of a database or of a single data object (primarily a table). Its syntax is as follows:
DBCC EXTENTINFO (dbname, tablename, indexid)
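For example, a call for the demo table from earlier might look like the line below. DBCC EXTENTINFO is undocumented, so treat the exact parameter values as an assumption; index_id 1 refers to the clustered index created above:

-- Extent allocation details for one table in TestDB.
DBCC EXTENTINFO ('TestDB', 'show_extent', 1);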
The columns returned by this command are described below:
< P class= "a" align= "center" > |
say |
file_id |
data file number for database |
< Span style= "Font-size:xx-small;" >page_id |
The page number of the first page in a region |
Field name |
Description |
Pg_alloc |
The number of pages allocated for the database m(1≤m≤8) |
Ext_size |
Size of the extents, in page units |
object_id |
The ID of the database object |
index_id |
Represents the type of the data object |
Partition_number |
Partition code |
Rows |
Approximate number of rows of data |
hobt_id |
Storage unit ID of the heap or B-tree where the data is stored |
Now let's run this against a database and look at the fragmentation:
You can see that the row indicated by the arrow should, in theory, need only one extent, but in practice two extents have been allocated to it. This is a case where you can consider rebuilding the object to defragment it and release the wasted space.
Notes on the book SQL Server Enterprise Platform Management Practices: why shrinking a SQL Server database may not work well.