---- Why does dbcc shrinkfile not work? ------> TravyLee

-- Generate test data
if DB_ID('testdb') is not null
    drop database testdb
go
create database testdb;
go
use testdb
go
if OBJECT_ID('test') is not null
    drop table test
go
create table test(a int, b nvarchar(3900))
go
declare @i int
set @i = 1
while @i <= 1000
begin
    insert into test values(1, REPLICATE(N'a', 3900))
    insert into test values(2, REPLICATE(N'b', 3900))
    insert into test values(3, REPLICATE(N'c', 3900))
    insert into test values(4, REPLICATE(N'd', 3900))
    insert into test values(5, REPLICATE(N'e', 3900))
    insert into test values(6, REPLICATE(N'f', 3900))
    insert into test values(7, REPLICATE(N'g', 3900))
    insert into test values(8, REPLICATE(N'h', 3900))
    set @i = @i + 1
end
go
-- select * from test

-- Use the DBCC SHOWCONTIG command to view how this table is stored
dbcc showcontig('test')
-- result 1
/*
DBCC SHOWCONTIG scanning 'test' table...
Table: 'test' (2121058592); index ID: 0, database ID: 9
TABLE level scan performed.
- Pages Scanned................................: 8000
- Extents Scanned..............................: 1002
- Extent Switches..............................: 1001
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.80% [1000:1002]
- Extent Scan Fragmentation ...................: 0.20%
- Avg. Bytes Free per Page.....................: 279.0
- Avg. Page Density (full).....................: 96.55%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
*/

The output shows that the table's data occupies 8000 pages. Now delete seven of the eight rows inserted per loop iteration, which empties roughly seven of the eight pages in each extent:

delete test where a <> 5
go

-- Use the system stored procedure sp_spaceused to view the table's space usage
sp_spaceused test
go
/*
name   rows   reserved   data       index_size   unused
-----  -----  ---------  ---------  -----------  ---------
test   1000   64008 KB   32992 KB   8 KB         31008 KB
*/

-- Use the DBCC SHOWCONTIG command again to view the storage situation
dbcc showcontig(test)
-- result 2
/*
DBCC SHOWCONTIG scanning 'test' table...
Table: 'test' (2121058592); index ID: 0, database ID: 9
TABLE level scan performed.
- Pages Scanned................................: 4124
- Extents Scanned..............................: 1002
- Extent Switches..............................: 1001
- Avg. Pages per Extent........................: 4.1
- Scan Density [Best Count:Actual Count].......: 51.50% [516:1002]
- Extent Scan Fragmentation ...................: 0.20%
- Avg. Bytes Free per Page.....................: 6199.0
- Avg. Page Density (full).....................: 23.41%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
*/

Compare result 1 with result 2:

            Pages Scanned   Extents Scanned
result 1    8000            1002
result 2    4124            1002

Nearly half of the table's pages have been emptied, yet not a single extent has been released. Now try to shrink the file:

dbcc shrinkfile(1, 40)
/*
DbId   FileId   CurrentSize   MinimumSize   UsedPages   EstimatedPages
-----  -------  ------------  ------------  ----------  ---------------
9      1        8168          288           1160        1160
*/

From this result we can calculate how much space the data file still occupies:
-- (8168 * 8.0) / 1024 = 63.8125 MB
which is almost exactly the size of the 1000-odd extents still allocated to the table. This proves that DBCC SHRINKFILE did not actually shrink the database. How can we solve this problem?
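A side note: DBCC SHOWCONTIG is deprecated on SQL Server 2005 and later, where the same page-count and page-density figures can be read from the sys.dm_db_index_physical_stats DMV instead. A minimal sketch, assuming the testdb database and test table created by the script above:

use testdb
go
select  index_id,
        page_count,                        -- compare with "Pages Scanned"
        avg_page_space_used_in_percent,    -- compare with "Avg. Page Density (full)"
        avg_fragmentation_in_percent
from sys.dm_db_index_physical_stats(DB_ID('testdb'), OBJECT_ID('test'), null, null, 'DETAILED')
go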
If the table had a clustered index, we could rebuild that index to rewrite its pages compactly, but this table is a heap with no clustered index. So let's create one:

create clustered index test_a_idx on test(a)
go

-- Run DBCC SHOWCONTIG again to view the table's storage
dbcc showcontig(test)
/*
DBCC SHOWCONTIG scanning 'test' table...
Table: 'test' (2121058592); index ID: 1, database ID: 9
TABLE level scan performed.
- Pages Scanned................................: 1000
- Extents Scanned..............................: 125
- Extent Switches..............................: 124
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [125:125]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 273.0
- Avg. Page Density (full).....................: 96.63%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
*/

The output shows that after the clustered index is created, the data that used to live in the heap is rewritten into a new B-tree. The old pages are released, and the extents they occupied are released with them. Running DBCC SHRINKFILE again is now effective:

dbcc shrinkfile(1, 40)
/*
DbId   FileId   CurrentSize   MinimumSize   UsedPages   EstimatedPages
-----  -------  ------------  ------------  ----------  ---------------
9      1        5120          288           1168        1168
*/

So the poor SHRINKFILE result was caused by half-empty data pages scattered across extents that could not be released. For a table with a clustered index, the problem can be solved by rebuilding the index. Note, however, that if the table contains text or image data, SQL Server stores that data on separate pages; when the same problem occurs in the extents holding those pages, an index rebuild leaves them untouched, just as it leaves a heap untouched. The simple approach is to find all the problematic objects and rebuild them. You can use the DBCC EXTENTINFO command to dump the extent allocation information of a data file, and then compare, for each object, the theoretical number of extents it needs with the number it actually holds.
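The create-a-clustered-index detour is the SQL Server 2000-era workaround. On later releases the same result is available directly; the following is a minimal sketch of the more direct statements, assuming SQL Server 2008 or later for the heap rebuild and SQL Server 2005 or later for the ALTER INDEX forms:

use testdb
go
-- Rebuild the heap in place, no clustered index needed (SQL Server 2008+)
alter table test rebuild
go
-- Rebuild every index on the table in one statement (SQL Server 2005+)
alter index all on test rebuild
go
-- Compact the separately stored text/image (LOB) pages that a plain rebuild may leave behind
alter index all on test reorganize with (lob_compaction = on)
go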
If the actual count is much larger than the theoretical count, the object is carrying too many fragmented extents and is a candidate for rebuilding. Using the same data as before, let's demonstrate how to find the objects that need to be rebuilt:

if OBJECT_ID('test') is not null
    drop table test
go
create table test(a int, b nvarchar(3900))
go
declare @i int
set @i = 1
while @i <= 1000
begin
    insert into test values(1, REPLICATE(N'a', 3900))
    insert into test values(2, REPLICATE(N'b', 3900))
    insert into test values(3, REPLICATE(N'c', 3900))
    insert into test values(4, REPLICATE(N'd', 3900))
    insert into test values(5, REPLICATE(N'e', 3900))
    insert into test values(6, REPLICATE(N'f', 3900))
    insert into test values(7, REPLICATE(N'g', 3900))
    insert into test values(8, REPLICATE(N'h', 3900))
    set @i = @i + 1
end
go
delete from test where a <> 5
go

-- Create a table named extentinfo to hold the extent allocation information
if OBJECT_ID('extentinfo') is not null
    drop table extentinfo
go
create table extentinfo
(
    file_id          smallint,
    page_id          int,
    pg_alloc         int,
    ext_size         int,
    obj_id           int,
    index_id         int,
    partition_number int,
    partition_id     bigint,
    iam_chain_type   varchar(50),
    pfs_bytes        varbinary(10)
)
go

-- Wrap DBCC EXTENTINFO in a procedure so its output can be captured with INSERT ... EXEC
create proc import_extentinfo
as
    dbcc extentinfo('testdb')
go
insert extentinfo exec import_extentinfo
go

-- Compare each object's actual extent count with its theoretical minimum
select  file_id, obj_id, index_id, partition_id, ext_size,
        'actual_extent_count'   = COUNT(*),
        'actual_page_count'     = SUM(pg_alloc),
        'possible_extent_count' = CEILING(SUM(pg_alloc) * 1.0 / ext_size),
        'possible_extents/actual_extents' = (CEILING(SUM(pg_alloc) * 1.00 / ext_size) * 100.00) / COUNT(*)
from    extentinfo
group by file_id, obj_id, index_id, partition_id, ext_size
having  COUNT(*) - CEILING(SUM(pg_alloc) * 1.0 / ext_size) > 0
order by partition_id, obj_id, index_id, file_id
/*
file_id  obj_id      index_id  partition_id       ext_size  actual_extent_count  actual_page_count  possible_extent_count  possible_extents/actual_extents
-------  ----------  --------  -----------------  --------  -------------------  -----------------  ---------------------  -------------------------------
1        2137058649  0         72057594038976512  8         998                  4115               515                    51.603206412
*/

select object_name(2137058649) as TName
/*
TName
------
test
*/

This is how we find the tables whose space cannot be released; these are the objects we need to rebuild.
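Once the query has pointed at the offending heap, it can be rebuilt with the same clustered-index trick shown earlier and the file shrunk again. A minimal sketch, reusing the test_a_idx index name and the 40 MB shrink target from above:

-- Rebuild the heap by creating and then dropping a clustered index;
-- this rewrites the rows compactly and releases the empty extents
create clustered index test_a_idx on test(a)
go
drop index test.test_a_idx   -- SQL Server 2000 syntax; on 2005+ use: drop index test_a_idx on test
go
-- The shrink can now reclaim the freed extents
dbcc shrinkfile(1, 40)
go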