http://soft.chinabyte.com/database/215/12374715.shtml
Share a SQL Server script
Date: 2014-07-17 16:21:00 Source: Forum translation.
Keywords: database
We often need to know how much data each table in a database holds and how much space each row occupies.
Here is a script I'd like to share:
    CREATE TABLE #tablespaceinfo (
        nameinfo   VARCHAR(500),
        rowsinfo   BIGINT,
        reserved   VARCHAR(50),
        datainfo   VARCHAR(50),
        index_size VARCHAR(50),
        unused     VARCHAR(50)
    )

    DECLARE @tablename VARCHAR(255);

    DECLARE info_cursor CURSOR FOR
        SELECT '[' + [name] + ']' FROM sys.tables WHERE type = 'U';

    OPEN info_cursor
    FETCH NEXT FROM info_cursor INTO @tablename

    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT INTO #tablespaceinfo EXEC sp_spaceused @tablename
        FETCH NEXT FROM info_cursor INTO @tablename
    END

    CLOSE info_cursor
    DEALLOCATE info_cursor

    -- Create temporary table
    CREATE TABLE [#tmptb] (
        TableName   VARCHAR(500),
        Datainfo    BIGINT,
        Rowsinfo    BIGINT,
        Spaceperrow AS (CASE Rowsinfo WHEN 0 THEN 0 ELSE Datainfo / Rowsinfo END)
    )

    -- Insert data into the temp table
    INSERT INTO [#tmptb] ([TableName], [Datainfo], [Rowsinfo])
    SELECT [nameinfo],
           CAST(REPLACE([datainfo], ' KB', '') AS BIGINT) AS 'Datainfo',
           [rowsinfo]
    FROM #tablespaceinfo
    ORDER BY CAST(REPLACE(reserved, ' KB', '') AS INT) DESC

    -- Summary record
    SELECT [tbspinfo].*,
           [tmptb].[Spaceperrow] AS 'approximate space per row (KB)'
    FROM [#tablespaceinfo] AS tbspinfo,
         [#tmptb] AS tmptb
    WHERE [tbspinfo].[nameinfo] = [tmptb].[TableName]
    ORDER BY CAST(REPLACE([tbspinfo].[reserved], ' KB', '') AS INT) DESC

    DROP TABLE [#tablespaceinfo]
    DROP TABLE [#tmptb]
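The REPLACE/CAST step in the script, and the per-row division, can be mirrored outside SQL Server. Below is a minimal Python sketch (the helper names are my own, not from the original) that converts sp_spaceused-style size strings like '81920 KB' to numbers and computes per-row space:

```python
def kb_value(size_str: str) -> int:
    """Convert an sp_spaceused-style size string like '1234 KB' to an integer KB count."""
    return int(size_str.replace(" KB", "").strip())

def space_per_row_kb(data_size: str, rows: int) -> float:
    """Per-row space in KB: data size divided by row count (0 for an empty table)."""
    return 0.0 if rows == 0 else kb_value(data_size) / rows
```

For example, a table whose data column reads '81920 KB' with 10,000 rows averages about 8.192 KB per row, the same figure the script's Spaceperrow column would show.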
Note: before running the script, USE the database whose tables you want to measure, or the statistics will come from the wrong database!

Problems encountered at work

In my day-to-day work, for roughly 90 out of every 100 questions this script is the first thing I reach for. Here are some of the problems I have run into myself.

Problem one: a programmer reported that a database query was slow; after 5 minutes it still had not returned. I first used this script to see how many records the table had: roughly 10 million rows. Querying locally in SSMS, the data indeed took about 4 minutes to come back. The execution plan showed the query could use an index, and the server was not under much load, so the data volume alone did not explain it. The query was expected to return only 500 rows, and the business table was partitioned, so by rights it should not have been this slow. Then I looked at the output of the shared script and realized: result size = per-row size x number of rows returned. The query was producing about 500 MB of data, which then had to be transferred to the client; no wonder it was slow. Why was each row so large? The table had several wide columns, for example a binary column and an nvarchar(MAX) column, and the queried time range was wide. I immediately called the programmer to change the query. Since it was an Entity Framework application I was not sure of the exact change, but the main points were: do not select columns that are not needed, and shrink the time range.

Problem two: other tasks also require knowing the per-row size, such as purging a table's history. QA asked to keep only data from 2013 onward, so we needed to work out how many GB the retained data, or the pre-2013 data, occupied, and compare that against the free disk space on the server to judge whether we were deleting too much or too little.
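The arithmetic behind these two problems is simple multiplication. A hedged Python sketch with made-up numbers (the function names are illustrative, not part of the original script):

```python
def result_set_mb(rows_returned: int, per_row_kb: float) -> float:
    """Approximate size of a query's result set in MB (problem one)."""
    return rows_returned * per_row_kb / 1024

def retained_data_gb(rows_retained: int, per_row_kb: float) -> float:
    """Approximate disk space of the rows to keep (or delete), in GB (problem two)."""
    return rows_retained * per_row_kb / (1024 * 1024)
```

With 500 rows at roughly 1024 KB (1 MB) each, result_set_mb(500, 1024.0) gives 500 MB, matching the surprisingly large result set described above.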
The process is: find the number of records before 2013 -> get the table's total record count -> get the table's total size -> work out the per-row size by hand -> multiply by the pre-2013 record count. Without a per-row-size column in the script output, you would have to do that division manually every time; wouldn't that be slower?

Problem three: when loading data, if you want to know how many rows have arrived so far, just run this script; it is essentially non-blocking and returns results quickly.

How the per-row size is calculated

Method one, used by the script, divides the data size by the record count:

    CASE Rowsinfo WHEN 0 THEN 0 ELSE Datainfo / Rowsinfo END

Method two averages the actual byte length of each column:

    SELECT AVG(DATALENGTH(C0)) + AVG(DATALENGTH(C1)) + AVG(DATALENGTH(C2)) + AVG(DATALENGTH(C3))
    FROM [dbo].[TB106]

The difference between the two: method one is efficient. When a table has billions of records, method two's AVG(DATALENGTH(C0)) is very slow, because SQL Server has to measure the size of every column value; it may not finish even after 10 minutes. Method one has a drawback of its own, though: when the table has few records, the per-row figure is inaccurate, because Datainfo is reported in units of data pages. Even a table with a single record occupies one data page (8 KB), so 8 KB / 1 = 8 KB, and a single record certainly does not occupy 8 KB. When the record count is large, however, the estimate is accurate.
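Method one's small-table inaccuracy comes from page allocation: SQL Server hands out data space in whole 8 KB pages, so the reported data size is always a multiple of 8 KB regardless of how little of each page is used. A small Python sketch of that rounding effect (illustrative only):

```python
PAGE_KB = 8  # SQL Server data page size in KB

def method_one_estimate_kb(rows: int, pages_allocated: int) -> float:
    """Method one's per-row estimate: allocated data size divided by row count."""
    return 0.0 if rows == 0 else pages_allocated * PAGE_KB / rows
```

A table with a single tiny row still owns one full page, so the estimate is 8 KB per row; with 10,000 rows spread over 1,000 pages the page overhead averages out to 0.8 KB per row, which is close to the truth.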
Take a look at the result values for the TB106 table:
    SELECT AVG(DATALENGTH(C0)) + AVG(DATALENGTH(C1)) + AVG(DATALENGTH(C2)) + AVG(DATALENGTH(C3))
    FROM [dbo].[TB106]
You can see the result is fairly accurate. Note: neither method one nor method two includes the space occupied by indexes!
Summary

People often wonder: what is a DBA actually for? Here is an example. At work, programmers generally do not care how large the data they are about to query is; they just SELECT it and move on. It falls to the DBA to solve the resulting query problems. The average programmer thinks 500 rows is a tiny result and pays no attention to table design or to the data types of the columns. As the workload grows and development tasks get heavier, I think the DBA role only becomes more important o(∩_∩)o. If anything here is wrong, corrections are welcome o(∩_∩)o.

2014-7-7 script bug fix: because the accuracy of the per-row figure was a problem, I improved the script's precision:

    CREATE TABLE #tablespaceinfo (
        nameinfo   VARCHAR(500),
        rowsinfo   BIGINT,
        reserved   VARCHAR(50),
        datainfo   VARCHAR(50),
        index_size VARCHAR(50),
        unused     VARCHAR(50)
    )

    DECLARE @tablename VARCHAR(255);

    DECLARE info_cursor CURSOR FOR
        SELECT '[' + [name] + ']' FROM sys.tables WHERE type = 'U';

    OPEN info_cursor
    FETCH NEXT FROM info_cursor INTO @tablename

    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT INTO #tablespaceinfo EXEC sp_spaceused @tablename
        FETCH NEXT FROM info_cursor INTO @tablename
    END

    CLOSE info_cursor
    DEALLOCATE info_cursor

    -- Create temporary table
    CREATE TABLE [#tmptb] (
        TableName   VARCHAR(500),
        Datainfo    BIGINT,
        Rowsinfo    BIGINT,
        Spaceperrow AS (CASE Rowsinfo WHEN 0 THEN 0
                        ELSE CAST(Datainfo AS DECIMAL(18, 2)) / CAST(Rowsinfo AS DECIMAL(18, 2))
                        END) PERSISTED
    )

    -- Insert data into the temp table
    INSERT INTO [#tmptb] ([TableName], [Datainfo], [Rowsinfo])
    SELECT [nameinfo],
           CAST(REPLACE([datainfo], ' KB', '') AS BIGINT) AS 'Datainfo',
           [rowsinfo]
    FROM #tablespaceinfo
    ORDER BY CAST(REPLACE(reserved, ' KB', '') AS INT) DESC

    -- Summary record
    SELECT [tbspinfo].*,
           [tmptb].[Spaceperrow] AS 'approximate space per row (KB)'
    FROM [#tablespaceinfo] AS tbspinfo,
         [#tmptb] AS tmptb
    WHERE [tbspinfo].[nameinfo] = [tmptb].[TableName]
    ORDER BY CAST(REPLACE([tbspinfo].[reserved], ' KB', '') AS INT) DESC

    DROP TABLE [#tablespaceinfo]
    DROP TABLE [#tmptb]
The original source is "Bit network" (chinabyte); when reproducing, please keep the original link: http://soft.chinabyte.com/89/13018589.shtml