SQL Server: A Script for Automatic Cyclic Archiving of Partition Data


Overview

In many business scenarios we need to partition tables with large row counts and archive older data to maintain performance. When there are many partitioned tables, archiving each one manually is labor-intensive and mistakes are easy to make. Below I share a script I wrote to archive partition data automatically. The idea is that the partitioned table and the archive table share the same partition scheme, and the existing file groups are reused cyclically. Let's look at the details.
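Conceptually, one archiving cycle consists of three statements. The following is a hand-written sketch of what the stored procedure in section II automates, using the example objects (News, NewsArchived, Ps_Range, Pt_Range) defined below; the literal partition number and boundary value here are illustrative:

```sql
-- 1. Switch the oldest partition's data into the archive table
--    (a metadata-only operation when both tables share the same partition scheme)
ALTER TABLE News SWITCH PARTITION 1 TO NewsArchived PARTITION 1;

-- 2. Point the partition scheme at the file group that was just freed
ALTER PARTITION SCHEME Ps_Range NEXT USED Group1;

-- 3. Add a new boundary so a fresh partition is created on that file group
ALTER PARTITION FUNCTION Pt_Range() SPLIT RANGE (4000000);
```

The procedure's job is to compute the right partition number, file group name, and boundary value at run time instead of hard-coding them.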

1. Create Test Data

```sql
---- 01 Create the file groups and data files
USE [master]
GO
ALTER DATABASE [chenmh] ADD FILEGROUP [Group1]
GO
ALTER DATABASE [chenmh] ADD FILEGROUP [Group2]
GO
ALTER DATABASE [chenmh] ADD FILEGROUP [Group3]
GO
ALTER DATABASE [chenmh] ADD FILEGROUP [Group4]
GO
ALTER DATABASE [chenmh] ADD FILE
    (NAME = N'datafile1', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\datafile1.ndf',
     SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [Group1]
GO
ALTER DATABASE [chenmh] ADD FILE
    (NAME = N'datafile2', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\datafile2.ndf',
     SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [Group2]
GO
ALTER DATABASE [chenmh] ADD FILE
    (NAME = N'datafile3', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\datafile3.ndf',
     SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [Group3]
GO
ALTER DATABASE [chenmh] ADD FILE
    (NAME = N'datafile4', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\datafile4.ndf',
     SIZE = 8192KB, FILEGROWTH = 65536KB) TO FILEGROUP [Group4]
GO

-- 02 Create the partition function
USE [chenmh]
GO
CREATE PARTITION FUNCTION [Pt_Range](BIGINT)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000)
GO

-- 03 Create the partition scheme. The number of file groups in the scheme
--    is the number of boundary values in the partition function + 1.
CREATE PARTITION SCHEME Ps_Range
AS PARTITION Pt_Range
TO (Group1, Group2, Group3, Group4);

-- 04 Create the tables. The data type of the partition column must match
--    the column type specified by the partition function.
CREATE TABLE [dbo].[News](
    [id] [bigint] NOT NULL,
    [status] [int] NULL,
    CONSTRAINT [PK_News] PRIMARY KEY CLUSTERED ([id] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
) ON [Ps_Range]([id])

CREATE TABLE [dbo].[NewsArchived](
    [id] [bigint] NOT NULL,
    [status] [int] NULL,
    CONSTRAINT [PK_NewsArchived] PRIMARY KEY CLUSTERED ([id] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
) ON [Ps_Range]([id])

---- Insert test data
DECLARE @id INT
SET @id = 1
WHILE @id < 5001000
BEGIN
    INSERT INTO News VALUES (@id, @id % 2)
    SET @id = @id + 1
END
```

We can see that there are currently 4 partitions, each covering a range of 1 million ids. I intentionally inserted more than 2 million extra rows into partition 4 to verify the automatic archiving of partitions.
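To confirm this distribution, a query like the following (built on the same catalog views the archiving procedure uses) lists the row count and file group of each partition of the News table; this query is my own illustration and is not part of the original script:

```sql
-- Show row count and file group per partition of dbo.News
SELECT p.partition_number,
       fg.name AS filegroup_name,
       p.rows
FROM sys.partitions p
INNER JOIN sys.indexes i
        ON p.object_id = i.object_id AND p.index_id = i.index_id
INNER JOIN sys.partition_schemes ps
        ON i.data_space_id = ps.data_space_id
INNER JOIN sys.destination_data_spaces dds
        ON ps.data_space_id = dds.partition_scheme_id
       AND dds.destination_id = p.partition_number
INNER JOIN sys.data_spaces fg
        ON dds.data_space_id = fg.data_space_id
WHERE p.object_id = OBJECT_ID('dbo.News')
  AND i.index_id IN (0, 1)   -- heap or clustered index only
ORDER BY p.partition_number;
```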

II. The Automatic Partition Archiving Script

```sql
CREATE PROCEDURE Pro_Partition_AutoArchiveData
(
    @PartitionTable VARCHAR(300),
    @SwitchTable    VARCHAR(300)
)
AS
BEGIN
    DECLARE @FunName VARCHAR(100), @SchemaName VARCHAR(100), @MaxPartitionValue SQL_VARIANT

    --- Find the partition function, partition scheme, and maximum partition
    --- boundary value of the partitioned table
    SELECT DISTINCT
           @FunName = MAX(pf.name),
           @SchemaName = MAX(ps.name),
           @MaxPartitionValue = MAX(ISNULL(prv.value, 0))
    FROM sys.partitions p
    INNER JOIN sys.indexes i ON p.object_id = i.object_id AND p.index_id = i.index_id
    INNER JOIN sys.partition_schemes ps ON i.data_space_id = ps.data_space_id
    INNER JOIN sys.destination_data_spaces dds ON ps.data_space_id = dds.partition_scheme_id
                                              AND dds.destination_id = p.partition_number
    INNER JOIN sys.data_spaces ds ON dds.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_functions pf ON ps.function_id = pf.function_id
    LEFT JOIN sys.partition_range_values prv ON pf.function_id = prv.function_id
                                            AND prv.boundary_id = p.partition_number - pf.boundary_value_on_right
    LEFT JOIN sys.partition_parameters pp ON prv.function_id = pp.function_id
                                         AND prv.parameter_id = pp.parameter_id
    LEFT JOIN sys.types t ON pp.system_type_id = t.system_type_id
                         AND pp.user_type_id = t.user_type_id
    WHERE OBJECT_NAME(p.object_id) = @PartitionTable

    DECLARE @MaxId BIGINT, @MinId BIGINT, @SQL NVARCHAR(MAX),
            @GroupName VARCHAR(100), @MinPartitionNumber INT

    SET @SQL = N'SELECT @MaxId = MAX(id), @MinId = MIN(id) FROM ' + @PartitionTable
    EXEC sp_executesql @SQL, N'@MaxId BIGINT OUT, @MinId BIGINT OUT', @MaxId OUT, @MinId OUT

    SELECT @FunName AS FunName, @SchemaName AS SchemaName,
           @MaxPartitionValue AS MaxPartitionValue, @MaxId AS MaxId, @MinId AS MinId

    --- Determine whether the largest id of the table has reached the last partition
    IF @MaxId >= @MaxPartitionValue
    BEGIN
        ---- Archive the partition data: find the partition that the smallest id belongs to
        SET @SQL = N'SELECT @MinPartitionNumber = $PARTITION.' + @FunName
                 + N'(' + CONVERT(VARCHAR(30), @MinId) + N')';
        EXEC sp_executesql @SQL, N'@MinPartitionNumber INT OUT', @MinPartitionNumber OUT

        SET @SQL = N'ALTER TABLE ' + @PartitionTable
                 + N' SWITCH PARTITION ' + CONVERT(VARCHAR(10), @MinPartitionNumber)
                 + N' TO ' + @SwitchTable
                 + N' PARTITION ' + CONVERT(VARCHAR(10), @MinPartitionNumber);
        -- PRINT @SQL
        EXEC (@SQL)

        --- Modify the partition scheme: find the file group of the partition that
        --- was just emptied, so the new partition can reuse it
        SELECT DISTINCT @GroupName = ds.name
        FROM sys.partitions p
        INNER JOIN sys.indexes i ON p.object_id = i.object_id AND p.index_id = i.index_id
        INNER JOIN sys.partition_schemes ps ON i.data_space_id = ps.data_space_id
        INNER JOIN sys.destination_data_spaces dds ON ps.data_space_id = dds.partition_scheme_id
                                                  AND dds.destination_id = p.partition_number
        INNER JOIN sys.data_spaces ds ON dds.data_space_id = ds.data_space_id
        INNER JOIN sys.partition_functions pf ON ps.function_id = pf.function_id
        WHERE pf.name = @FunName AND ps.name = @SchemaName
          AND p.partition_number = @MinPartitionNumber

        SET @SQL = N'ALTER PARTITION SCHEME ' + @SchemaName + N' NEXT USED ' + @GroupName
        -- PRINT @SQL
        EXEC (@SQL)

        --- Modify the partition function: add a new boundary 1,000,000 above the current
        --- maximum (the increment must match the range of the existing partition function)
        SET @MaxPartitionValue = CONVERT(BIGINT, @MaxPartitionValue) + 1000000
        SET @SQL = N'ALTER PARTITION FUNCTION ' + @FunName + N'()'
                 + N' SPLIT RANGE (' + CONVERT(VARCHAR(30), @MaxPartitionValue) + N')'
        -- PRINT @SQL
        EXEC (@SQL)
    END
END
```

III. Automatically Archiving Partition Data

1. First test

```sql
EXEC Pro_Partition_AutoArchiveData 'News', 'NewsArchived';
```

Note: each call archives the data of the single smallest (oldest) partition.

The data of partition 1 of the News table is archived into the NewsArchived table, and partition 5 is created. Partition 5 reuses the file group freed by archived partition 1, which achieves the effect of recycling file groups.
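To see this result, a simple check (my own illustration, not part of the original script) is to compare per-partition row counts of the two tables; after the first run, partition 1 of News should be empty and partition 1 of NewsArchived should hold the archived rows:

```sql
-- Compare per-partition row counts of News and NewsArchived
SELECT OBJECT_NAME(p.object_id) AS table_name,
       p.partition_number,
       p.rows
FROM sys.partitions p
WHERE p.object_id IN (OBJECT_ID('dbo.News'), OBJECT_ID('dbo.NewsArchived'))
  AND p.index_id IN (0, 1)   -- heap or clustered index only
ORDER BY table_name, p.partition_number;
```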

2. Call the archive partition script again.

When the maximum id in the partition table is smaller than the maximum partition boundary value, the automatic archiving script does nothing; therefore, in the current test table, data can be archived only up through partition 3.

3. After running for a period of time, the archived data cycles through the file groups like this:

Group1 → Group2 → Group3 → Group4 → Group1 → ......

IV. Script Precautions

1. @PartitionTable and @SwitchTable must use the same partition scheme and partition function, and their table structures must be completely identical. Otherwise, the partition scheme and function of @SwitchTable would have to be modified separately.

2. The partition column of the archive table must be an integer type (BIGINT in this example) whose values are ever-increasing.

3. The partition archiving job should be scheduled to run after the backup job.

4. We recommend using a RANGE RIGHT partition function. With RANGE LEFT, the last file group is not rotated cyclically and stays stuck at the end of the partition list, for example: Group1, Group2, Group3, Group3, Group1, Group2, Group3, Group4, whereas the expected sequence is Group1, Group2, Group3, Group4, Group1, Group2, Group3, Group4, Group1.
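For reference, the two boundary styles differ only in which side of a boundary value a row falls on. The definitions below are illustrative (only Pt_Range from this article exists in the test database; the `_Left` name is hypothetical):

```sql
-- RANGE RIGHT (used in this article): each boundary value belongs to the
-- partition on its right, e.g. id = 1000000 lands in partition 2
CREATE PARTITION FUNCTION Pt_Range_Right (BIGINT)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000);

-- RANGE LEFT: each boundary value belongs to the partition on its left,
-- e.g. id = 1000000 lands in partition 1
CREATE PARTITION FUNCTION Pt_Range_Left (BIGINT)
AS RANGE LEFT FOR VALUES (1000000, 2000000, 3000000);
```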

5. Note that the partition range here is 1 million, consistent with the partition function. If your range value differs, you need to modify the increment in the "modify the partition function" step at the end of the procedure.
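Concretely, the increment is encoded in a single line of the procedure; for a hypothetical 500,000-row partition range it would become:

```sql
-- Hypothetical adjustment: partitions cover 500,000 ids instead of 1,000,000
SET @MaxPartitionValue = CONVERT(BIGINT, @MaxPartitionValue) + 500000
```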

Summary

If you want to copy and use this automatic archiving script, you should still fully understand each piece of code and adapt it to your own business; after all, data is priceless. Finally, you only need to create a job to run the procedure on a schedule, and running it repeatedly has no adverse effect.

Well, that is all the content of this article. I hope it offers some reference and learning value for your study or work. If you have any questions, please leave a comment. Thank you for your support.
