I. Purpose
Previously, in the blog post SQL Server database minimal downtime migration solution, we used the full backup + differential backup feature to complete a database transfer. That method is clearly impractical when facing more than 700 GB of data, so in this article we describe how we migrated a 700 GB database to a new server.
II. Analysis and Design Ideas
(1) Environment Description
Our database runs on SQL Server 2005, deployed on a Windows Server 2003 x86 operating system. A single table occupies most of the space in this database.
Given this situation, the database is under a lot of pressure, so we planned to find another machine in the same cluster, move the data in this database over, and optimize the database by tuning the new server's settings.
(2) Data Analysis
When we take over a database, we should check its relevant information and make a preliminary evaluation once we understand its conditions and parameters. For example, for this 700 GB database we need to know how much space the tables occupy and how much space the indexes occupy (there is a SQL statement that shows this directly; a sketch follows), and whether the big table is partitioned.
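A minimal sketch using the system procedure sp_spaceused; the table name dbo.ClassifyResult is borrowed from the transfer script below, so substitute your own:

-- Space used by the whole database (reserved, data, index, unused)
EXEC sp_spaceused
-- Space used by a single table and its indexes
EXEC sp_spaceused N'dbo.ClassifyResult'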
Once these parameters are known, check the server's hardware and environment information, such as memory, hard disks, RAID policy, operating system, database version, memory pressure, CPU pressure, and so on. Understanding this information is an important factor in deciding whether to migrate to a new server.
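Some of this can be read from the instance itself. A sketch; the column names below are from sys.dm_os_sys_info as it exists in SQL Server 2005 (physical_memory_in_bytes was renamed in later versions):

SELECT @@VERSION  -- database version and edition
SELECT cpu_count, hyperthread_ratio, physical_memory_in_bytes
FROM sys.dm_os_sys_info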
If we decide to migrate, we also need to calculate how many records are appropriate to move at a time, so that the production database can keep accepting data without being affected. The relocation Job should use a fairly large interval between runs. (If the previous run has not finished, the next run will not start even when its scheduled time arrives.)
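As a rough worked example (the 30-minute interval is an assumption, not from the original): the script below uses 50,000-row segments over roughly 50,000,000 rows, that is, about 1,000 segments; at one Job run per segment every 30 minutes, the whole transfer takes on the order of three weeks, so segment size and schedule have to be balanced against how long you can afford to run the migration.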
(3) Design Ideas
As reflected in the scripts below, the idea is: create an auxiliary table (Temp_MoveManage) whose rows mark the Id segment boundaries of the big table; a stored procedure picks the two smallest unfinished boundaries, copies the source rows whose Id falls in that range into the new table, and marks the segment as done; a SQL Server Agent Job calls the procedure on a schedule until every segment has been processed.
III. Reference Scripts
The following lists some key SQL statements for your reference.
-- 1.1 Create the auxiliary table
CREATE TABLE [dbo].[Temp_MoveManage](
    [Id] [int] NOT NULL,
    [IsDone] [bit] NOT NULL,
    [UpdateTime] [datetime] NULL,
    CONSTRAINT [PK_Temp_MoveManage] PRIMARY KEY CLUSTERED
    (
        [Id] ASC
    ) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
-- 1.2 Insert the segment boundary markers (one row every 50,000 Ids)
DECLARE @i INT
SET @i = 1
WHILE @i < 50000000
BEGIN
    -- Column list is required because UpdateTime is not supplied
    INSERT INTO dbo.Temp_MoveManage (Id, IsDone) VALUES (@i, 0)
    SET @i = @i + 50000
END
-- 1.3 Test
SELECT * FROM Temp_MoveManage
-- 2. Linked server (omitted in the original; a hedged sketch follows)
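The original omits this step. A minimal sketch, assuming a hypothetical target instance named NEWSERVER and a hypothetical SQL login; adjust names and security to your environment:

EXEC sp_addlinkedserver
    @server = N'NEWSERVER',          -- hypothetical name for the linked server
    @srvproduct = N'',
    @provider = N'SQLNCLI',          -- SQL Native Client provider shipped with SQL Server 2005
    @datasrc = N'NEWSERVER'          -- hypothetical network name of the target instance
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'NEWSERVER',
    @useself = N'FALSE',
    @locallogin = NULL,
    @rmtuser = N'move_user',         -- hypothetical remote login
    @rmtpassword = N'********'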
-- 3. Stored Procedure
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:      <Viajar>
-- Create date: <2011.04.14>
-- Description: <transfer data>
-- =============================================
ALTER PROCEDURE [dbo].[sp_GetMoveData]
AS
BEGIN
    DECLARE @Id1 INT, @Id2 INT
    DECLARE @MaxId INT  -- maximum Id of the source table
    SET @Id1 = 0
    SET @Id2 = 0
    -- The two smallest unfinished segment boundaries
    SELECT TOP 1 @Id1 = Id FROM Temp_MoveManage WHERE IsDone = 0 ORDER BY Id
    SELECT TOP 1 @Id2 = Id FROM Temp_MoveManage WHERE IsDone = 0 AND Id > @Id1 ORDER BY Id
    SELECT @MaxId = MAX(Id) FROM [dbo].[ClassifyResult]
    -- Only move the segment once the source table has grown past its upper boundary
    IF (@Id1 != 0 AND @Id2 != 0 AND @MaxId >= @Id2)
    BEGIN
        DECLARE @SQL VARCHAR(MAX)
        SET @SQL = '
        SET IDENTITY_INSERT [ClassifyResult_T] ON
        INSERT INTO [dbo].[ClassifyResult_T] (
              [Id]
            , [ClassId]
            , [ArchiveId])
        SELECT
              [Id]
            , [ClassId]
            , [ArchiveId]
        FROM [dbo].[ClassifyResult]
        WHERE Id >= ' + CONVERT(VARCHAR(100), @Id1) + ' AND Id < ' + CONVERT(VARCHAR(100), @Id2) + '
        ORDER BY Id
        SET IDENTITY_INSERT [ClassifyResult_T] OFF'
        EXEC (@SQL)
        UPDATE Temp_MoveManage SET IsDone = 1 WHERE Id = @Id1
    END
END
-- 4. Job (omitted in the original; a hedged sketch follows)
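The Job was also omitted. A minimal sketch of a SQL Server Agent Job that runs the procedure on a schedule; the job name, database name, and 30-minute interval are assumptions, not from the original:

USE msdb
GO
EXEC sp_add_job @job_name = N'MoveClassifyResult'            -- hypothetical job name
EXEC sp_add_jobstep
    @job_name = N'MoveClassifyResult',
    @step_name = N'run sp_GetMoveData',
    @subsystem = N'TSQL',
    @database_name = N'YourDatabase',                         -- hypothetical database name
    @command = N'EXEC dbo.sp_GetMoveData'
EXEC sp_add_jobschedule
    @job_name = N'MoveClassifyResult',
    @name = N'every 30 minutes',
    @freq_type = 4,               -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,        -- repeat in units of minutes
    @freq_subday_interval = 30
EXEC sp_add_jobserver @job_name = N'MoveClassifyResult'       -- target the local server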
IV. Defects
Defect 1: The segment belonging to the last marker row generated in 1.1 is never executed. Because the range condition uses Id < @Id2 with no =, the last stretch of the source table is never transferred to the partitioned table (a one-off cleanup sketch follows this list).
Defect 2: If records keep being added to the source table, the newest data cannot be transferred to the partitioned table. The original idea was to keep appending marker rows with larger Ids to the relocation auxiliary table so that the latest data could eventually be processed.
Defect 3: The idea above is also flawed. For example, suppose the current segment markers are 100 and 200. If for some period the new data only has Id > 100, the IsDone flags of both markers could be set to 1 before all rows in the segment exist, losing a lot of data. This too can be solved: only execute the segment for markers 100 and 200 once the Id of the newest data exceeds 200 (MAX(Id) can be checked inside the stored procedure, as the script above does).
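For Defect 1, a possible one-off cleanup after every other segment is done (a sketch; it assumes writes to the source table have already stopped):

SET IDENTITY_INSERT [ClassifyResult_T] ON
INSERT INTO [dbo].[ClassifyResult_T] ([Id], [ClassId], [ArchiveId])
SELECT [Id], [ClassId], [ArchiveId]
FROM [dbo].[ClassifyResult]
WHERE Id >= (SELECT MAX(Id) FROM Temp_MoveManage)  -- the last, never-processed boundary
SET IDENTITY_INSERT [ClassifyResult_T] OFF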
V. Note
VI. Others
Some friends offered a suggestion that I have not tried yet because of environment restrictions (for example, our database is in the simple recovery model). If you have a suitable environment, you are welcome to try it and share the results.