SQL Server practical experience and tips


(1) Pending installation operations

When installing SQL Server or a service pack, setup may report that a pending installation operation requires a restart, and restarting often does not help. Solution: open the registry key

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager

and delete the PendingFileRenameOperations value.

(2) shrinking the database


-- Rebuild / defragment indexes
DBCC DBREINDEX
DBCC INDEXDEFRAG
-- Shrink data and log files
DBCC SHRINKDATABASE
DBCC SHRINKFILE
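
As a rough sketch of how these are typically invoked (dvbbs is the sample database used elsewhere in this article; the table, index, and log file names are made-up placeholders):

-- Rebuild all indexes on one table
DBCC DBREINDEX ('dbo.SomeTable')
-- Defragment one index: database, table, index
DBCC INDEXDEFRAG (dvbbs, 'dbo.SomeTable', 'IX_SomeTable_SomeColumn')
-- Shrink a single file (here the log file) down to about 1 MB
DBCC SHRINKFILE (dvbbs_log, 1)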



(3) compressing the database


DBCC SHRINKDATABASE (dbname)
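
For example, a target percentage of free space to leave in the files can be passed as a second argument (dvbbs is the sample database name used in this article):

-- Shrink dvbbs, leaving about 10 percent free space in each file
DBCC SHRINKDATABASE (dvbbs, 10)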



(4) Transfer a database with its existing user permissions to a new user


-- Update_One links the database user (second argument) to the server login (third argument)
EXEC sp_change_users_login 'Update_One', 'newname', 'oldname'
GO
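
If it is not clear which database users are orphaned after a restore, sp_change_users_login can also report them and try to fix them automatically (the user name below is the same placeholder as above):

-- List database users that are no longer linked to a server login
EXEC sp_change_users_login 'Report'
-- Let SQL Server match the user to an existing login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'newname'
GO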



(5) Check the backup set


RESTORE VERIFYONLY FROM DISK = 'E:\dvbbs.bak'
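
To inspect what the backup file contains before restoring it, the related HEADERONLY and FILELISTONLY options can be used with the same path:

-- List the backup sets stored in the file
RESTORE HEADERONLY FROM DISK = 'E:\dvbbs.bak'
-- List the data and log files contained in the backup
RESTORE FILELISTONLY FROM DISK = 'E:\dvbbs.bak'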



(6) Repairing the database


ALTER DATABASE [dvbbs] SET SINGLE_USER
GO
DBCC CHECKDB ('dvbbs', REPAIR_ALLOW_DATA_LOSS) WITH TABLOCK
GO
ALTER DATABASE [dvbbs] SET MULTI_USER
GO

-- DBCC CHECKDB has three repair options:

-- REPAIR_ALLOW_DATA_LOSS: performs all the repairs done by REPAIR_REBUILD, and additionally allocates and deallocates rows and pages to correct allocation errors and structural row or page errors, and deletes corrupted text objects. These repairs may cause some data loss. The repair can be carried out inside a user transaction so that the changes can be rolled back; if the repairs are rolled back, the database will still contain the errors and should be restored from a backup. If a repair for an error is skipped because of the repair level provided, any repair that depends on it is also skipped. After the repairs are completed, back up the database.

-- REPAIR_FAST: performs minor, non-time-consuming repairs, such as fixing extra keys in non-clustered indexes. These repairs are quick and carry no risk of data loss.

-- REPAIR_REBUILD: performs all the repairs done by REPAIR_FAST, plus repairs that take longer, such as rebuilding indexes. There is no risk of data loss with these repairs.


-- DBCC CHECKDB (dvbbs) WITH NO_INFOMSGS, PHYSICAL_ONLY



Two Methods for clearing SQL Server logs
In day-to-day use the database log files often grow very large. Two ways of dealing with this are described below ......

Method 1

In general, shrinking a SQL Server database does not greatly reduce the size of the data files; its main effect is to shrink the log. This should be done regularly so that the log does not grow too large.

1. Set the database to the simple recovery model: open SQL Server Enterprise Manager, and in the console root open Microsoft SQL Server --> SQL Server Group --> double-click your server --> double-click Databases --> select your database (for example, the forum database) --> right-click it and choose Properties --> Options --> set the failure recovery model to "Simple", then click OK to save.

2. Right-click the database again and choose All Tasks --> Shrink Database. The default settings usually do not need to be changed; just click OK.

3. After the database has been shrunk, we recommend setting the recovery model back to its previous setting (typically Full), using the same steps as in step 1, because the log is often an important basis for recovering the database after a failure. A T-SQL equivalent of these three steps is sketched after this list.
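
The sketch below assumes the database is called dvbbs and the logical name of its log file is dvbbs_log (both names are placeholders, not from the original article):

-- Step 1: switch to the simple recovery model
ALTER DATABASE dvbbs SET RECOVERY SIMPLE
GO
-- Step 2: shrink the log file (here down to about 1 MB)
DBCC SHRINKFILE (dvbbs_log, 1)
GO
-- Step 3: switch back to the full recovery model so the log can be used for recovery again
ALTER DATABASE dvbbs SET RECOVERY FULL
GO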

Method 2


SET NOCOUNT ON
DECLARE @LogicalFileName sysname,
        @MaxMinutes INT,
        @NewSize INT

USE tablename                                -- Name of the database to be operated on
SELECT  @LogicalFileName = 'tablename_log',  -- Logical name of the log file
        @MaxMinutes = 10,                    -- Limit on the time allowed to wrap the log
        @NewSize = 1                         -- Target size of the log file, in MB

-- Setup / initialize
DECLARE @OriginalSize int
SELECT @OriginalSize = size
  FROM sysfiles
  WHERE name = @LogicalFileName
SELECT 'Original Size of ' + db_name() + ' LOG is ' +
       CONVERT(VARCHAR(30), @OriginalSize) + ' 8K pages or ' +
       CONVERT(VARCHAR(30), (@OriginalSize * 8 / 1024)) + 'MB'
  FROM sysfiles
  WHERE name = @LogicalFileName

CREATE TABLE DummyTrans
  (DummyColumn char(8000) not null)

DECLARE @Counter   INT,
        @StartTime DATETIME,
        @TruncLog  VARCHAR(255)
SELECT  @StartTime = GETDATE(),
        @TruncLog = 'BACKUP LOG ' + db_name() + ' WITH TRUNCATE_ONLY'

DBCC SHRINKFILE (@LogicalFileName, @NewSize)
EXEC (@TruncLog)

-- Wrap the log if necessary
WHILE @MaxMinutes > DATEDIFF(mi, @StartTime, GETDATE())   -- time has not expired
  AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)
  AND (@OriginalSize * 8 / 1024) > @NewSize
BEGIN   -- Outer loop
  SELECT @Counter = 0
  WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
  BEGIN   -- Generate dummy transactions to force the log to wrap
    INSERT DummyTrans VALUES ('Fill Log')
    DELETE DummyTrans
    SELECT @Counter = @Counter + 1
  END
  EXEC (@TruncLog)
END
SELECT 'Final Size of ' + db_name() + ' LOG is ' +
       CONVERT(VARCHAR(30), size) + ' 8K pages or ' +
       CONVERT(VARCHAR(30), (size * 8 / 1024)) + 'MB'
  FROM sysfiles
  WHERE name = @LogicalFileName
DROP TABLE DummyTrans
SET NOCOUNT OFF

Several Methods for deleting duplicate data in a database

In day-to-day use, bugs in the application may produce duplicate records in the database, which in turn cause errors in the application ......

Method 1


-- TableName and KeyField below are placeholders for the real table and key column names
DECLARE @max integer, @id integer
DECLARE cur_rows CURSOR LOCAL FOR
    SELECT KeyField, count(*) FROM TableName GROUP BY KeyField HAVING count(*) > 1
OPEN cur_rows
FETCH cur_rows INTO @id, @max
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @max = @max - 1
    SET ROWCOUNT @max          -- delete all but one of the duplicate rows
    DELETE FROM TableName WHERE KeyField = @id
    FETCH cur_rows INTO @id, @max
END
CLOSE cur_rows
DEALLOCATE cur_rows
SET ROWCOUNT 0
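
To confirm afterwards that no duplicates are left, the grouping query that drives the cursor can be re-run on its own (TableName and KeyField are the placeholders used above); it should return no rows:

SELECT KeyField, count(*)
FROM TableName
GROUP BY KeyField
HAVING count(*) > 1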


Method 2

There are two kinds of duplicate records: one is the completely duplicate record, where every field is repeated; the other is the record whose key fields are duplicated (for example, a duplicated Name field) while the remaining fields are not necessarily the same or can be ignored.

1. The first kind of duplication is easier to deal with:


Select distinct * from tableName



You can get the result set without repeated records.

If the duplicate records need to be deleted from the table (keeping one copy of each), you can do it like this:


SELECT DISTINCT * INTO #Tmp FROM tableName
DROP TABLE tableName
SELECT * INTO tableName FROM #Tmp
DROP TABLE #Tmp


This kind of duplication arises because the table was not designed rigorously; it can be prevented by adding a unique index column.
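
A minimal sketch of adding such a column (tableName is the placeholder used above; the column and index names are made up here):

-- Give every row its own identity value, so fully identical rows can no longer occur
ALTER TABLE tableName ADD ID int IDENTITY(1, 1) NOT NULL
-- The unique index column mentioned above
CREATE UNIQUE INDEX IX_tableName_ID ON tableName (ID)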

2. For duplicated key fields, the requirement is usually to keep the first of the duplicated records. The procedure is as follows, assuming the duplicated fields are Name and Address and a result set that is unique on those two fields is wanted:


SELECT IDENTITY(int, 1, 1) AS autoID, * INTO #Tmp FROM tableName
SELECT min(autoID) AS autoID INTO #Tmp2 FROM #Tmp GROUP BY Name, Address
SELECT * FROM #Tmp WHERE autoID IN (SELECT autoID FROM #Tmp2)


The last SELECT returns the result set in which Name and Address are unique (it still carries the extra autoID column, which can be left out of the select list when writing the actual query).
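
If the extra rows should actually be removed from tableName rather than just filtered out of a query, the temporary tables above can be reused to rebuild it, along the same lines as the first case. This is only a sketch: it drops and recreates the table, so constraints, indexes, and the remaining columns (represented by the comment) have to be handled as well.

-- Keep only the first row of every Name/Address combination
DELETE FROM #Tmp WHERE autoID NOT IN (SELECT autoID FROM #Tmp2)
-- Rebuild the original table from the de-duplicated rows, dropping the helper autoID column
DROP TABLE tableName
SELECT Name, Address /* , ...other columns... */ INTO tableName FROM #Tmp
DROP TABLE #Tmp
DROP TABLE #Tmp2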

Two Methods for changing the owner of a table in the database
You may often find that, after restoring a database backup onto another machine, none of the tables can be opened. The reason is that the tables were created under the original database user ......

-- Change a table


EXEC sp_changeobjectowner 'tablename', 'dbo'
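
If the table currently belongs to an owner other than dbo (the usual situation after restoring someone else's backup), the old owner must be included in the object name; olduser and tablename are placeholders here:

-- Transfer olduser.tablename to dbo
EXEC sp_changeobjectowner 'olduser.tablename', 'dbo'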


-- A stored procedure that changes the owner of all tables in a batch


CREATE PROCEDURE dbo.User_ChangeObjectOwnerBatch
  @OldOwner AS NVARCHAR(128),
  @NewOwner AS NVARCHAR(128)
AS

DECLARE @Name      AS NVARCHAR(128)
DECLARE @Owner     AS NVARCHAR(128)
DECLARE @OwnerName AS NVARCHAR(128)

DECLARE curObject CURSOR FOR
  SELECT Name  = name,
         Owner = user_name(uid)
  FROM sysobjects
  WHERE user_name(uid) = @OldOwner
  ORDER BY name

OPEN curObject
FETCH NEXT FROM curObject INTO @Name, @Owner
WHILE (@@FETCH_STATUS = 0)
BEGIN
  IF @Owner = @OldOwner
  BEGIN
    SET @OwnerName = @OldOwner + '.' + rtrim(@Name)
    EXEC sp_changeobjectowner @OwnerName, @NewOwner
  END
  -- SELECT @Name, @NewOwner

  FETCH NEXT FROM curObject INTO @Name, @Owner
END

CLOSE curObject
DEALLOCATE curObject
GO
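
A minimal usage sketch of the procedure above (the owner names are placeholders):

-- Move every object owned by olduser to dbo
EXEC dbo.User_ChangeObjectOwnerBatch 'olduser', 'dbo'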
