SQL Server Database Optimization in Practice (III)


Objective:

This chapter focuses on shrinking log files and compressing tables.


Shrink log files

-- Use the exec sp_spaceused statement to view the database size
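For example, a minimal check (the database name YourDatabase below is a placeholder; substitute your own):

USE YourDatabase   -- placeholder database name
GO
EXEC sp_spaceused  -- first result set: database_size and unallocated space; second: reserved, data, index_size, unused
GO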

-- Right-click the database - Properties

-- go to the Options page

-- set the recovery model to Simple

-- Right-click the database - Tasks - Shrink - Files

-- for File type, select Log

-- view the database size after shrinking

-- Right-click the database - Properties - Options

-- set the recovery model back to Full

-- Note: a full database backup is required at this point, because switching the recovery model to Simple breaks the log backup chain
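The same steps can be scripted in T-SQL instead of clicking through SSMS; a sketch along these lines should be equivalent (YourDatabase, YourDatabase_log, and the backup path are placeholders):

USE master
GO
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE            -- placeholder database name
GO
USE YourDatabase
GO
DBCC SHRINKFILE (N'YourDatabase_log', 100)                 -- placeholder logical log file name; target size about 100 MB
GO
ALTER DATABASE YourDatabase SET RECOVERY FULL
GO
BACKUP DATABASE YourDatabase TO DISK = N'D:\Backup\YourDatabase_full.bak'   -- full backup to restart the log backup chain
GO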


Table compression

-- SQL Server 2005 and later versions support table partitioning

Table partitioning is described in detail at the following URL:

http://blog.csdn.net/yole_grise/article/details/18658949


-- SQL Server 2008 and later versions support table compression

(table compression is not available in Standard Edition)
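To check whether the current instance's edition supports compression, one quick check (the exact edition strings vary by version) is:

SELECT SERVERPROPERTY('Edition')       AS edition,        -- e.g. 'Enterprise Edition (64-bit)'
       SERVERPROPERTY('EngineEdition') AS engine_edition  -- 3 = Enterprise/Developer/Evaluation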

For our mainstream customers, as time passes and the business scale grows, the headquarters database keeps growing, which has many negative effects.

For example, for a chain of 100 stores, after three years the database may reach 100 GB to 200 GB.

The main effects of database growth are as follows:

1. Higher disk storage costs: a 200 GB database plus its backups needs at least 1 TB of disk to be barely enough.

2. Database performance degrades.

3. The potential risk of database failure increases (there is no direct evidence for this, but simple reasoning leads to that conclusion).

4. Backing up the database takes too long, and copying and restoring backup files is cumbersome.

Many of our customers' databases are now above 100 GB. In this context, table compression really shines.


Operation Method:

-- Right-click Table - Storage - Manage Compression


-- table size before compression:

-- check the table size again after compression:

-- the table was compressed from 81344 KB to 21328 KB.

The compressed table is usually about a quarter of the original table's size.
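sp_spaceused also accepts a table name, so the before/after check can be done from T-SQL as well (U_sale_c is the example table mentioned later in this article):

EXEC sp_spaceused N'dbo.U_sale_c'   -- returns rows, reserved, data, index_size, unused for this table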

-- index compression works the same way

-- Right-click the index - Storage - Manage Compression
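The same compression can be applied from T-SQL; a sketch using page compression (U_sale_c is the example table from this article, and IX_U_sale_c_date is a hypothetical index name):

ALTER TABLE dbo.U_sale_c REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)       -- compress the table (heap or clustered index)
GO
ALTER INDEX IX_U_sale_c_date ON dbo.U_sale_c REBUILD WITH (DATA_COMPRESSION = PAGE)   -- hypothetical index name
GO
ALTER INDEX ALL ON dbo.U_sale_c REBUILD WITH (DATA_COMPRESSION = PAGE)                -- or compress every index on the table
GO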

Right now, the main bottleneck for our database applications is disk I/O, that is, the efficiency of reading from and writing to disk.

Table compression reduces the load on the disk, and in actual use its effect is very noticeable.

When a database exceeds 100 GB, compressing the largest tables (and indexes) can reduce it to around 30 GB;

speed improves significantly and management becomes more convenient.

On the other hand, compression also has a downside.

Compression increases CPU overhead, because data is constantly being compressed and decompressed.
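Before compressing a large table, it can be worth estimating the space saving and weighing it against the extra CPU cost; SQL Server provides a stored procedure for this (U_sale_c is the example table from this article):

EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'U_sale_c',
     @index_id         = NULL,     -- NULL = all indexes on the table
     @partition_number = NULL,     -- NULL = all partitions
     @data_compression = N'PAGE'   -- or N'ROW' / N'NONE'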


Here's the problem:

What kind of table needs to be compressed?

-- View table sizes and generate a compression statement for each table:

if object_id('tempdb..#TB_TEMP_SPACE') is not NULL
    DROP TABLE #TB_TEMP_SPACE
GO
CREATE TABLE #TB_TEMP_SPACE (
    NAME       VARCHAR(500),
    ROWS       INT,
    RESERVED   VARCHAR(50),
    DATA       VARCHAR(50),
    INDEX_SIZE VARCHAR(50),
    UNUSED     VARCHAR(50)
)
GO
EXEC sp_msforeachtable 'INSERT INTO #TB_TEMP_SPACE EXEC sp_spaceused ''?'''
GO
SELECT *,
       'ALTER TABLE [dbo].[' + NAME + '] REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)' AS Sql
FROM #TB_TEMP_SPACE
ORDER BY REPLACE(DATA, ' KB', '') + 0 DESC
GO

Answer:

1. Tables that are queried infrequently, and

2. Tables that occupy a large amount of space.

Such tables should be compressed.

A typical example is the U_sale_c table.

In general, any table larger than 1 GB, or any index larger than 1 GB, should be compressed.

Note: clustered indexes do not need to be compressed separately, because a clustered index does not take up significant space of its own (its leaf level is the table data, which is already covered when the table is compressed).
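To find such candidates from T-SQL, one option (a sketch; sizes are in MB, so the threshold 1024 corresponds to roughly 1 GB) is to query the partition statistics DMV:

SELECT OBJECT_NAME(ps.object_id)          AS table_name,
       i.name                             AS index_name,   -- NULL for a heap
       i.type_desc,
       SUM(ps.used_page_count) * 8 / 1024 AS used_mb       -- pages are 8 KB each
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY ps.object_id, i.index_id, i.name, i.type_desc
HAVING SUM(ps.used_page_count) * 8 / 1024 > 1024           -- larger than about 1 GB
ORDER BY used_mb DESC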


The difference between compression and shrinking:

Compression means reducing the size of the data in the database through algorithms and rules.

Shrinking means releasing the database's free space back to the operating system as available disk space.

Example:

A database is 100 GB. We compress its tables and the data shrinks to only 40 GB, but at this point the MDF file is still 100 GB.

Running sp_spaceused will show about 60 GB of unallocated space.

So we have to shrink the database to release this 60 GB of space.

Table compression and database shrinking are therefore two closely connected operations; only by doing both can we achieve the desired effect.
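A sketch of that final shrink step after compression (the database name, logical data file name, and target size are placeholders; pick a target somewhat above the used size so a little free space remains):

USE YourDatabase                              -- placeholder database name
GO
SELECT name, size * 8 / 1024 AS size_mb       -- list logical file names and current sizes
FROM sys.database_files
GO
DBCC SHRINKFILE (N'YourDatabase_Data', 45000) -- placeholder logical data file name; target about 45 GB
GO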

-----------------------------------------------------------------------------

End of chapter





