SQL Server Performance Tuning: Separating the Reporting Database from the Business Database


Some time ago, we split the company's main database into a business database and a reporting database, with the business database replicated to the reporting database in near real time. The project was undertaken to improve the performance and maintainability of the system; this article reviews the design considerations and the work involved, as a record for later reference.

Project Origins

In day-to-day development, functionality always takes precedence over performance. It is only when users complain about the system being slow that we begin to address performance issues, usually in a piecemeal manner.

The company's CRM and ERP system, called Olite, was developed entirely in-house by our team. It grew from scratch: originally there was only a CRM module, and later the ERP module, accounting functions, and reporting functions were added. What has been happening recently is that when some users run a large report, users performing business operations find the system responding very slowly. Monitoring showed that a large number of locks are generated in the database at these times, along with spikes in CPU and memory consumption on the server.
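
As an illustration (not the exact monitoring we ran), a query like the following against SQL Server 2005's dynamic management views shows which sessions are blocked and by whom:

    -- Sessions currently blocked, with the blocking session and the wait.
    SELECT
        r.session_id,
        r.blocking_session_id,
        r.wait_type,
        r.wait_time AS wait_time_ms,
        t.text      AS running_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;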

System Structure

The performance problem stems from the overall structure and development history of the system. The application tier of Olite is a Web Forms program built on the .NET platform, and the database is SQL Server 2005. Its main structure is as follows:

The application tier consists of two sites, OliteBase and OliteReport, but both connect to the same database.

The application tier of Olite is very thin; most of the business logic is packaged in stored procedures and runs on the database side.

This structure performed well at first, and the reports provided to users reflected real-time business data. But as the number of business modules grew, especially the reports, the database became a bottleneck: the stored procedures join many tables, perform a large amount of computation, and output large result sets.

First, we optimized stored procedures by creating a trace to capture poorly performing stored procedures and tuning them. We did this for a while, but the gains were modest: while we were optimizing the existing stored procedures, new features went live and new stored procedures entered the list of those needing optimization.
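
As a hedged example (the DMVs below exist on SQL Server 2005, though our project used a Profiler trace), expensive statements can also be shortlisted without a trace:

    -- Top 10 cached statements by total CPU time: a quick way to build
    -- the list of stored procedures that need tuning.
    SELECT TOP 10
        qs.total_worker_time / 1000 AS total_cpu_ms,
        qs.execution_count,
        qs.total_elapsed_time / 1000 AS total_elapsed_ms,
        SUBSTRING(qt.text, qs.statement_start_offset / 2 + 1,
            (CASE WHEN qs.statement_end_offset = -1
                  THEN DATALENGTH(qt.text)
                  ELSE qs.statement_end_offset END
             - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    ORDER BY qs.total_worker_time DESC;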

Second, we modified the database design, including table structure changes and index optimization. Locally restructuring tables and their relationships brought noticeable performance improvements, but such changes require a large number of modifications on the application side, with a heavy workload and high risk, so they could not be rolled out on a large scale. For index optimization, the requirements conflict: the business module (OliteBase) wants few indexes so that inserts, updates, and deletes stay efficient, while the report module (OliteReport) wants more indexes to support efficient reads.

Finally, we tried to provide a day-old reporting service to divert pressure from the main database: every day, the previous day's backup was restored on another server, and an OliteReport2 site on that server provided the reports. But users did not like OliteReport2, for three reasons: first, users sometimes genuinely need real-time reports; second, OliteReport can store a user's report criteria, while OliteReport2 is rebuilt every day, so those saved criteria are lost; third, users are simply more accustomed to opening the original report link.

Project Requirements

None of the above optimizations fundamentally solved the system's performance problems. Against this backdrop, we arrived at the idea of separating the reporting database from the business database.

Requirements for this project:

1. Improve overall system performance as experienced by users; the report module must not affect the operation of the business module.

2. Users can use the report module as before, i.e. without adding a new report site.

3. Users can save report criteria for reuse, just as before.

4. Provide reports with the smallest possible data delay.

Requirement 1 is the main goal of the project; requirements 2, 3, and 4 ensure, as far as possible, that the changes introduced by the project are transparent to users.

Solution Selection

In the original system structure, the application tier already consists of two independent sites, OliteBase and OliteReport. So as long as the OliteBase and OliteReport databases are separated and data is synchronized between them, we are done. The key question is how to synchronize between the databases.

Microsoft offers a wide range of data synchronization options: 1. Clustering; 2. Log Shipping; 3. Replication; 4. Mirroring; 5. Integration Services.

Most of the solutions Microsoft offers target high availability, whereas our project targets high performance. Which option should we choose to meet our needs, and what changes would it require?

1. Clustering

This was the first option we rejected. Configuring a SQL Server cluster imposes a number of hardware restrictions, and the configuration is complex compared with the other options. Our project had only two servers in total: the original primary database server and one virtual machine.

2. Log Shipping

Log shipping ships the primary database's transaction logs to a secondary database and replays them there, keeping the two databases consistent; the secondary database is read-only. Log shipping is also simple to configure and was our initial candidate, but further experiments uncovered two problems. First, the minimum interval log shipping supports is one minute. Second, while logs are being replayed on the secondary database, all connections to that database must be disconnected. The second problem was intolerable, so the option was eliminated.

3. Replication

The principle of replication is somewhat similar to log shipping, but it offers much more flexibility. Replication can be restricted to selected tables, functions, or stored procedures in the primary database, and even to records matching certain criteria. In addition, the replicated database is writable, the replication frequency can be configured as continuous (in our tests the latency was on the order of seconds), and the configuration is relatively simple. After some experimentation, we finally chose it. Its principles and configuration are discussed further below.

4. Mirroring

Database mirroring is a powerful high-availability feature introduced in SQL Server 2005. However, the mirror database cannot be read directly, which does not fit our scenario, so it was dropped from consideration.

5. Integration Services

Integration Services provides maximum flexibility for extracting, transforming, and loading data into a data warehouse. However, using Integration Services would have required a great deal of development and testing work, so we did not choose it.

Replication Solution Breakdown

Replication comes in several flavors: snapshot replication, transactional replication, peer-to-peer replication, and merge replication.

Snapshot replication: typically used for a one-time, full copy of a database.

Transactional replication: used for one-way replication from the primary database to a subscriber database.

Peer-to-peer replication: replicates between two or more databases, each of which can act as a primary. It is powerful, but it can produce conflicts, and special care is needed to keep the data in every database consistent.

Merge replication: merges data from multiple databases and copies it to the target database.

Given our needs, we chose the simplest of these: transactional replication.

Transactional Replication Principle

There are three roles in transactional replication: the Publisher, the Distributor, and the Subscriber. Their logical relationship is shown in the diagram below:

Before incremental transactional replication can begin, the Subscriber must be initialized so that it has the same table structure and initial data as the Publisher.

After transactional replication starts, the Log Reader Agent on the Distributor reads the Publisher's transaction log and picks out the INSERT, UPDATE, and DELETE statements that are marked for replication. These transactions are copied to the Distributor and written to the distribution database. Finally, the Distribution Agent delivers the transactions from the Distributor to the Subscriber for replay.

Note: in the figure, the Distribution Agent runs on the Distributor; this is the push mode. Replication can also be configured in pull mode, in which case the Distribution Agent runs on the Subscriber.

More on the principles of transactional replication can be found at:

http://msdn.microsoft.com/en-us/library/ms151706(SQL.90).aspx

Configuration and Considerations in the Project

The System Structure section above showed the original structure. Through this project, we wanted to arrive at the structure shown below:

OliteReport connects to a separate database replicated from the primary database, so the locks in the two databases are isolated from each other. The primary and secondary databases also sit on two different servers (in the project we put the replicated database on a virtual machine), so CPU and memory consumption is isolated as well. Note that in the figure, OliteReport performs a small amount of writing in addition to its mostly read workload (because users can save report criteria). We write those changes back to the primary database, and they arrive in the secondary database on the next replication cycle.

Transactional replication has three logical roles, but the project has only two servers. How do we assign the three roles?

    • There were two candidate layouts: 1. configure the Publisher and the Distributor on the primary server, and the Subscriber on the secondary server; 2. configure only the Publisher on the primary server, and the Distributor and Subscriber on the secondary server. The crux is where to put the Distributor. The Distributor is central to replication: it fetches transactions from the primary database, and in push mode it also pushes transactions to the Subscribers. This work consumes CPU and memory on the server it runs on. Since the project aims to protect the performance of the business module as much as possible, we chose layout 2 and put the Distributor on the secondary server.

Do we choose push mode or pull mode?

    • Push versus pull is really only about how the transactions travel from the Distributor to the Subscriber (I realized this later; at first I thought the choice would also affect how the Distributor fetches information from the Publisher). With the Distributor and the Subscriber on the same server, the two modes behave essentially the same. We chose pull mode, which means the Distribution Agent runs on the Subscriber side and pulls transaction data from the Distributor. This is for future expansion: if another server is later added as a Subscriber, the Distributor will not take on much extra load.

Another question to consider: what should be replicated?

    • Transactional replication lets you choose which tables, stored procedures, or functions to replicate. The simplest approach is to replicate every object in the database, but that leaves many objects to be monitored by the replication agents and transmits a large amount of data over the network. In the project we finally decided to replicate all of the tables and nothing else, purely for performance reasons. This has consequences for future releases, discussed below. A script sketch of adding every table follows.
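
A sketch of what replicating all of the tables looks like in script form (the publication name is a placeholder): generate one sp_addarticle call per user table and review the output before running it:

    -- Generate an sp_addarticle call for every user table in the
    -- published database ('OlitePub' is an illustrative publication name).
    SELECT 'EXEC sp_addarticle @publication = N''OlitePub'''
         + ', @article = N''' + t.name + ''''
         + ', @source_owner = N''' + s.name + ''''
         + ', @source_object = N''' + t.name + ''';'
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON t.schema_id = s.schema_id
    WHERE t.is_ms_shipped = 0;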

We also had to consider how to initialize the secondary database.

    • Before transactional replication begins, the secondary database must be initialized so that it has the same table structure and initial data as the primary database. Transactional replication offers a built-in initialization option (performed by the Snapshot Agent), but in our experiments it took several hours. So instead of the default initialization we initialized the secondary database from a database backup (see the sketch below); doing so requires changing a few configuration options, which are covered later.
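
A minimal sketch of the backup-based initialization (server names, share, and paths are placeholders; in practice the backup and the start of replication must be coordinated so that no transactions are missed):

    -- On the primary (publisher) server: back up the publication database.
    BACKUP DATABASE Olite
        TO DISK = N'\\ReportServer\InitShare\Olite.bak'
        WITH INIT;

    -- On the secondary (subscriber) server: restore it before creating
    -- the subscription.
    RESTORE DATABASE Olite
        FROM DISK = N'\\ReportServer\InitShare\Olite.bak'
        WITH RECOVERY;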

What are some prerequisites for transactional replication?

    • The compatibility level of the database must be SQL Server 2005 (90). (We were using SQL Server 2005, and with the compatibility level at 80 an exception occurred during configuration.)
    • The recovery model of the database must be Full.
    • Every table to be replicated must have a primary key. (This should go without saying, but the configuration turned up some embarrassing exceptions.)
    • A replicated table must not be truncated in stored procedures or other scripts; the corresponding TRUNCATE statements should be changed to DELETE (see the sketch after this list). This is because replication works by capturing and parsing the transaction log, and TRUNCATE does not produce the row-level log records that replication needs.
    • If the replicated objects also include stored procedures or functions, there are further prerequisites that we will not expand on here; see MSDN.
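
A short sketch of the prerequisite changes, assuming a database named Olite and an illustrative staging table (both placeholders):

    -- Raise the compatibility level to SQL Server 2005 (90).
    EXEC sp_dbcmptlevel N'Olite', 90;

    -- Switch the recovery model to full so the log can be read.
    ALTER DATABASE Olite SET RECOVERY FULL;

    -- TRUNCATE TABLE is not captured by the Log Reader Agent, so on a
    -- replicated table replace it with a fully logged DELETE:
    -- TRUNCATE TABLE dbo.ReportStaging;  -- not allowed once replicated
    DELETE FROM dbo.ReportStaging;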

How do we configure transactional replication?

    • Microsoft offers a very easy-to-use graphical interface for configuring replication. However, the graphical configuration has limited flexibility, and some options cannot be set from it at all. My suggestion is to configure replication through the graphical interface first and generate the corresponding script, then modify the script as needed and apply the configuration with the script. That is what we did in the project.
    • By default, the folder D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\repldata on the Distributor server holds the replication data. Because we wanted to support pull mode, we had to share this folder and set up a domain account with Full Control permission on it. That domain account is used as the run-as account of the SQL Server Agent service on the Subscriber server, and the service must be set to start automatically. The @working_directory parameter of sp_adddistpublisher is set to the network path of this shared directory.
    • When configuring the Publisher, the sp_addpublication parameters @sync_method = N'concurrent' and @repl_freq = N'continuous' keep replication as close to real time as possible; @allow_initialize_from_backup = N'true' allows the secondary database to be initialized from a backup.
    • When configuring the Subscriber, the sp_addsubscription parameter @sync_type = N'replication support only' indicates that initialization of the secondary database is handled entirely externally; @subscription_type = N'pull' selects pull mode. A condensed script sketch follows this list.
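
Pulling the options above together, a condensed sketch of the scripted configuration (server, database, share, and publication names are placeholders, and many optional parameters are omitted):

    -- On the distributor: register the publisher and point the working
    -- directory at the shared snapshot folder.
    EXEC sp_adddistpublisher
        @publisher = N'PrimaryServer',
        @distribution_db = N'distribution',
        @working_directory = N'\\ReportServer\repldata';

    -- On the publisher database: create the publication.
    EXEC sp_addpublication
        @publication = N'OlitePub',
        @sync_method = N'concurrent',
        @repl_freq = N'continuous',
        @allow_initialize_from_backup = N'true';

    -- On the publisher: register the pull subscription. (For pull mode, a
    -- matching sp_addpullsubscription is also run on the subscriber;
    -- omitted here.)
    EXEC sp_addsubscription
        @publication = N'OlitePub',
        @subscriber = N'ReportServer',
        @destination_db = N'Olite',
        @subscription_type = N'pull',
        @sync_type = N'replication support only';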

Post-Launch Maintenance

How do I monitor the performance and exceptions of replication?

    • Microsoft provides the Replication Monitor. This tool is quite useful: it shows the status of each publication and subscription, as well as how many transactions are currently waiting to be transmitted.
    • Once transactional replication is set up, the Distributor automatically generates several related alerts, such as "Replication Warning: Subscription expiration (Threshold: expiration)" and "Replication Warning: Transactional replication latency (Threshold: latency)". You can bind these alerts to Database Mail so that a message is sent automatically when a warning fires. (This feature was configured in the project but never sent a warning message; I don't know why, so if anyone knows, please contact me.) Latency and backlog can also be checked from script, as sketched below.
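
For example (publication and server names are placeholders), a tracer token measures end-to-end latency on SQL Server 2005, and the distribution database can report the delivery backlog:

    -- On the publication database: post a tracer token; its travel time
    -- through the Log Reader and Distribution Agents measures the latency.
    EXEC sys.sp_posttracertoken @publication = N'OlitePub';

    -- On the distribution database: count the commands still waiting to
    -- be delivered to the subscriber.
    EXEC sp_replmonitorsubscriptionpendingcmds
        @publisher = N'PrimaryServer',
        @publisher_db = N'Olite',
        @publication = N'OlitePub',
        @subscriber = N'ReportServer',
        @subscriber_db = N'Olite',
        @subscription_type = 1;  -- 1 = pull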

How do I proceed with the release?

    • A release against the original database generally had three parts: 1. table structure changes (adding/dropping tables, adding/dropping columns); 2. loading configuration data (such as the configuration data for new features); 3. refreshing function and stored procedure scripts.
    • For the replicated database in this project, note the following during a release: 1. if a newly added table needs to be replicated, then besides creating the table in the primary database, you must configure it for replication and initialize it; 2. to drop a replicated table, first remove the table from replication, then drop it in both the primary and the secondary database (see the sketch below); 3. columns of a replicated table can be added or dropped (as long as the column is not part of the primary key) by running the script directly against the primary database, and the change is automatically replicated to the secondary database; 4. configuration data only needs to be loaded into the primary database; 5. functions and stored procedures must be refreshed in both the primary and the secondary database.
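
As an illustration of point 2 (publication, article, and table names are placeholders), a replicated table is removed from replication before it is dropped:

    -- 1. Remove the table's article from the subscription and publication.
    EXEC sp_dropsubscription
        @publication = N'OlitePub',
        @article = N'ObsoleteTable',
        @subscriber = N'all';
    EXEC sp_droparticle
        @publication = N'OlitePub',
        @article = N'ObsoleteTable';

    -- 2. Only then drop the table, on both the primary and the secondary.
    DROP TABLE dbo.ObsoleteTable;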

Summary and Future Ideas

The project has gone live and essentially meets the stated requirements, but this is only the beginning: the optimized structure provides a basis for future expansion of the system.

    • Experiments showed that different indexes can be created on the primary and secondary databases without interfering with each other (this depends on the replication configuration). This makes it possible to create indexes optimized for the different usage patterns of the two databases. I read on a blog about using SQL Server 2005's dynamic management views to create indexes automatically, based on how the database is actually used, like an adaptive indexing machine; that is what I intend to do on the OliteReport database (a sketch of the underlying DMV query follows this list).
    • In the future, with more than one Subscriber database, the OliteReport databases could be load balanced: when a report request arrives, the system checks the CPU and memory load of each Subscriber and routes the request to the least loaded one.
    • We could also use the replicated database for BI (business intelligence) analysis and mining without affecting the operation of the primary database.
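
A sketch of the kind of missing-index query involved (the DMVs below exist as of SQL Server 2005; the ranking formula is a common heuristic, not an official one):

    -- Index suggestions ranked by a rough estimate of their benefit.
    SELECT TOP 10
        d.statement AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
        ON d.index_handle = g.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
        ON g.index_group_handle = s.group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;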
