A deep dive into the performance of the new service tiers in Windows Azure SQL Database


On April 24, we released previews of the new Basic (preview) and Standard (preview) service tiers for SQL Database, together with new business continuity features. In this blog post, we take a closer look at the performance of the new tiers.

Let's begin with why change was needed. The new service tiers focus on performance, and specifically on predictable performance, driven primarily by strong customer feedback about the performance of the SQL Database Web and Business editions. Web and Business edition performance has been unpredictable and variable, causing problems for customers running business-critical applications. We understand that predictability matters from a customer's perspective: customers tell us that if a platform's performance is not reliable over the long term, it is difficult to build a system that can run reliably on that platform.

In response to this feedback, we introduced Basic (preview) and Standard (preview) alongside the existing SQL Database Premium (preview). Premium (preview), launched in July 2013, focuses on delivering predictable performance, and customer feedback on it has been very positive. Basic (preview) and Standard (preview) bring that same benefit to customers at new price points. Performance now becomes a lever customers can control, which is in line with where the cloud computing industry is heading. This lever is the main driver behind moving customers from the Web and Business editions' "pay per storage" model to the new tiers' "pay for performance + features" model.

As you test your applications at the new service tiers, remember that we are currently in the preview phase, which means we are still fine-tuning the performance experience. We released the previews in part to listen carefully to your feedback, and we will further refine the new tiers based on it. When you run performance tests, consider planning to rerun them periodically so that you maintain an up-to-date understanding of the performance levels available in these new tiers. We plan to publish more performance-related blog posts as these previews progress.

In fact, based on customer feedback, during the week of Monday, May 19 we increased the throughput of the new Basic (preview) and Standard (preview) tiers:

Basic (preview): 1 DTU → 5 DTUs

S1: 5 DTUs → 15 DTUs

S2: 25 DTUs → 50 DTUs

What do we mean by performance?

Database performance is a complex topic, often described with a large number of metrics. From an application perspective, what users really care about comes down to two things:

Response time: the time a particular query takes to execute, from start to finish.

Throughput: the overall amount of work the system can complete per unit of time. Typically, a unit of work only counts toward throughput if it completes within the specified response time.
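To make the throughput definition above concrete, here is a minimal sketch (not from the original post; the sample latencies and the 100 ms target are invented for illustration) of counting only the executions that finish within the response-time target:

```python
# Hypothetical per-query response times collected over a 1-second window (ms).
response_times_ms = [40, 55, 210, 48, 500, 62, 45, 90, 35, 70]
sla_ms = 100          # assumed response-time target
window_seconds = 1    # length of the measurement window

# Effective throughput counts only executions that met the target.
within_sla = [t for t in response_times_ms if t <= sla_ms]
effective_throughput = len(within_sla) / window_seconds

print(effective_throughput)  # 8.0 executions/sec met the target; 2 were too slow
```

The two slow executions (210 ms and 500 ms) completed, but under this definition they do not count toward useful throughput.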

Many factors affect response time and throughput, but they fall into two main categories: application design and hardware resources. The following are examples of common questions when troubleshooting the performance of a database application:

Application design:

Has the right set of indexes been created to give queries the best possible execution plans?

Is the correct transaction isolation level being used? Is the system waiting on locks unnecessarily?

Are multi-statement batches or stored procedures being used to minimize round trips?

Hardware Resources:

Are there sufficient resources available to execute the workload?

How many resources need to be allocated to the different types of workloads that occur over time?

On the performance front, the new SQL Database service tiers help address the hardware resource category by allowing customers to move up to a performance level that provides more hardware resources. If you use a scale-out design pattern (partitioning), you can also assign different performance levels to different partitions based on their workloads, for example assigning a higher performance level to a partition that contains highly active customers. We are actively working on detailed partitioning guidance to help with this, so stay tuned!

In addition, one of the best practices in cloud design is to continuously optimize your application design. That investment reduces the need for higher performance levels and lowers the cost of running your database workload.

What are the hardware specifications?

The resources required to run a database workload break down into various types of hardware resources, including CPU, disk IO operations (reads and writes), and memory. Each of these resources varies significantly in practice. For example, different types of cores have different clock speeds and cache sizes, and IOs come in different sizes depending on what the database system is doing. Predicting how much of each resource your application actually needs, let alone its long-term utilization, can be difficult and time-consuming. In an on-premises environment you control the hardware the database runs on, so it makes sense for you to do this analysis yourself. One of the great attractions of moving to the cloud, and to SQL Database in particular, is that you can focus on building great applications without having to manage hardware or patch and maintain database software. Customers tell us they don't want that hassle; they just want a throughput level that keeps up with their needs. That is the vision behind our focus on performance.

In the new service tiers, we adopted a new design principle to help ensure predictable performance. Each performance level (Basic (preview), S1, S2, P1, P2, P3) is assigned a distinct set of resources: CPU, memory, IO, and so on. The maximum amount of each resource a database can use at a given level is fixed. The rationale is that a database should perform roughly as it would when running on a dedicated machine with the same set of resources as its performance level. This design principle provides the basis for predictable performance.

To capture the resource differences between performance levels, we introduced the concept of the database throughput unit (DTU). A DTU is a blended measure of the resources assigned to each performance level. For example, P1 has 100 DTUs and S1 has 15 DTUs, which is to say that S1 gets roughly 1/7 of the resources of P1. Based on the design principle above, you can expect a database running at S1 to perform much as it would on a dedicated machine with roughly 1/7 of the CPU speed, memory, and IO capability of a P1-level machine.
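As a quick worked example of the ratio above (using only the DTU figures quoted in this post):

```python
# DTU counts per performance level, as stated in this post.
dtus = {"Basic": 5, "S1": 15, "S2": 50, "P1": 100}

# S1 relative to P1: 15 / 100 = 0.15, i.e. roughly 1/7 of P1's resources.
ratio = dtus["S1"] / dtus["P1"]
print(f"S1 provides {ratio:.2f}x the resources of P1")
```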

From a usability and troubleshooting point of view, we report your database's workload as a percentage of each resource available at its current performance level. This removes the hassle of reasoning about different IO types or disk queue lengths and makes it easy to understand resource usage relative to the current performance level.

We expose telemetry for CPU, read, and write usage in the sys.resource_stats view in the master database, and will add memory information later in the preview. The view contains a five-minute average for each resource, which is useful for gauging your long-term resource needs. The same telemetry is also available in the portal. For more details on querying sys.resource_stats and using it to predict the DTUs you need, see the MSDN article describing the new service tiers.
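Once you have fetched the five-minute samples from sys.resource_stats (for example over ODBC), one way to summarize them is to look at the busiest resource in each interval. The sketch below is illustrative only: the three percentage columns and the sample values are assumptions, not taken from this post, so check the MSDN article for the actual schema in your preview version.

```python
# Hypothetical five-minute samples pulled from sys.resource_stats:
# (cpu_percent, data_io_percent, log_write_percent) per interval.
rows = [
    (20.0, 35.0, 10.0),
    (55.0, 40.0, 15.0),
    (90.0, 60.0, 20.0),
]

# In each interval the database is bounded by whichever resource is busiest.
per_interval_peak = [max(r) for r in rows]
overall_peak = max(per_interval_peak)

print(per_interval_peak)  # [35.0, 55.0, 90.0]
print(overall_peak)       # 90.0 -> nearing the limit of the current level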

We know that rich telemetry is important for troubleshooting performance, so expect additional enhancements here during the preview.

Database Workload Lifecycle

"Using a database" is not a single operation; it consists of many different kinds of operations. That sounds obvious, but it matters when you work out which performance level is appropriate for your database, because the answer may not be as simple as "Standard (preview) S2 @ 200 USD/month" (using general availability pricing). One of the big advantages the cloud, and SQL Database in particular, brings is the ability to scale flexibly. For example: use Basic (preview) Monday through Thursday and on weekends to serve a modest query load, use S2 on Fridays to handle a larger batch of work that runs within a specific time window, and use P1 one day each month to run a large ETL operation quickly:

(Basic (preview) price / 30 × 25) = $4.16

(S2 price / 30 × 4) = $26.70

(P1 price / 30 × 1) = $31.00

Total: $61.86/month
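The blended-cost arithmetic above can be sketched as follows. Note that only the S2 price of $200/month appears in this post; the Basic and P1 monthly prices below are assumptions for illustration, and a 30-day month is assumed.

```python
# Assumed monthly prices in USD (only S2's $200/month comes from the post).
monthly_price = {"Basic": 4.99, "S2": 200.00, "P1": 930.00}

# Days spent at each performance level in the example's 30-day month.
days_at_level = {"Basic": 25, "S2": 4, "P1": 1}

# Pro-rate each level by the fraction of the month it was used.
cost = sum(monthly_price[level] / 30 * days
           for level, days in days_at_level.items())
print(f"{cost:.2f}")  # close to the post's ~$61.86 total
```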

As in the example above, the lifecycle of many database workloads includes different sub-workloads with different requirements, such as:

Initial load of the database (for example, when migrating to SQL Database)

Normal use (for example, user activity from a single department using the application on a normal working day)

Peak use (for example, processing reports for all employees across the company once a month)

Large extract, transform, and load (ETL, or import/export) operations

Physical database maintenance operations (such as building indexes)

If you match each sub-workload to the performance level it actually requires, instead of running all month at the level your peak demands, you can greatly reduce costs.

Your application code can change the performance level when needed simply by calling the Update Database API and specifying the new service objective. The change itself is an asynchronous operation whose status you can monitor with the Get Database API. Your database remains online throughout the change.
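As a rough sketch of what such a call might look like: the endpoint shape and the `ServiceObjectiveId` property name below are assumptions based on the management API of that era, not details given in this post, so treat them as placeholders and consult the API documentation.

```python
import json

def build_update_request(subscription_id, server_name, database_name,
                         service_objective_id):
    """Build the (assumed) URL and JSON body for an Update Database call."""
    url = (f"https://management.core.windows.net/{subscription_id}"
           f"/services/sqlservers/servers/{server_name}"
           f"/databases/{database_name}")
    # Assumed property name for the requested performance level.
    body = json.dumps({"ServiceObjectiveId": service_objective_id})
    return url, body

url, body = build_update_request("my-subscription", "myserver", "mydb",
                                 "s2-objective-id")
# The change is asynchronous; poll the Get Database API until it completes.
```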

Web and Business editions versus Basic (preview), Standard (preview), and Premium (preview)

One question we are often asked is how performance in the new service tiers compares to the Web and Business editions. This question is difficult to answer directly, because the new tiers are fundamentally different from the Web and Business editions. In the Web and Business editions, the service model is based solely on the amount of storage used and does not account for any other hardware resource. That is a fundamental flaw: the system is optimized for storage availability and ignores the other resources needed to execute database workloads. The performance and resource availability you see in the Web and Business editions depend on other factors, such as the workload of other customers on the same machine and the system's safeguards against overuse. Whether you can take advantage of those resources comes down to luck, and without the new service tiers this problem would only become more pronounced over time.

As noted at the start of this post, customers have told us clearly that they don't want an unpredictable, uncertain performance experience. The new Basic (preview), Standard (preview), and Premium (preview) tiers focus on delivering predictable performance at different price points. That is, the system is optimized not just for providing storage but also for providing the other hardware resources needed to execute database workloads.

Because the design principles behind these service tiers are fundamentally different, comparing them to the Web and Business editions is like comparing apples and oranges.

To help you understand Web and Business edition resource usage, we report the percentage of resources used in sys.resource_stats (mentioned above). The reference point used to compute these percentages for Web and Business databases is set to half of the Standard (preview) S2 level, since that is roughly the price level of typical Web and Business databases today. Note that this is only a reference point for the telemetry; it does not represent the amount of resources actually available in the Web and Business editions, which, as discussed above, is uncertain. To know precisely what performance level your database needs, you should upgrade to one of the new service tiers, run your workload, and examine sys.resource_stats. The current restriction that Basic (preview) and Standard (preview) are available only on new logical servers will be lifted later in the preview, at which point you will no longer have to import your Web or Business edition database in order to test the new service tiers.
