Transaction strategies: The high concurrency strategy - Learn how to implement a transaction strategy for applications that must support high user concurrency

Introduction: In this installment of the Transaction strategies series, Mark Richards discusses how to implement a transaction strategy for applications with high-throughput and high user concurrency requirements on the Java™ platform. Understanding the trade-offs involved will help you maintain a high level of data integrity and consistency, and reduce refactoring work later in development.

The API Layer and Client Orchestration transaction strategies that I described in the previous articles in this series are the core strategies that apply to most standard business applications. They are simple, reliable, relatively easy to implement, and provide the highest degree of data integrity and consistency. Sometimes, however, you need to reduce the scope of a transaction to gain throughput, improve performance, and increase database concurrency. How can you achieve these goals and still maintain a high level of data integrity and consistency? The answer is to use the high concurrency transaction strategy.

The high concurrency strategy is derived from the API Layer strategy. Although the API Layer strategy is robust and reliable, it has some drawbacks. Always starting a transaction at the top of the call stack (the API layer) can be inefficient, particularly for applications with high user throughput and high database concurrency requirements. Depending on the business requirements, long-running transactions can hold locks too long and consume too many resources.

Like the API Layer strategy, the high concurrency strategy relieves the client layer of all transaction responsibility. This also means, however, that you can invoke any particular logical unit of work (LUW) only once from the client layer. The high concurrency strategy is designed to reduce the overall scope of a transaction so that resources are locked for a shorter period of time, increasing application throughput, concurrency, and performance.
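The idea can be sketched in miniature. In the following hypothetical example (all class and method names are mine, not from the article), a simple stopwatch "transaction" stands in for the time a real database transaction would hold its locks; the contrast is between a transaction that spans the entire logical unit of work and one that wraps only the final update:

```java
/**
 * Hypothetical sketch: a stopwatch "transaction" stands in for the time
 * a real database transaction would hold its locks.
 */
public class TransactionScopeDemo {

    static final class Tx {
        final long start = System.nanoTime();
        long end() { return System.nanoTime() - start; }
    }

    /** Stands in for reads plus business processing (no writes). */
    static void businessProcessing() { sleep(50); }

    /** Stands in for the actual database update. */
    static void update() { sleep(5); }

    /** API Layer style: the entire logical unit of work runs inside one transaction. */
    static long apiLayerStyle() {
        Tx tx = new Tx();          // transaction (and locking) begins here
        businessProcessing();      // locks are held during all processing
        update();
        return tx.end();           // "commit": locks released
    }

    /** High concurrency style: only the update itself runs inside the transaction. */
    static long highConcurrencyStyle() {
        businessProcessing();      // outside the transaction: no locks held yet
        Tx tx = new Tx();          // transaction starts as late as possible
        update();
        return tx.end();           // locks were held only for the write
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        System.out.printf("API Layer tx open for:        %d ms%n", apiLayerStyle() / 1_000_000);
        System.out.printf("High concurrency tx open for: %d ms%n", highConcurrencyStyle() / 1_000_000);
    }
}
```

The trade-off, of course, is that work done outside the transaction is not protected by it, which is why this strategy requires careful analysis of each logical unit of work.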

The benefit you gain from this strategy depends in part on the database you use and how it is configured. Some databases (such as Oracle, and MySQL with the InnoDB engine) do not retain read locks, whereas others (such as SQL Server without the snapshot isolation level) do. The more locks that are held, whether shared or exclusive, the greater the impact on the concurrency, performance, and throughput of the database (and the application).
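On the Java platform, the isolation level a connection requests is one of the knobs that influences which locks the engine takes and retains. A minimal sketch using only the standard java.sql.Connection constants (the comments about engine behavior restate the point above; the constants themselves enforce nothing):

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // JDBC's standard isolation levels. How a given level maps to locking
        // is engine-specific: Oracle and InnoDB serve reads from multiversioned
        // snapshots and retain no read locks, while SQL Server (without
        // snapshot isolation) takes shared locks when reading.
        System.out.println("READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);

        // On a live connection (hypothetical variable 'conn') you would
        // request a level with:
        //   conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}
```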

However, obtaining and retaining locks in the database is only part of the high concurrency story. Concurrency and throughput also depend on when you release those locks. Regardless of the database you use, an unnecessarily long-running transaction holds shared and exclusive locks longer than needed. Under high concurrency, this can cause the database to escalate from row-level locks to page-level locks and, in some extreme cases, from page-level locks to table-level locks. In most cases, you cannot control the heuristics the database engine uses to decide when to escalate. Some databases, such as SQL Server, let you disable page-level locks, in the hope that the engine will not jump from row-level locks to table-level locks. Sometimes this gamble pays off, but in most cases you will not achieve the concurrency improvement you expect.

The bottom line is that the longer locks (shared or exclusive) are held in a scenario with high database concurrency, the more likely the following problems become:

Database connections become exhausted, causing the application to wait

Deadlocks caused by shared and exclusive locks, resulting in poor performance and failed transactions

Lock escalation from page-level locks to table-level locks
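The deadlock case is easy to reproduce in miniature. In this hypothetical sketch, two ReentrantLocks stand in for two rows, two "transactions" acquire them in opposite order, and tryLock with a timeout plays the role of the database's deadlock detector choosing a victim:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo {
    // Two locks standing in for two rows in a table.
    static final ReentrantLock rowA = new ReentrantLock();
    static final ReentrantLock rowB = new ReentrantLock();

    // Ensures both "transactions" hold their first lock before trying the
    // second, so the opposite-order acquisition is guaranteed to collide.
    static final CountDownLatch bothStarted = new CountDownLatch(2);

    /** Acquires 'first', then tries 'second'; gives up ("rolls back") on timeout. */
    static boolean runTx(ReentrantLock first, ReentrantLock second) throws InterruptedException {
        first.lock();
        try {
            bothStarted.countDown();
            bothStarted.await();
            if (second.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    return true;   // both locks held: the transaction "commits"
                } finally {
                    second.unlock();
                }
            }
            return false;          // deadlock victim: the transaction "rolls back"
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] committed = new boolean[2];
        Thread tx1 = new Thread(() -> { try { committed[0] = runTx(rowA, rowB); } catch (InterruptedException ignored) { } });
        Thread tx2 = new Thread(() -> { try { committed[1] = runTx(rowB, rowA); } catch (InterruptedException ignored) { } });
        tx1.start(); tx2.start();
        tx1.join();  tx2.join();
        System.out.println("tx1 committed: " + committed[0]);
        System.out.println("tx2 committed: " + committed[1]);
    }
}
```

A real database resolves this by aborting only one victim rather than both; a consistent lock-acquisition order (always rowA before rowB) avoids the collision entirely, which is one reason shorter, more focused transactions help.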

In other words, the longer the application holds locks in the database, the less concurrency the application can handle. Any of the problems listed above can make your application run slowly, and they directly reduce overall throughput and performance, and with them your application's ability to handle a large concurrent user load.
