Why use transactions?
A classic example is the bank transfer problem. Transferring money from one account to another takes two steps: first the money is withdrawn from the source account, then it is deposited into the target account. If an exception occurs after the withdrawal but before the deposit, and there is no transaction protection, the balance of the source account decreases while the balance of the target account never increases.
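The transfer problem can be sketched in Python with SQLite. This is a minimal illustration, not a production banking schema; the `accounts` table and the function name are assumptions for the example. The point is that both updates are committed together or rolled back together:

```python
import sqlite3

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst; roll back on any failure."""
    try:
        cur = conn.cursor()
        # Step 1: withdraw from the source account.
        cur.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                    (amount, src))
        # Step 2: deposit into the target account. If this step raises,
        # the withdrawal above must not survive.
        cur.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
        conn.commit()      # both steps become permanent together
    except Exception:
        conn.rollback()    # neither step takes effect
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

transfer(conn, "alice", "bob", 30)
print(conn.execute("SELECT balance FROM accounts ORDER BY name").fetchall())
# → [(70,), (30,)]
```

Because the two `UPDATE` statements run inside one transaction, the total amount of money is conserved no matter where a failure occurs.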
Transaction description: strictly speaking, a transaction is a collection of database operations that forms a single logical unit of work. More loosely, a transaction is a series of operations that completes as one atomic operation: either all of them take effect or none of them do.
Four basic features of transactions:
Atomicity: all database operations in a transaction form an indivisible whole. These operations are either all executed or none of them take effect.
Consistency: the database moves from one consistent state to another; the data is consistent both before and after the transaction.
Isolation: the intermediate state of one transaction is not visible to other concurrent transactions, so one transaction does not interfere with another.
Durability: once a transaction commits, its changes are permanently recorded in the database.
Why should I introduce a lock?
Data inconsistency may occur when multiple users operate on the database at the same time.
Side effects of concurrency:
Lost update: two or more users modify the same data at the same time, and the modification made by one user overwrites and destroys the modification made by another.
Dirty read: one transaction reads data that another transaction has modified but not yet committed. If the modifying transaction later rolls back, the reader has seen data that never officially existed.
Non-repeatable read: one transaction reads the same data twice while another transaction modifies it in between, so the two reads return different results.
Phantom read: one transaction reads a set of rows, another transaction inserts new rows matching the same condition, and the new rows appear when the first transaction reads again.
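The lost-update anomaly above can be avoided by serializing the read-modify-write step. A minimal sketch in Python, using a `threading.Lock` as a stand-in for a database lock and a plain variable as a stand-in for a row (all names here are illustrative, not from any database API):

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(times):
    """Increment the shared balance `times` times, one locked step each."""
    global balance
    for _ in range(times):
        with lock:            # serialize the read-modify-write
            balance += 1      # read current value, add 1, write back

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # → 200000, no update is lost
```

Without the lock, two threads could both read the same current value and write back the same result, silently losing one increment, which is exactly the lost-update problem described above.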
Transaction isolation level
Isolation level      Dirty read     Non-repeatable read   Phantom read
Read uncommitted     Possible       Possible              Possible
Read committed       Not possible   Possible              Possible
Repeatable read      Not possible   Not possible          Possible
Serializable         Not possible   Not possible          Not possible
Lock Classification
1. From the programmer's point of view, there are two types: pessimistic locks and optimistic locks.
Pessimistic lock: assumes conflicts are likely and relies on the database's locking mechanism to block concurrent access for the duration of the operation.
Optimistic lock: does not rely on the database's locking mechanism; it is implemented at the application layer, typically by checking a version number or timestamp at update time.
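A common way to implement an optimistic lock at the application layer is a version column: the update only succeeds if the version is still the one that was read. A minimal sketch with SQLite (the `items` table and the function name are assumptions for this example):

```python
import sqlite3

def update_stock(conn, item_id, new_stock, expected_version):
    """Update only if nobody changed the row since we read it."""
    cur = conn.execute(
        "UPDATE items SET stock = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_stock, item_id, expected_version))
    conn.commit()
    # rowcount == 0 means the version no longer matches: someone else
    # updated first, so the caller should re-read and retry.
    return cur.rowcount == 1

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, stock INTEGER, version INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 10, 0)")
conn.commit()

print(update_stock(conn, 1, 9, 0))   # → True, version matched
print(update_stock(conn, 1, 8, 0))   # → False, stale version, update rejected
```

No database lock is held between the read and the write; the conflict is detected only at update time, which is why this style is called optimistic.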
2. From the database perspective, there are three types of locks: exclusive locks, shared locks, and update locks.
Exclusive lock: an exclusive (X) lock prevents concurrent transactions from accessing a resource. Other transactions can neither read nor modify data held under an X lock.
Shared lock: a shared (S) lock allows concurrent transactions to read (SELECT) a resource. While a shared (S) lock exists on a resource, no other transaction can modify the data. The shared (S) lock is released as soon as the data has been read, unless the transaction isolation level is set to repeatable read or higher, or a lock hint is used to retain the shared (S) lock for the lifetime of the transaction.
Update lock: used on resources that may be updated. It prevents the common form of deadlock that occurs when multiple sessions read a resource, lock it, and then try to update it.
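The shared/exclusive distinction can be illustrated in application code with a reader-writer lock: many readers may hold it at once, but a writer needs exclusive access. A minimal sketch (this is not how a database engine implements its lock manager; class and method names are illustrative):

```python
import threading

class RWLock:
    """Many concurrent readers (shared), or one writer (exclusive)."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()   # guards the reader count
        self._write = threading.Lock()   # the exclusive lock

    def acquire_read(self):
        # Shared (S) semantics: any number of readers at the same time.
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._write.acquire()    # first reader blocks writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._write.release()    # last reader lets writers in

    def acquire_write(self):
        # Exclusive (X) semantics: blocks both readers and other writers.
        self._write.acquire()

    def release_write(self):
        self._write.release()
```

Two readers can hold the lock simultaneously, mirroring how two transactions can both hold S locks on the same data, while a writer is blocked until every reader has released, mirroring the X lock's exclusivity.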
Lock Granularity
Lock granularity refers to the size of the target being locked. The smaller the granularity, the higher the concurrency; the larger the granularity, the lower the concurrency.
SQL Server supports acquiring locks at several granularities: row, page, key, key range, index, table, or database.