ACID refers to the four properties that a transaction should have in a reliable database management system (DBMS): atomicity, consistency, isolation, and durability. These are the characteristics a reliable database must guarantee, and each is explained in turn below.
Understanding Atomicity
Atomicity means that a transaction executes as a single atom: it cannot be subdivided, and its statements either all execute or none of them do.
In SQL Server, each individual statement can be thought of as being wrapped in a transaction by default. Each statement is therefore atomic on its own: it either executes completely or not at all, with no intermediate state.
Because every T-SQL statement is implicitly wrapped in a transaction, SQL Server guarantees atomicity at the level of the individual statement, but that granularity is very small. If you want to define a larger atomic unit, you must group the statements into an explicit transaction of your own, which gives you user-defined atomic granularity.
For users, custom atomicity implemented with transactions is usually business-related. Take a bank transfer: subtract 100 from account A and add 100 to account B. If the two statements are not atomic and the server loses power after account A is debited, account B never receives the 100. That might make the bank happy, but as a developer you do not want it. Note that in the default autocommit mode an error does not roll back the whole batch: the failed statement throws an exception while the correct statements succeed, which breaks atomicity. SQL Server therefore gives you several options to guarantee transaction-level atomicity.
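The transfer scenario can be sketched in T-SQL. This is a minimal sketch assuming a hypothetical Accounts table with Id and Balance columns:

```sql
-- Hypothetical Accounts table used for illustration.
CREATE TABLE Accounts (Id CHAR(1) PRIMARY KEY, Balance MONEY NOT NULL);
INSERT INTO Accounts VALUES ('A', 500), ('B', 500);

-- In autocommit mode each statement is its own atomic unit, so a failure
-- (or power loss) between the two UPDATEs leaves A debited but B never credited:
UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 'A';
-- <-- a crash here means the debit above is already committed on its own
UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 'B';

-- Wrapping both statements in one explicit transaction makes them a single atom:
BEGIN TRANSACTION;
    UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 'A';
    UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 'B';
COMMIT TRANSACTION;
```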
SQL Server provides two broad classes of methods to guarantee the atomicity of user-defined transactions.

1. Force the transaction to be atomic with SET XACT_ABORT ON
Setting the XACT_ABORT option to ON makes the entire transaction behave as one atom: any run-time error aborts and rolls back the whole transaction. In the following example two statements insert into the database, and you can see that the transaction is atomic once SET XACT_ABORT ON is in effect:
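A minimal sketch of this behavior, assuming a hypothetical table SomeTable with a primary key on Id:

```sql
SET XACT_ABORT ON;  -- any run-time error now aborts and rolls back the whole transaction

BEGIN TRANSACTION;
    INSERT INTO SomeTable (Id) VALUES (1);
    INSERT INTO SomeTable (Id) VALUES (1);  -- duplicate key: run-time error
COMMIT TRANSACTION;
-- With XACT_ABORT ON the error rolls back both inserts, so neither row survives;
-- with it OFF (the default) the first insert would commit, breaking atomicity.
```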
2. Roll back according to the user's own logic (ROLLBACK)
This approach is more flexible: developers can decide for themselves what triggers a rollback, typically by using a TRY...CATCH block or by checking @@ERROR.
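A sketch of the TRY...CATCH pattern, again assuming the hypothetical Accounts table from the transfer example:

```sql
BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 'A';
        UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 'B';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- custom compensation or logging can go here
    -- Older code tests @@ERROR after each statement instead of using TRY...CATCH,
    -- e.g.: UPDATE ...; IF @@ERROR <> 0 ROLLBACK TRANSACTION;
END CATCH;
```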
Understanding Consistency
Consistency means that the database's integrity constraints hold both before the transaction starts and after it ends.
Consistency operates at two levels.

1. Database mechanism level
Consistency at the database level means that, before and after a transaction executes, the data conforms to the constraints you have set (UNIQUE constraints, FOREIGN KEY constraints, CHECK constraints, and so on) and to any triggers. This level is guaranteed by SQL Server itself.
2. Business level
At the business level, consistency means the business rules still hold, and this is the developer's responsibility. Much business-level consistency can be pushed down to the database mechanism level. For example, if a product comes in only two models, a CHECK constraint can restrict the column to exactly those two values.
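The two-model rule could be pushed down to a CHECK constraint like this (the table and model names are hypothetical):

```sql
CREATE TABLE Products (
    Name  NVARCHAR(50) NOT NULL,
    Model NVARCHAR(10) NOT NULL,
    -- The business rule "only two models exist" becomes a database-level constraint:
    CONSTRAINT CK_Products_Model CHECK (Model IN (N'M1', N'M2'))
);

INSERT INTO Products VALUES (N'Widget', N'M1');  -- succeeds
INSERT INTO Products VALUES (N'Widget', N'M3');  -- rejected by the CHECK constraint
```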
For example, suppose you define a transaction whose work consists of three steps A, B, and C. The system's atomicity guarantees that the three steps either all execute or none do; they cannot be split. But those three steps can still corrupt business consistency. Take a transfer of 200 dollars from account A to account B: if your transaction contains only the deduction from account A and not the credit to account B, atomicity still holds (the deduction either fully succeeds or fully fails, never half), yet the credit to B never happens. Debiting A and crediting B form one continuous business action, so to preserve consistency the business logic must put both steps in a single transaction.

Understanding Isolation
Isolation refers to the degree to which concurrent transactions can interfere with each other (that is, the visibility of data between concurrent transactions). When different transactions manipulate the same data at the same time, each transaction has its own complete data space: modifications made by one concurrent transaction must be isolated from those made by any other. When a transaction reads data that another transaction is updating, it sees the data either in the state before the other transaction's modification or in the state after it, never in an intermediate state.
In Windows, multiple processes are not allowed to modify the same file at the same time; this is how Windows guarantees isolation between processes.
In SQL Server, the database files are managed by SQL Server itself, so multiple processes can access the database concurrently.
Interaction between Transactions
SQL Server uses locking and blocking to ensure different levels of isolation between transactions.
Under normal circumstances complete isolation is unrealistic: it would require the database to execute only one transaction at a time (that is, to serialize all transactions), and the cost of that is easy to imagine. To understand isolation in SQL Server, you first need to understand how transactions can interfere with each other. Note that within a single transaction the data is transparent: one operation inside a transaction can read the uncommitted changes made by another operation in the same transaction.
The interference between transactions falls into several categories: dirty reads, non-repeatable reads, and phantom reads.

Dirty Read
A dirty read means one transaction reads data that another transaction has not yet committed, and that data may well be rolled back later:
Here's an example:
Two transactions run concurrently: transaction A inserts a row but does not commit, and transaction B reads during that window. B reads A's uncommitted data, producing a dirty read.
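The scenario can be sketched as two sessions; session 2 only sees the dirty data because it opts into READ UNCOMMITTED (the Accounts table here is hypothetical):

```sql
-- Session 1: modify but do not commit
BEGIN TRANSACTION;
    UPDATE Accounts SET Balance = 999 WHERE Id = 'A';
    -- ...no COMMIT yet...

-- Session 2: under READ UNCOMMITTED the uncommitted value is visible (a dirty read)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM Accounts WHERE Id = 'A';  -- may return 999

-- Session 1 now rolls back, so the 999 that session 2 saw never officially existed
ROLLBACK TRANSACTION;
```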
Non-repeatable Read
A non-repeatable read means that a transaction reads the same record twice but gets different data each time; in other words, the record was modified by another transaction between the two reads. For example, transaction A reads a value of 100, transaction B then changes the value to 200 and commits, and when transaction A reads again it finds the value has changed, leaving the transaction with inconsistent data.
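A two-session sketch of a non-repeatable read, using the hypothetical Accounts table:

```sql
-- Session 1 (READ COMMITTED, the default): read the same row twice
BEGIN TRANSACTION;
    SELECT Balance FROM Accounts WHERE Id = 'A';  -- e.g. returns 100

    -- Session 2 commits a change between the two reads:
    -- UPDATE Accounts SET Balance = 200 WHERE Id = 'A';

    SELECT Balance FROM Accounts WHERE Id = 'A';  -- now returns 200: not repeatable
COMMIT TRANSACTION;

-- Under REPEATABLE READ the first SELECT would hold a shared lock on the row,
-- blocking session 2's UPDATE until session 1 finishes.
```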
Phantom Read
Phantom reads, like non-repeatable reads, are an inconsistency across multiple reads within the same transaction. The difference is the source of the inconsistency: in a non-repeatable read the rows the transaction already read were changed, while in a phantom read the rows it read are unchanged but the set of rows matching the query's condition changes. For example: SELECT ID where name = "ppgogo*" first returns 6 qualifying IDs; before the second read, transaction B renames a row from "DD" to "Ppgogo9", so the second read returns 7 rows, as if a phantom had appeared.
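A sketch of the phantom scenario as two sessions, reusing the hypothetical Products table (names follow the example above; SQL Server's default collation is case-insensitive, so 'Ppgogo9' matches the pattern):

```sql
-- Session 1 (REPEATABLE READ): same predicate, two reads
BEGIN TRANSACTION;
    SELECT Name FROM Products WHERE Name LIKE N'ppgogo%';  -- say 6 rows

    -- Session 2 commits a rename that now matches the predicate:
    -- UPDATE Products SET Name = N'Ppgogo9' WHERE Name = N'DD';

    SELECT Name FROM Products WHERE Name LIKE N'ppgogo%';  -- 7 rows: a phantom
COMMIT TRANSACTION;
-- SERIALIZABLE would take key-range locks on the predicate's range and
-- block session 2's change until session 1 commits.
```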
The solution to inter-transaction interference: setting the isolation level
The interference described above can be avoided by setting different isolation levels. A higher isolation level means more locks and therefore sacrifices performance, so the choice is left to the user to make according to the specific requirements; the default isolation level, READ COMMITTED, meets 99% of real-world needs. Transaction isolation levels exist to let you strike a balance between performance and data correctness: higher is not automatically better, and the right level for the workload is best.
Isolation between transactions is achieved through locks. Locking is a complex topic in its own right, so this article does not explain it in detail; the key point is that blocking is what prevents one transaction from affecting another.
SQL Server provides 5 options to avoid the different levels of interference between transactions.
The isolation levels, from low to high:

1) READ UNCOMMITTED (highest performance, but dirty reads, non-repeatable reads, and phantom reads are all possible): SELECT statements execute without requesting shared locks, so they can read uncommitted data.

2) READ COMMITTED (the default; non-repeatable reads and phantom reads may still occur): only committed data is read.

3) REPEATABLE READ (phantom reads may still occur): implemented by holding shared locks on every row read until the transaction ends, so other transactions cannot modify those rows in between.

4) SERIALIZABLE (lowest performance; key-range locks reduce concurrency): reads are fully serialized, key-range locks cover the entire range a query touches, and conflicting reads and writes block each other. This is the highest isolation level.
5) SNAPSHOT (implemented by keeping row versions in tempdb; it avoids dirty reads and non-repeatable reads, at the cost of extra load on tempdb. Because it is not part of the ANSI SQL standard it is not discussed in detail here.)
Isolation level comparison table:

Isolation level    | Dirty read | Non-repeatable read | Phantom read
READ UNCOMMITTED   | possible   | possible            | possible
READ COMMITTED     | no         | possible            | possible
REPEATABLE READ    | no         | no                  | possible
SERIALIZABLE       | no         | no                  | no
SNAPSHOT           | no         | no                  | no
All in all, the different isolation levels are implemented by taking different locks to block conflicting access. Let's look at an example:
SQL Server prevents dirty reads through blocking, so maintaining isolation comes at the expense of performance:
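The isolation level is set per session before the transaction starts. A sketch (the database name MyDb and the Accounts table are hypothetical):

```sql
-- Raise this session's isolation level before starting the transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM Accounts;  -- range locks are held until COMMIT,
                                    -- blocking concurrent inserts into the range
COMMIT TRANSACTION;

-- SNAPSHOT must first be enabled at the database level:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```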
Understanding Durability
Durability means that once a transaction completes, the changes it made to the database are persisted permanently and will not be rolled back.
Once a transaction is committed, its changes are durable in the database even if an accident such as a power outage occurs.
SQL Server guarantees durability with a write-ahead transaction log: changes a transaction makes to the database are written to the transaction log before they are written to the database files. Log records are ordered by log sequence number (LSN). When the database crashes or the server loses power, SQL Server checks the LSNs on restart: committed changes that never reached the data files are redone from the log, while changes from uncommitted transactions are undone, restoring the database to a consistent, durable state.
Summary:
The ACID properties of a transaction are implemented by the relational database management system (RDBMS). The database management system uses logging to ensure the atomicity, consistency, and durability of transactions: the log records the updates a transaction makes to the database, and if a transaction fails during execution the system can use the log to undo the updates it has already made, returning the database to the state it was in before the transaction began.
The database management system uses locking to implement transaction isolation. When multiple transactions update the same data in the database, only the transaction holding the lock can perform the update; the other transactions must wait until the previous transaction releases the lock before they get a chance to update the data.