Database transactions: a transaction is a logical unit of work in a database and is handled by the transaction-management subsystem of the DBMS.
The following is reproduced from a post by user fjdingsd:
The ACID properties a database must guarantee for operations that run inside transactions:
⑴ Atomicity: all operations contained in a transaction either all succeed or all fail and roll back (the same idea discussed in the previous two posts on transactions). If a transaction succeeds, every one of its operations must be applied to the database; if it fails, none of its operations may have any effect on the database.

⑵ Consistency: a transaction must take the database from one consistent state to another consistent state; that is, the database must be in a consistent state both before and after the transaction executes.
Take a transfer as an example: suppose the balances of user A and user B add up to 5000. Then no matter how A and B transfer money between themselves, or how many times, after each transaction ends the sum of the two balances should still be 5000. That is the consistency of the transaction.
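The invariant in the transfer example can be sketched in Java. This is a minimal in-memory model, not a database API; the field and method names are illustrative. However the balances move, each completed transfer leaves the sum unchanged:

```java
public class ConsistencyDemo {
    // Combined balance starts at 5000 and must still be 5000 after every transfer.
    static int balanceA = 2000;
    static int balanceB = 3000;

    // Transfer 'amount' from A to B as one indivisible unit of work.
    static synchronized void transfer(int amount) {
        balanceA -= amount;
        balanceB += amount;
    }
}
```

After `transfer(500)`, or any number of further transfers, `balanceA + balanceB` is still 5000.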
⑶ Isolation: when multiple users access the database concurrently, for example operating on the same table, the transaction the database opens for each user must not be disturbed by the operations of other transactions; concurrent transactions must be isolated from one another.
To achieve this effect: for any two concurrent transactions T1 and T2, from T1's point of view, T2 either ended before T1 started or started after T1 ended, so that no transaction notices another transaction executing concurrently. The database provides several isolation levels for transactions, which are described later.
⑷ Durability: once a transaction is committed, its changes to the data in the database are permanent; even if the database system later fails, the committed transaction's work is not lost.
For example, when we operate a database through JDBC: once the commit method returns and the user is told that the transaction completed, we can be certain, from the moment we see that prompt, that the transaction really was committed. Even if the database then runs into a problem, our transaction must have executed completely. It must never happen that we see the transaction reported as complete while the database in fact failed to make its effects permanent because of a failure.
The four properties above are the ACID properties of a transaction. Now let's focus on isolation. When multiple threads open transactions that operate on the same data, the database system must isolate those operations so that each thread reads accurate data. Before introducing the isolation levels the database provides, let's look at the problems that can occur if the isolation of transactions is not considered:
- First-class lost update (also called rollback loss): the update is lost because of a rollback. Suppose two transactions update count at the same time and both read 100. Transaction 1 updates successfully and commits, so count = 100 + 1 = 101. Transaction 2 then fails for some reason and rolls back, restoring count to the 100 it originally read. At this point transaction 1's update has been lost.
- Dirty read: this anomaly occurs when one transaction reads data that another transaction has modified but not yet committed. For example, transaction 1 updates count to 101 but does not commit; transaction 2 reads count at this moment and sees 101 instead of 100. Transaction 1 is then rolled back for some reason, and transaction 2 is left holding a value that never validly existed — the beginning of a nightmare.
- Non-repeatable read: a transaction queries the same row of data two or more times but gets different results; that is, within one transaction you cannot repeatably (i.e., multiple times) read a row and be guaranteed the same result each time. The cause is that some other transaction updated the row between the two queries. For example, transaction 1 queries count and gets 100; transaction 2 then updates count to 101; transaction 1 reads count again and gets 101, so its two reads differ.
- Second-class lost update (also called overwrite loss): the update is lost because it is overwritten by another transaction. For example, two transactions update count at the same time and both read the initial value 100. Transaction 1 updates successfully and commits, so count = 100 + 1 = 101. Transaction 2 then also updates successfully and commits, and count = 100 + 1 = 101 again, because transaction 2 still incremented from 100. Transaction 1's update is lost.
- Phantom read: a phantom read resembles a non-repeatable read, but it concerns the number of rows rather than the values in a row. The anomaly is that a transaction counts rows twice and gets different numbers, as if it were seeing phantoms, which is presumably how the name arose. For example, transaction 1 queries how many records the order table has; transaction 2 inserts a record; transaction 1 queries the count again and finds it differs from the first time. That is a phantom read.
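The second-class lost update above can be sketched in straight-line Java. This is a single-threaded simulation of the interleaving, not real concurrent transactions; the class and method names are illustrative:

```java
public class LostUpdateDemo {
    // Both "transactions" read the same starting value, then each writes back
    // its own read value + 1. The second write is based on a stale read, so
    // transaction 1's increment is lost.
    static int secondClassLostUpdate(int count) {
        int readByTx1 = count;  // transaction 1 reads 100
        int readByTx2 = count;  // transaction 2 also reads 100
        count = readByTx1 + 1;  // transaction 1 commits: 101
        count = readByTx2 + 1;  // transaction 2 commits: still 101, not 102
        return count;
    }
}
```

Starting from 100, two increments should yield 102, but this interleaving yields 101.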
To address these situations, the database provides isolation levels, implemented through its locking mechanism, as follows:
Read uncommitted: at this isolation level, even if a transaction's UPDATE statement has not been committed, other transactions can read the change, so several of the anomalies above can occur. It is very error-prone, offers essentially no safety, and is basically never used.
Read committed: at this isolation level a transaction sees only the committed updates of other transactions, never uncommitted ones, which eliminates dirty reads and first-class lost updates. This is the default isolation level of most databases, such as Oracle and SQL Server.
Repeatable read: at this isolation level, when a transaction queries the same data two or more times, the content of the results is the same, but there is no guarantee that the number of rows returned is the same. Rows that have been read may not be written by others, which eliminates non-repeatable reads and second-class lost updates. This is the default isolation level of the MySQL database.
Serializable: a transaction does not allow other transactions to execute concurrently; reads are fully serialized. Any data that exists may not be written while it is being read, although concurrent reads are still allowed, which eliminates phantom reads. This is the highest transaction isolation level: the safest and least troublesome, but the least efficient, so it is generally not used.
The following table shows, for each isolation level, whether each anomaly can occur (Y) or is prevented (N):
| Level \ Anomaly | First-class lost update | Dirty read | Non-repeatable read | Second-class lost update | Phantom read |
| --- | --- | --- | --- | --- | --- |
| Read uncommitted | Y | Y | Y | Y | Y |
| Read committed | N | N | Y | Y | Y |
| Repeatable read | N | N | N | N | Y |
| Serializable | N | N | N | N | N |
Database Lock Classification
Locks can generally be divided into two categories: pessimistic locks and optimistic locks. A pessimistic lock is what we usually mean by the database's own locking mechanism, while an optimistic lock generally refers to a locking scheme the user implements, such as the optimistic locking implemented by Hibernate; even programming languages apply the optimistic-locking idea.

Pessimistic lock: as the name implies, it is pessimistic. It takes a conservative attitude toward outside modification of the data, assuming the data may be modified at any moment, so the data is locked for the entire time it is being processed. Pessimistic locks generally rely on the locking mechanism provided by the relational database; in fact, the row locks and table locks in a relational database, whether read or write locks, are all pessimistic locks.
Pessimistic locks, classified by how they are used:
Shared lock (abbreviated as S lock): also called a read lock. After transaction A puts an S lock on object T, other transactions can only put S locks on T, so multiple transactions can read it at the same time, but no transaction can write it until A releases its S lock.
Exclusive lock (abbreviated as X lock): also called a write lock. After transaction A puts an X lock on object T, no other transaction can put any lock on T; only transaction A can read and write object T, until A releases the X lock.
Update lock (abbreviated as U lock): used to reserve the right to place an X lock on an object. It allows other transactions to read, but not to take a U lock or an X lock, and when the object it protects is about to be updated, it is promoted to an X lock. Its main purpose is to prevent deadlock. With shared locks alone, modifying data takes two steps: first acquire a shared lock and read the data, then upgrade the shared lock to an exclusive lock before performing the modification. If two or more transactions simultaneously hold a shared lock on an object and each tries to upgrade to an exclusive lock when modifying it, neither releases its shared lock while waiting for the other to do so, which creates a deadlock. If instead an update lock is requested directly before the modification and is upgraded to an exclusive lock only at the moment the data is modified, the deadlock is avoided.
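Java's in-process `ReentrantReadWriteLock` shows the same problem in miniature: it deliberately refuses the shared-to-exclusive upgrade, precisely because two readers upgrading at once would deadlock. This sketch is an analogy with the database's S/X locks, not the database mechanism itself:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeDemo {
    // Returns whether the write lock can be acquired while this thread already
    // holds the read lock. ReentrantReadWriteLock forbids this read -> write
    // upgrade, so tryLock() returns false instead of deadlocking.
    static boolean tryUpgrade() {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.readLock().lock();
        try {
            return rw.writeLock().tryLock(); // false: upgrade not permitted
        } finally {
            rw.readLock().unlock();
        }
    }
}
```

If a blocking `writeLock().lock()` were used here instead, the thread would wait forever on itself, which is exactly the deadlock the update lock is designed to avoid.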
Pessimistic locks, classified by the scope they cover:
- Row lock: the scope of the lock is a single row. The database can use row locks only when it can determine which rows need to be locked; if it does not know in advance which rows will be affected, it uses a table lock. For example, suppose a user table has a primary key id and a birthday column. With a statement like "update ... where id = ?" the database knows exactly which row is affected and uses a row lock, whereas with a statement like "update ... where birthday = ?" it may use a table lock, because it does not know beforehand which rows will be affected.
- Table Lock: The scope of the lock is the entire table.
Optimistic lock: as the name implies, it is optimistic. Each time it operates on the data it assumes nobody else will modify it, so it takes no lock; only at update time does it check whether the data was modified in the meantime. Users implement it themselves. Given that the database already provides pessimistic locks that are easy to use, why use optimistic locking? For workloads with far more reads than writes: when most operations are reads, the lock taken by a single update blocks all those reads and reduces throughput, and acquiring and releasing the lock itself has overhead. It is better to find another way to synchronize the very small number of updates. Conversely, if the read/write ratio is not heavily skewed, or your system has no responsiveness or throughput bottleneck, do not use optimistic locking: it adds complexity and brings extra risk.
Optimistic lock implementation method:
- Version number (versioning): add a version identifier to the data; in a database this means adding a version field to the table and incrementing it by 1 on every update. When reading the data, read the version too; when updating, compare the current version with the version read at the start. If the current version is larger than the old one, some other transaction has already updated the data and incremented the version, so the update must be rejected, and the user decides what to do with that notification, for example start over. The key point is that checking the version and performing the update must execute as one atomic unit; otherwise, another transaction may change the version between your check and your update, and your update may overwrite that transaction's update, producing a second-class lost update. So you can use a statement such as "update ... and version = <old version>" and treat the returned row count as the notification: 0 means the update failed because version was changed; non-zero means the update succeeded.
- Timestamp: basically the same as the version number, except that the judgment is made with a timestamp. Note that the timestamp must come from the database server; it cannot be the business system's clock.
- Field to be updated: similar to the version number, except that no extra field is added; a real data field serves as the version information, because sometimes we cannot change the table structure of an old system. Suppose the field to be updated is called count. First read count; at update time, compare the value of count in the database with the value we expect (the value read at the start). If they match, write the modified value to the field; otherwise the update fails. Java's atomic classes, such as AtomicInteger, use the same idea.
- All fields: similar to the field to be updated, except that all fields serve as the version information, and the update is performed only if none of the fields has changed.
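The version-number approach can be sketched as a minimal in-memory model. The `VersionedRecord` class and `tryUpdate` method are illustrative names, not a real library API; in a real database the same check is done by the `update ... where ... and version = <old version>` statement and its affected-row count:

```java
public class VersionedRecord {
    private int count;
    private long version;

    VersionedRecord(int count) { this.count = count; this.version = 0; }

    synchronized int count()    { return count; }
    synchronized long version() { return version; }

    // Returns the number of "rows" updated: 1 on success, 0 if the version no
    // longer matches (another transaction committed in between). The check and
    // the write happen atomically, inside one synchronized block.
    synchronized int tryUpdate(int newCount, long expectedVersion) {
        if (version != expectedVersion) return 0; // stale read: reject
        count = newCount;
        version++;                                // bump version on every write
        return 1;
    }
}
```

A caller reads the record and its version, computes the new value, and retries from the top if `tryUpdate` returns 0.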
The difference between these optimistic-locking approaches: a newly designed system can use the version-number or timestamp method, which requires adding a field and whose scope is the whole row — no matter which field is modified, the version is updated, so two transactions that update two unrelated fields of the same record are still mutually exclusive and cannot proceed concurrently. An old system whose table structure cannot be modified can use data fields as the version-control information; no new field is needed, and as long as the fields modified by another transaction do not overlap with the fields modified by the current transaction, the two can proceed concurrently, giving more concurrency.
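The "field to be updated" approach described above is exactly the compare-and-set idea behind Java's AtomicInteger: update only if the current value still equals the value you read. A small sketch (the helper method name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Succeeds only if count still holds 'expected' -- the same idea as
    // "update ... set count = ? where count = <value read earlier>".
    static boolean incrementIfUnchanged(AtomicInteger count, int expected) {
        return count.compareAndSet(expected, expected + 1);
    }
}
```

If another thread changed the value since it was read, `compareAndSet` returns false and the caller retries with a fresh read, so no increment is ever silently lost.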
Now look at the four isolation levels that the MySQL database provides:
①Serializable: prevents dirty reads, non-repeatable reads, and phantom reads.
②Repeatable read: prevents dirty reads and non-repeatable reads.
③Read committed: prevents dirty reads.
④Read uncommitted: the lowest level; nothing is guaranteed.
The highest of the four isolation levels is serializable and the lowest is read uncommitted; the higher the level, the lower the execution efficiency. A level like serializable locks the table (similar to a lock in Java multithreading), so other threads can only wait outside the lock; which isolation level to choose should therefore depend on the actual situation. The default isolation level in the MySQL database is repeatable read.
The MySQL database supports all four of the isolation levels above, with repeatable read as the default. The Oracle database supports only the serializable and read committed levels, with read committed as the default.
View the isolation level of the current transaction in the MySQL database:
SELECT @@tx_isolation;
Set the isolation level of the transaction in the MySQL database:
SET [GLOBAL | SESSION] TRANSACTION ISOLATION LEVEL <isolation level name>;  or:  SET tx_isolation = '<isolation level name>';
Example 1: view the isolation level of the current transaction: SELECT @@tx_isolation;
Example 2: set the isolation level of the transaction to the READ UNCOMMITTED level: SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Or: SET tx_isolation = 'READ-UNCOMMITTED';
Remember: the isolation level of the database must be set before the transaction is opened!
If you use JDBC to set the isolation level of a database transaction, do it before calling the setAutoCommit(false) method of the Connection object. You set the isolation level of the current connection by calling the Connection object's setTransactionIsolation method, passing one of the Connection object's isolation-level constants as the level parameter:
The part of the JDBC code that sets the isolation level:
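A minimal sketch of that step (the beginTransaction helper is illustrative; setTransactionIsolation and setAutoCommit are the standard java.sql.Connection methods). Note the order: the isolation level is set first, then the transaction is opened:

```java
import java.sql.Connection;
import java.sql.SQLException;

public class JdbcIsolation {
    // Set the isolation level BEFORE opening the transaction with
    // setAutoCommit(false); afterwards, work until commit()/rollback().
    static void beginTransaction(Connection conn, int level) {
        try {
            conn.setTransactionIsolation(level); // e.g. Connection.TRANSACTION_READ_UNCOMMITTED
            conn.setAutoCommit(false);           // the transaction is now open
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With a real connection this would be called as beginTransaction(conn, Connection.TRANSACTION_READ_UNCOMMITTED), where conn comes from DriverManager.getConnection(...) with your own URL and credentials (not shown here).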
Postscript: the isolation-level setting is valid only for the current connection. For the MySQL command-line client, each window is equivalent to a connection, and the isolation level set in the current window is valid only for transactions in that window. For JDBC database operations, each Connection object is equivalent to a connection, and the isolation level set on a Connection object is valid only for that Connection object, and has nothing to do with the Connection objects of other connections.