I. Four elements of a transaction
For a database transaction to execute correctly, it must provide four basic properties: atomicity, consistency, isolation, and durability, collectively known as ACID. There are two main ways to implement ACID today: one is write-ahead logging, i.e. journaling (which modern databases are generally based on), and the other is shadow paging.
- Atomicity: All operations in a transaction either all complete or none of them do; the transaction cannot stop partway through. If an error occurs during execution, the transaction is rolled back (Rollback) to the state before it started
- Consistency: The database must be in a consistent state both before and after the transaction executes
- Isolation: Concurrent transactions do not interfere with each other and execute independently of one another
- Durability: Once a transaction completes, the changes it made to the database are persisted in the database
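Atomicity and rollback can be seen in a few lines of code. The following is a minimal runnable sketch using SQLite (Python's stdlib `sqlite3` module) as a stand-in for any ACID-compliant engine; the `accounts` table and the transfer scenario are illustrative inventions, not from the text:

```python
import sqlite3

# Set up a toy ledger: a transfer must debit one account and credit another,
# and atomicity means a crash in between must leave neither change visible.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # one transaction: both updates commit, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 60 "
                     "WHERE name = 'alice'")
        # ... the matching credit to bob would go here ...
        raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    pass  # the context manager has already rolled the transaction back

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0} -- untouched, as atomicity requires
```

The `with conn:` block issues an implicit COMMIT on success and a ROLLBACK on exception, which is exactly the all-or-nothing behavior described above.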
II. MyISAM table locks
For MyISAM, the basic unit of execution of all user activity is the individual statement (SELECT, INSERT, UPDATE, DELETE, etc.), and each statement is atomic. In transactional terms, MyISAM tables effectively always run in autocommit=1 mode, and atomic operations typically provide comparable integrity with better performance. This means you can be sure that while each update runs, no other user can interfere with it; there is never an automatic rollback (which can happen with transactional tables if you are not careful); and the MySQL server guarantees there are no dirty reads.
In general, the integrity problems that transactions solve can also be addressed with LOCK TABLES or atomic updates, which ensures the server never aborts statements automatically, a common problem in transactional database systems. MyISAM supports only table-level locking, which allows many concurrent readers or a single writer. LOCK TABLES locks a table for the current thread; if the table is already locked by another thread, the statement blocks until all requested locks can be acquired. UNLOCK TABLES releases any locks held by the current thread; all tables locked by the current thread are also implicitly unlocked when the thread issues another LOCK TABLES, or when its connection to the server is closed. If you want to ensure that no other thread touches the data between a SELECT and an UPDATE in the same business flow, you must use LOCK TABLES. If you acquire a READ LOCAL lock on a table (as opposed to a WRITE lock), the table still allows concurrent inserts at its end, so other clients can insert while you read; the newly inserted records are not visible to the client holding the read lock until it releases the lock. With INSERT DELAYED, the rows to insert are placed in a server-side queue until the lock is released, so the client does not have to wait for the insert to complete.
III. InnoDB transactions and locking
In InnoDB, all user activity occurs inside a transaction, and the transaction is the basic unit of execution. If autocommit mode is enabled (autocommit = 1), each SQL statement runs as its own transaction. If autocommit mode is turned off with SET autocommit = 0, then a session can be regarded as always having a transaction open: a COMMIT or ROLLBACK statement ends the current transaction and a new transaction begins, and both statements release all InnoDB locks set during the current transaction.
InnoDB supports finer-grained row-level locking in addition to table-level locking; row locks and table locks coexist at multiple granularities.
- Shared (S) locks: a shared lock lets the transaction that holds it read a row. If transaction T1 holds an S lock on row R, a lock request on R from another transaction T2 is handled as follows: T2's request for an S lock is granted immediately, so T1 and T2 each hold a shared lock on R; but T2's request for an X lock on R is not granted immediately. Reads thus appear to have precedence over writes, so write requests can be starved
- Exclusive (X) locks: an exclusive lock lets the transaction that holds it update or delete a row (write). If transaction T1 holds an X lock on row R, then no lock request of any type from T2 on R can be granted immediately
- Intention locks: an intention lock is a table-level lock in InnoDB that indicates an S or X lock will later be set on a row of that table within the transaction. Intention shared (IS) means transaction T intends to set S locks on rows of table t; intention exclusive (IX) means transaction T intends to set X locks on rows of table t. The intention-locking protocol is as follows:
1. Before a transaction can acquire an S lock on a row of table t, it must first acquire an IS lock (or stronger) on t
2. Before a transaction can acquire an X lock on a row of table t, it must first acquire an IX lock on t
3. These rules can be summarized as the following lock-type compatibility matrix:

|    | X        | IX         | S          | IS         |
|----|----------|------------|------------|------------|
| X  | Conflict | Conflict   | Conflict   | Conflict   |
| IX | Conflict | Compatible | Conflict   | Compatible |
| S  | Conflict | Conflict   | Compatible | Compatible |
| IS | Conflict | Compatible | Compatible | Compatible |
4. A lock is granted to a requesting transaction if it is compatible with existing locks, but not if it conflicts with an existing lock
A transaction must then wait until the conflicting lock is released. If lock requests conflict in a way that can never be granted, a deadlock arises. When a deadlock occurs, InnoDB chooses one of the transactions as the victim, rolls it back with an error, and releases its locks, so that the deadlock is broken. An intention lock blocks nothing except requests for a full table lock (for example, LOCK TABLES ... WRITE); the main purpose of IX and IS locks is to indicate that someone is locking, or is about to lock, a row of the table
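The lock-type compatibility matrix described above can be encoded as a small lookup table. The following is an illustrative sketch, not InnoDB's actual implementation; `compatible(held, requested)` answers whether a lock of type `requested` can be granted while a lock of type `held` is in place:

```python
# Rows are the lock type already held; columns the lock type requested.
COMPATIBLE = {
    "X":  {"X": False, "IX": False, "S": False, "IS": False},
    "IX": {"X": False, "IX": True,  "S": False, "IS": True},
    "S":  {"X": False, "IX": False, "S": True,  "IS": True},
    "IS": {"X": False, "IX": True,  "S": True,  "IS": True},
}

def compatible(held: str, requested: str) -> bool:
    """True if the requested lock can be granted alongside the held lock."""
    return COMPATIBLE[held][requested]

# Intention locks conflict only with table-wide locks:
print(compatible("IS", "IX"))  # True  -- two transactions may both intend to lock rows
print(compatible("S", "X"))    # False -- a table-wide S lock blocks an X request
```

Note that the matrix is symmetric: for any pair of lock types, `compatible(a, b) == compatible(b, a)`.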
IV. Row locks and table locks: pros and cons
Advantages of row-level locking:
- Fewer lock conflicts when many threads access different rows
- Fewer changes to undo when a transaction rolls back
- Makes it possible to lock a single row for a long time
Disadvantages of row-level locking:
- Consumes more memory than page-level or table-level locking
- Slower than page-level or table-level locking when used on a large part of the table, because many more locks must be acquired
- Noticeably slower than other lock types if you frequently perform GROUP BY operations on most of the data, or must frequently scan the entire table
- With higher-level locking you can more easily tune an application by supporting different lock types, because the lock overhead is lower than for row-level locking
MySQL's table locking mechanism: when a lock is released, it is first made available to the threads in the write lock queue, and then to the threads in the read lock queue. This means that if a table has many updates, SELECT statements wait until there are no more updates. Table updates are generally considered more important than table retrievals and are therefore given higher priority; this ensures that update activity on a table cannot be "starved" even when there is heavy SELECT activity on the table
For a WRITE lock, MySQL proceeds as follows:
- If there is no lock on the table, put a write lock on it
- Otherwise, put the lock request in the write lock queue

For a READ lock, MySQL proceeds as follows:
- If there is no write lock on the table, put a read lock on it
- Otherwise, put the lock request in the read lock queue
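The write-preferring queue discipline above can be sketched as a toy model. This is illustrative only (MySQL's real implementation lives in mysys/thr_lock.c and is considerably more involved); the class and method names are invented:

```python
from collections import deque

class TableLock:
    """Toy model of MySQL table-lock scheduling: writers queue ahead of
    readers, and a released lock is granted to waiting writers first."""

    def __init__(self):
        self.writer = None       # id of the thread holding the write lock
        self.readers = set()     # ids of threads holding read locks
        self.write_queue = deque()
        self.read_queue = deque()

    def request_write(self, tid):
        if self.writer is None and not self.readers:
            self.writer = tid            # no lock on the table: grant it
        else:
            self.write_queue.append(tid) # otherwise queue behind writers

    def request_read(self, tid):
        # A waiting writer also blocks new readers, so writes cannot starve.
        if self.writer is None and not self.write_queue:
            self.readers.add(tid)
        else:
            self.read_queue.append(tid)

    def release(self, tid):
        if self.writer == tid:
            self.writer = None
        else:
            self.readers.discard(tid)
        # Serve the write queue first, then drain the read queue.
        if self.writer is None and not self.readers and self.write_queue:
            self.writer = self.write_queue.popleft()
        elif self.writer is None and not self.write_queue:
            while self.read_queue:
                self.readers.add(self.read_queue.popleft())

# Example: a writer is not starved by a stream of readers.
lock = TableLock()
lock.request_read("r1")    # granted immediately
lock.request_write("w1")   # must wait for r1
lock.request_read("r2")    # queued behind the waiting writer
lock.release("r1")
print(lock.writer)  # w1 -- the write queue is served first
```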
Table locking is better than row-level locking in the following cases:
- Most statements on the table are reads
- Reads and writes are mixed, but the writes are updates or deletes of a single row that can be fetched with one key read:
UPDATE tbl_name SET column=value WHERE unique_key_col=key_value;
DELETE FROM tbl_name WHERE unique_key_col=key_value;
- SELECT statements combined with concurrent INSERT statements, with only a few UPDATE or DELETE statements
- Many scans or GROUP BY operations on the entire table, with no write operations
- For large tables, table locking is better than row locking for most applications, although it has some drawbacks
Table locking considerations:
- Use the SET LOW_PRIORITY_UPDATES=1 statement to specify that all updates in a specific connection should use low priority;
- Use the LOW_PRIORITY attribute to give a lower priority to a particular INSERT, UPDATE, or DELETE statement;
- Use the HIGH_PRIORITY attribute to give a higher priority to a particular SELECT statement;
- Start mysqld with a low value for the max_write_lock_count system variable, to force MySQL to temporarily raise the priority of all SELECT statements waiting for a table after a specific number of writes to it have completed;
- Start mysqld with --low-priority-updates, which gives every statement that updates (modifies) a table a lower priority than SELECT statements;
- If you have problems with INSERT combined with SELECT, switch to new MyISAM tables, which support concurrent SELECT and INSERT;
- If you mix inserts and deletes on the same table, INSERT DELAYED can help a lot;
- If you have problems mixing SELECT and DELETE statements on the same table, the LIMIT option of DELETE can help;
- Using SQL_BUFFER_RESULT with SELECT statements can help shorten the time the table stays locked;
- You could change the lock code in mysys/thr_lock.c to use a single queue, in which case write locks and read locks would have the same priority;
- If you do not mix updates with selects that need to examine many rows in the same table, operations can run in parallel;
- You can use LOCK TABLES to increase speed, since many updates within a single lock are much faster than updates without locking; splitting a table's contents into several tables can also help.
V. Choosing MyISAM
In general, to decide whether you need a storage engine with row-level locking, you should look at what the application does and what mix of SELECT and UPDATE statements it uses. For example, most Web applications perform many SELECTs, rarely DELETE, update only by key value, and insert only a few rows. In MySQL, storage engines that use table-level locking cannot deadlock on table locks: this is guaranteed by always requesting all necessary locks at the start of a query and always locking tables in the same order.
You can analyze table lock contention on a system by examining the Table_locks_waited and Table_locks_immediate status variables:
mysql> SHOW STATUS LIKE 'Table%';
+-----------------------+---------+
| Variable_name         | Value   |
+-----------------------+---------+
| Table_locks_immediate | 1151552 |
| Table_locks_waited    | 15324   |
+-----------------------+---------+
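A quick way to interpret these two counters is the fraction of lock requests that had to wait. The helper below is a hypothetical convenience function, not part of MySQL:

```python
def table_lock_contention(immediate: int, waited: int) -> float:
    """Fraction of table-lock requests that could not be granted
    immediately. Values above a few percent suggest that table
    locking is becoming a bottleneck."""
    total = immediate + waited
    return waited / total if total else 0.0

# Using the SHOW STATUS numbers from the example above:
ratio = table_lock_contention(1151552, 15324)
print(f"{ratio:.2%}")  # 1.31% of lock requests had to wait
```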
If the data file contains no free blocks in the middle (deleting or updating rows in the middle of the table can leave holes), INSERT statements do not conflict with reads: you can freely mix concurrent INSERT and SELECT statements on a MyISAM table without locking, and rows can be inserted while other clients are reading the table. Records are always inserted at the end of the data file. If concurrent inserts are not possible, then in order to run many inserts and selects on the same table, you can insert rows into a staging table and periodically move its records into the real table; this also works for batched, deferred insertion:
mysql> LOCK TABLES real_table WRITE, insert_table WRITE;
mysql> INSERT INTO real_table SELECT * FROM insert_table;
mysql> TRUNCATE TABLE insert_table;
mysql> UNLOCK TABLES;
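The staging-table pattern above can be demonstrated end to end. The sketch below uses SQLite so it is self-contained and runnable; SQLite has no TRUNCATE or LOCK TABLES, so a transaction plus DELETE plays the equivalent role here, and the table schemas are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE real_table (id INTEGER, val TEXT)")
conn.execute("CREATE TABLE insert_table (id INTEGER, val TEXT)")

# Clients batch their rows into the staging table...
conn.executemany("INSERT INTO insert_table VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])
conn.commit()

# ...and one periodic job moves them into the real table atomically,
# so readers of real_table never see a half-moved batch.
with conn:  # BEGIN ... COMMIT
    conn.execute("INSERT INTO real_table SELECT * FROM insert_table")
    conn.execute("DELETE FROM insert_table")

print(conn.execute("SELECT COUNT(*) FROM real_table").fetchone()[0])    # 3
print(conn.execute("SELECT COUNT(*) FROM insert_table").fetchone()[0])  # 0
```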
VI. Choosing InnoDB
Because InnoDB uses row locking, deadlocks are possible. This is because InnoDB acquires row locks automatically during SQL statement processing, not all at once when the transaction starts. For InnoDB and BDB (BerkeleyDB) tables, if you explicitly lock a table with LOCK TABLES, MySQL uses only table locking; it is recommended not to use LOCK TABLES with them, because InnoDB uses automatic row-level locking and BDB uses page-level locking to ensure transaction isolation.
InnoDB also supports FOREIGN KEY constraints.
Generally speaking, MyISAM suits read-mostly workloads with a small number of writes; otherwise InnoDB or another engine is the better choice.
VII. Pessimistic locking and optimistic locking
Pessimistic locking and optimistic locking are not standard database concepts but rather popular terms.
- Pessimistic locking: pessimistic locking takes the conservative attitude that the data is likely to be modified unexpectedly, and relies on the database's native lock mechanism to keep the current transaction safe, preventing other concurrent transactions from corrupting the target data (or this transaction from corrupting theirs). The lock is acquired before the transaction touches the data and released only after execution completes. For long transactions this can seriously reduce the concurrency of the system
LOCK TABLES a WRITE;
INSERT INTO a VALUES (1,23), (2,34), (4,33);
INSERT INTO a VALUES (8,26), (6,29);
UNLOCK TABLES;
Locking the table speeds up a multi-statement INSERT, because the index buffer is flushed to disk only once, after all the INSERT statements have completed; normally there are as many index-buffer flushes as there are INSERT statements. If you can insert all rows with a single statement, no explicit locking is needed. For transactional tables, use BEGIN and COMMIT instead of LOCK TABLES to speed up insertion
- Optimistic locking: in contrast to pessimistic locking, optimistic locking first assumes the data will not be modified by concurrent operations and that there is no conflict; only when the update is committed is the data formally checked for conflicts. If a conflict is found, the update is declared failed; otherwise the data is updated. This avoids long transactions and database lock mechanisms, so the system's concurrent processing capacity is preserved. The approximate flow of a business process using optimistic locking is as follows:
Step 1: execute a query, SELECT some_column AS old_value FROM some_table WHERE id = id_value (and assume for now that the value is not modified by other concurrent transactions during this business process)
... Step N: old_value participates in intermediate business processing, for example it is transformed into new_value = f(old_value). This can take a long time, but no row or table lock is held on the row containing some_column, so other concurrent transactions remain free to read or modify it
... Final step: perform a conditional update, UPDATE some_table SET some_column = new_value WHERE id = id_value AND some_column = old_value (the extra condition checks whether old_value has been modified in the meantime; if no row matches, the update fails and the business process must retry)
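The three steps above can be sketched as runnable code. The following uses SQLite so it is self-contained; the table and column names (`some_table`, `some_column`) are the hypothetical ones from the text, and `rowcount` plays the role of the conflict check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE some_table (id INTEGER PRIMARY KEY, some_column INTEGER)")
conn.execute("INSERT INTO some_table VALUES (1, 10)")
conn.commit()

def optimistic_update(conn, id_value, f):
    # Step 1: read the current value without taking any lock.
    (old_value,) = conn.execute(
        "SELECT some_column FROM some_table WHERE id = ?", (id_value,)
    ).fetchone()
    # Step N: long-running business processing on the value.
    new_value = f(old_value)
    # Final step: conditional update. rowcount == 0 means another
    # transaction changed the row since we read it -> caller must retry.
    cur = conn.execute(
        "UPDATE some_table SET some_column = ? "
        "WHERE id = ? AND some_column = ?",
        (new_value, id_value, old_value),
    )
    conn.commit()
    return cur.rowcount == 1

print(optimistic_update(conn, 1, lambda v: v + 5))  # True: no conflict
```

A conflicting writer would make the conditional UPDATE match zero rows, turning the lost-update problem into an explicit, retryable failure rather than silent corruption.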