Advanced Knowledge Explained:
First, Index knowledge:
1. The concept of an index: a (sorted) data structure that lets the database find rows quickly.
2. MySQL's InnoDB engine uses a B+tree to store index values.
3. Clustered index concept: the primary key index is generally the clustered index. Its defining characteristic is that the table's row data is physically stored in the same order as the index values.
Note: 1. A clustered index is built on the primary key by default. 2. SQL Server also lets you build a clustered index yourself, in which case the primary key index is not the clustered index.
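As a minimal sketch (the table and column names are hypothetical), in InnoDB the primary key automatically becomes the clustered index:

```sql
-- Hypothetical InnoDB table: the PRIMARY KEY is the clustered index,
-- so row data is physically stored in app_id order.
CREATE TABLE tb_app (
    app_id      INT NOT NULL,
    appname     VARCHAR(100),
    appprovider VARCHAR(100),
    PRIMARY KEY (app_id)
) ENGINE = InnoDB;
```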
4. Unique index: guarantees the column's data is unique, such as an identity card number. The primary key index is also unique by default.
5. The concept of a covering index (index coverage):
A covering index is not an index type. It refers to the case where the fields in the SELECT list match, in number and order, the fields of an existing index. When that happens the index is used, and the values are read directly from the index without needing to fetch the data rows.
Example: explain select appname, appprovider from Tb_app;
You can see that possible_keys is NULL, yet the key column shows that the idx_appname_appprovider index is actually used.
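The example above can be reproduced with something like the following sketch (column types are assumed; the index name comes from the note):

```sql
-- Build the composite index the note refers to.
CREATE INDEX idx_appname_appprovider ON tb_app (appname, appprovider);

-- The SELECT list matches the indexed columns, so the query can be
-- answered from the index alone; EXPLAIN's Extra column typically
-- shows "Using index" in this case.
EXPLAIN SELECT appname, appprovider FROM tb_app;
```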
6. Note Points for indexing:
1. Fields used in WHERE, GROUP BY, ORDER BY, and join queries are the usual candidates for indexing.
2. A non-primary-key (secondary) index stores the primary key value, so looking up row data through it goes indirectly through the primary key index (a second, "back to table" lookup).
3. When a join query uses a LEFT or RIGHT JOIN, the index is generally built on the driven table. For example, in A LEFT JOIN B all rows of table A are returned regardless, and by the principle of letting the small table drive the large one, the index should be built on table B.
4. It is generally preferable to create a composite index (an index made up of multiple columns), because: 1. a composite index saves space; 2. the more indexes a table has, the slower table updates become; 3. a composite index can sometimes serve as a covering index.
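The space-saving point can be sketched as follows (the table t_user and its columns are made up for illustration):

```sql
-- Instead of maintaining three single-column indexes ...
--   CREATE INDEX idx_name ON t_user (name);
--   CREATE INDEX idx_age  ON t_user (age);
--   CREATE INDEX idx_city ON t_user (city);
-- ... one composite index takes less space, slows updates less,
-- and can still cover queries on its leading columns:
CREATE INDEX idx_name_age_city ON t_user (name, age, city);
```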
7. Index Invalidation issues:
1. To use a composite index, the WHERE condition fields should follow the order of the indexed fields. For example, with idx_name_age (indexed on name, then age), the WHERE clause must include name for this index to be usable.
MySQL in practice: name can appear in the second or third position of the WHERE clause and the index is still used, because the MySQL optimizer automatically reorders the conditions. If the name field is absent, the index is not used at all; if only some leading columns are present, only that part of the index is used, which shows up as a smaller key_len value in EXPLAIN.
(The rule of thumb: "the eldest brother cannot die", i.e. the first field of a composite index must appear.)
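The leftmost-prefix rule above can be checked with EXPLAIN; this is a sketch against a hypothetical table t_user:

```sql
CREATE INDEX idx_name_age ON t_user (name, age);

-- Uses the full index: the leading column name is present.
EXPLAIN SELECT * FROM t_user WHERE name = 'tom' AND age = 20;

-- Also uses the full index: the optimizer reorders the conditions.
EXPLAIN SELECT * FROM t_user WHERE age = 20 AND name = 'tom';

-- Uses only the name part of the index (smaller key_len in EXPLAIN).
EXPLAIN SELECT * FROM t_user WHERE name = 'tom';

-- Cannot use the index: the leading column name is missing.
EXPLAIN SELECT * FROM t_user WHERE age = 20;
```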
Second, Locks
1. Lock classification: two categories, read locks and write locks.
1.1 Table Lock
Read table lock syntax: lock table <table name> read. Characteristics: a read lock is a shared lock, meaning different sessions can still select from the table, but no session can update it. Table locks are what the MyISAM engine supports; there is no notion of transaction isolation level with them.
Write table lock syntax: lock table <table name> write. Characteristics: a write lock is an exclusive lock: once a write lock is set on a table, operations from other sessions are blocked. If one session sets a write lock on table A, the locking session can still read and write the table, while other sessions' reads and writes on A block until the lock is released.
Unlock syntax: Unlock tables.
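A table-lock session might look like this (the table name is hypothetical):

```sql
-- Session 1: shared read lock; any session may still SELECT,
-- but no session may update the table while it is held.
LOCK TABLE t_user READ;
SELECT * FROM t_user;
UNLOCK TABLES;

-- Session 1: exclusive write lock; other sessions' reads and
-- writes on t_user block until the lock is released.
LOCK TABLE t_user WRITE;
UPDATE t_user SET age = 21 WHERE id = 1;
UNLOCK TABLES;
```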
2. Lock granularity: table locks and row locks. The MyISAM engine uses table locks; the InnoDB engine supports row locks.
Table-level locks: low overhead, fast to acquire, no deadlocks; large lock granularity, the highest probability of lock conflicts, and the lowest concurrency.
Row-level locks: high overhead, slow to acquire, deadlocks can occur; the smallest lock granularity, the lowest probability of lock conflicts, and the highest concurrency.
Page locks: overhead and locking speed fall between table locks and row locks; deadlocks can occur; lock granularity is also between that of table locks and row locks, and concurrency is moderate.
2.1 Row Lock
To observe row locks, first disable InnoDB's auto-commit with set autocommit=0, then commit each update operation manually with commit.
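A minimal row-lock experiment, assuming an InnoDB table t_user with an indexed id column:

```sql
-- Session 1:
SET autocommit = 0;                        -- stop auto-committing
UPDATE t_user SET age = 21 WHERE id = 1;   -- InnoDB row-locks the id = 1 row
-- Session 2: an UPDATE on the SAME row now blocks, while updates
-- to other rows proceed; if the WHERE column were not indexed,
-- InnoDB would end up locking every row it scans.
COMMIT;                                    -- releases the row lock
```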
2.2 Row locks support transactions.
2.2.1 Transactions have four isolation levels, which were introduced to effectively guarantee the correctness of concurrently read data.
There are 4 isolation levels for database transactions. From low to high they are READ UNCOMMITTED (unauthorized read), READ COMMITTED (authorized read), REPEATABLE READ, and SERIALIZABLE. These four levels solve, one after another, the problems of dirty reads, non-repeatable reads, and phantom reads.
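The isolation level can be inspected and changed per session; note that the variable is named tx_isolation in MySQL 5.x and transaction_isolation in newer versions:

```sql
-- Show the current isolation level (MySQL 5.x variable name shown).
SELECT @@tx_isolation;

-- Change the level for the current session only.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```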
2.2.2 Problems caused by concurrent transactions (table locks do not have these transactional concurrency problems, because the whole table is locked each time, so no dirty or phantom reads can occur):
1. Lost update
Two transactions update the same row at the same time, and one transaction's update overwrites the other's. This happens when the system performs no locking at all, so the concurrent transactions are not isolated.
2. Dirty read
One transaction reads the result of a data modification that another transaction has not yet committed.
3. Non-repeatable read (Non-repeatable Reads): one transaction reads the same row twice but gets different results.
The following conditions are included:
(1) Virtual read: after transaction T1 reads a piece of data, transaction T2 modifies it; when T1 reads the same data again it gets a value different from the first read.
(2) Phantom read (Phantom Reads): within one transaction, two queries are run and the second query's result contains rows that did not appear in the first, or is missing rows that did (the two queries need not be the same SQL statement). This happens because another transaction inserted or deleted data between the two queries.
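Both anomalies can be sketched with two sessions running under READ COMMITTED (table and values are hypothetical):

```sql
-- Session 1                                -- Session 2
START TRANSACTION;
SELECT age FROM t_user WHERE id = 1;        -- reads 20
                                            -- UPDATE t_user SET age = 21 WHERE id = 1; COMMIT;
SELECT age FROM t_user WHERE id = 1;        -- reads 21: non-repeatable read
SELECT COUNT(*) FROM t_user;                -- e.g. 10 rows
                                            -- INSERT INTO t_user (id, age) VALUES (99, 30); COMMIT;
SELECT COUNT(*) FROM t_user;                -- 11 rows: phantom read
COMMIT;
```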
3. Viewing the database's lock status:
show open tables: shows which of the database's tables are currently locked (in use).
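Two related status queries (both are standard MySQL commands):

```sql
-- List only the tables currently in use / locked.
SHOW OPEN TABLES WHERE in_use > 0;

-- InnoDB row-lock wait statistics.
SHOW STATUS LIKE 'innodb_row_lock%';
```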
Third, the slow query log
1. Commonly, run explain <SQL> to analyze how a statement executes.
2. Use show profile to view a statement's execution life cycle (first set the session's profiling variable to 1):
show profiles;
show profile cpu, block io for query 1; -- views the CPU and block-I/O usage of the statement whose query ID equals 1.
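Putting the profiling steps together (the SELECT is a stand-in for any statement you want to profile; t_user is hypothetical):

```sql
SET profiling = 1;                        -- enable profiling for this session
SELECT COUNT(*) FROM t_user;              -- run the statement to analyze
SHOW PROFILES;                            -- list recent statements with their query IDs
SHOW PROFILE CPU, BLOCK IO FOR QUERY 1;   -- stage-by-stage CPU and block-I/O cost
```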
3. Turn on the slow query log:
1. Check whether the slow query log is enabled: show variables like '%slow_query_log%';
2. Enable the slow query log globally: set global slow_query_log=1;
3. Check the slow query threshold time: show variables like '%long_query_time%'; the default is 10 seconds.
4. Set the slow query threshold: set global long_query_time=3; this logs any statement taking longer than 3 seconds.
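The four steps above, as one runnable sequence (settings made with SET GLOBAL apply to new connections and are lost on server restart unless also written to my.cnf):

```sql
SHOW VARIABLES LIKE '%slow_query_log%';    -- enabled? where is the log file?
SET GLOBAL slow_query_log = 1;             -- turn the log on
SHOW VARIABLES LIKE '%long_query_time%';   -- threshold, default 10 seconds
SET GLOBAL long_query_time = 3;            -- log statements slower than 3 s

-- Verify: force a slow statement, then check the counter.
SELECT SLEEP(4);
SHOW GLOBAL STATUS LIKE 'Slow_queries';
```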
MySQL Advanced Learning Notes