Database related
1. InnoDB's Log
InnoDB has several kinds of logs, and two concepts must be clearly distinguished: logical logs and physical logs.
1.1 Logical logs
A log that records information about an operation is called a logical log.
For example, when inserting a row, the undo logical log takes the following form:
<Ti, Qj, delete, U>, where Ti is the transaction id, U is the undo information, and Qj is a unique identifier for the operation.
The undo log always works as follows (see the sketch after this list):
1). For an insert operation, a delete logical log is recorded.
2). For a delete operation, an insert logical log is recorded.
3). For an update operation, the reverse update is recorded, changing the row back to its pre-change state.
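As an illustrative sketch (assuming a hypothetical table t(id, val)), each statement below is paired, in a comment, with the compensating logical operation its undo record corresponds to:
INSERT INTO t (id, val) VALUES (1, 'a');  -- undo: DELETE FROM t WHERE id = 1
DELETE FROM t WHERE id = 1;               -- undo: INSERT INTO t (id, val) VALUES (1, 'a')
UPDATE t SET val = 'b' WHERE id = 1;      -- undo: UPDATE t SET val = 'a' WHERE id = 1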
1.2 Physical logs
A log that records the new and old values of the data is called a physical log, e.g. <Ti, Qj, V>.
The binlog (binary log) is a typical logical log, while the transaction log (redo log) is a physical log. What is the difference between them?
- The redo log is generated at the storage engine layer; the binlog is a logical log at the database (server) layer above it, and is produced regardless of the storage engine.
- The binlog records SQL statements; the redo log records the changes made to each page.
- The write timing differs: the binlog is written once when the transaction commits, while the redo log is written continuously throughout the transaction.
- Applying the redo log is idempotent (applying it many times equals applying it once, because it records new values, e.g. <T0, A, 950>); the binlog is not.
1.3 Log types
Error log: records error messages, along with some warnings and informational messages.
Query log: records all requests made to the database, whether or not they were executed correctly.
Slow query log: records every SQL statement whose run time exceeds a configured threshold.
Binary log: records all operations that change the database.
There are also relay logs, transaction logs, and so on.
1.4 Summary
1. The redo log (transaction log) guarantees the atomicity and durability of transactions (physical log).
2. The undo log guarantees transactional consistency, and InnoDB's MVCC (multi-version concurrency control) is also implemented with the undo log (logical log).
3. The redo log works together with checkpoints for efficient data recovery.
4. The physical log records the details of page modifications, while the logical log records action statements. Recovery from a physical log is faster than from a logical log.
2. How transactions are implemented
The role of a transaction: a transaction transforms a database from one consistent state into another consistent state.
The mechanism of a transaction is generally summarized by the "ACID" principle: atomicity (A), consistency (C), isolation (I), and durability (D).
- Atomicity: all operations that make up a transaction form one logical unit; either all of them execute or none of them do.
- Consistency: the database must be in a valid, stable state both before and after the transaction executes.
- Isolation: transactions do not affect each other.
- Durability: once a transaction commits successfully, its changes must be permanently written to disk.
2.1 The isolation of a transaction is implemented by the lock of the storage engine
Database transactions can cause problems such as dirty reads, non-repeatable reads, and phantom reads.
1) Dirty read: a transaction has not yet committed, but its modifications are already visible to other transactions.
2) Non-repeatable read: two identical SQL reads within the same transaction may return different results, because another transaction committed a change between the two reads.
3) Phantom read: a transaction suddenly finds rows it did not find before. Similar to a non-repeatable read, except that instead of modified rows it involves newly inserted rows.
InnoDB provides four different isolation levels to ensure data isolation.
Unlike MyISAM, which uses table-level locks, InnoDB uses finer-grained row-level locks to improve performance. InnoDB implements locking on indexes: if the query condition contains the primary key, the primary key is locked; if it uses a secondary index, that index entry is locked and then the corresponding primary key as well (which can cause deadlocks); if no index is usable, the entire table is locked.
4 Levels of Isolation:
1) READ UNCOMMITTED (uncommitted read)
Changes made in a transaction are visible to other transactions even before they are committed. This permits dirty reads (Dirty Read).
2) READ COMMITTED (committed read)
From its start, a transaction can only "see" changes made by transactions that have already committed. This level is sometimes called nonrepeatable read, because two identical reads may return different results.
3) REPEATABLE READ (repeatable read)
This level guarantees consistent results when the same records are read multiple times within the same transaction. In theory, however, this level still cannot solve the problem of phantom reads (Phantom Read).
Phantom read: when a transaction reads a range of records, another transaction inserts new records within that range; when the first transaction reads the range again, phantom rows (Phantom Row) appear.
The phantom read problem should in principle be solved by a higher isolation level, but MySQL differs from other databases: it solves the problem at the repeatable read isolation level.
MySQL's repeatable read isolation level thus addresses both "non-repeatable read" and "phantom read".
An Oracle database may need the SERIALIZABLE transaction isolation level to resolve phantom reads.
MySQL's default isolation level is also REPEATABLE READ.
4) SERIALIZABLE (serializable)
Forces transactions to execute serially, avoiding all three problems mentioned above: dirty reads, non-repeatable reads, and phantom reads.
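For reference, a minimal sketch of inspecting and switching the isolation level per session (the variable is named tx_isolation before MySQL 8.0 and transaction_isolation from 8.0 on):
SELECT @@tx_isolation;  -- shows e.g. REPEATABLE-READ, the default
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- current session only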
2.2 Implementation of atomicity and persistence
The redo log (also called the transaction log) is used to guarantee the atomicity and durability of transactions.
Redo restores the page modifications made by committed transactions; redo is a physical log, recording physical modifications to pages.
When a transaction commits, two things actually happen:
One: the InnoDB storage engine writes the transaction to the log buffer, which flushes it to the transaction log on disk.
Two: the InnoDB storage engine writes the transaction's changes to the buffer pool.
This raises a question: the transaction log is also a log written to disk, so why does it not need doublewrite?
Because the transaction log block size equals the disk sector size (both 512 bytes), a transaction log write is guaranteed to be atomic, and doublewrite is unnecessary.
The redo log buffer is made up of log blocks of 512 bytes each. A log block is divided into three parts: the log header (12 bytes), the log content (492 bytes), and the log tail (8 bytes).
2.3 Implementation of consistency
The undo log is used to ensure transactional consistency. Undo rolls rows back to a specific version; undo is a logical log, recorded per row.
Undo is stored in undo segments inside the database, and the undo segments live in the shared tablespace.
Undo only restores the database logically to its original appearance.
Besides rollback, undo also implements MVCC (multi-version concurrency control): when a row being read is locked by another transaction, the row is restored to a previous version via undo, achieving non-locking reads.
The MyISAM engine does not support transactions; the InnoDB and BDB engines do.
3. What is the index for?
Role: an index is an on-disk structure associated with a table or view that speeds up the retrieval of rows from the table or view. An index contains keys built from one or more columns of the table or view. The keys are stored in a structure (B-tree) that enables the database to find the rows associated with the key values quickly and efficiently.
Well-designed indexes can reduce disk I/O operations and consume less system resources, which can improve query performance.
In general, indexes should be created on columns such as these (a sketch follows the list):
Columns that are frequently searched, to speed up searches;
Primary key columns, to enforce uniqueness and to organize the arrangement of data in the table;
Columns frequently used in joins (mostly foreign keys), to speed up joins;
Columns that are often searched by range, because an index is sorted and the specified range is contiguous;
Columns that often need sorting, because the index is already sorted, so a query can use the index's ordering to reduce sorting time;
Columns frequently used in WHERE clauses, to speed up condition evaluation.
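A minimal sketch of these guidelines, using a hypothetical orders table:
CREATE TABLE orders (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- primary key: uniqueness + row organization
  customer_id INT UNSIGNED NOT NULL,                    -- used in joins (a foreign key in practice)
  created_at DATETIME NOT NULL,                         -- searched by range and sorted on
  KEY idx_customer (customer_id),
  KEY idx_created (created_at)
) ENGINE=InnoDB;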
Disadvantages of the index:
First, creating and maintaining indexes takes time, and this time increases with the amount of data.
Second, indexes occupy physical space: beyond the space taken by the data itself, each index takes a certain amount of space, and a clustered index takes even more.
Third, when data in the table is inserted, deleted, or updated, the indexes must be maintained dynamically, which slows down data maintenance.
4. Database Optimization Related
Temporary tables are created in the following situations (temporary tables cost performance):
1. If the GROUP BY columns have no index, an internal temporary table is generated.
2. If ORDER BY differs from GROUP BY, or if, in a multi-table query, GROUP BY contains columns that are not from the first table, a temporary table is generated.
3. Using DISTINCT together with ORDER BY may require a temporary table.
4. If SQL_SMALL_RESULT is used, MySQL uses an in-memory temporary table, unless the query contains elements that force the temporary table onto disk.
5. Temporary tables are used when UNION merges query results.
6. Some views use temporary tables, such as those defined with the TEMPTABLE algorithm, or views that use UNION or aggregation. To determine whether a query requires a temporary table, use EXPLAIN and check whether the Extra column contains "Using temporary", as shown below.
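For example, a grouped query against the hypothetical orders table above can be checked like this:
EXPLAIN SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id ORDER BY COUNT(*);
-- If Extra shows "Using temporary" (often together with "Using filesort"),
-- the server materialized an internal temporary table for this query.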
Table design: split the table structure; for example, give core fields fixed-length types such as INT, CHAR, and ENUM,
and split non-core fields, TEXT columns, and extra-long VARCHAR columns into a separate table.
Indexing: a reasonable index can reduce internal temporary tables.
Writing statements: unreasonable statements cause large amounts of data transfer and the use of internal temporary tables.
Table optimization and column type selection
Table optimization:
1: Separate fixed-length and variable-length fields.
For example, id INT takes 4 bytes; CHAR(4) holds 4 characters at a fixed length; TIME is fixed-length too.
That is, the number of bytes per value is fixed.
Core, frequently used fields should be fixed-length and placed in one table.
Variable-length fields such as VARCHAR, TEXT, and BLOB are better placed in a separate table, associated with the core table by primary key.
2: Separate frequently used fields from rarely used fields.
This requires analyzing the site's specific business and each field's query patterns; split fields with a low query frequency out into a separate table.
3: Add redundant fields reasonably.
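A minimal sketch of the fixed-length/variable-length split from point 1, with hypothetical article tables:
-- Core table: fixed-length, frequently used fields only
CREATE TABLE article (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  author_id INT UNSIGNED NOT NULL,
  created_at DATETIME NOT NULL
) ENGINE=InnoDB;
-- Variable-length, rarely used fields split out, associated by primary key
CREATE TABLE article_content (
  article_id INT UNSIGNED NOT NULL PRIMARY KEY,  -- same value as article.id
  body TEXT NOT NULL
) ENGINE=InnoDB;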
Column Selection principle:
1: Field type precedence: integer > date/time > enum/char > varchar > blob
Column characteristics analysis:
Integer: fixed length, with no country/region or character set differences.
Date/time: fixed length, fast to operate on, saves space; but consider time zones, and writing conditions such as `where ... > '2005-10-12'` in SQL is inconvenient.
Enum: constrains the set of values; stored internally as an integer, but comparisons with char go through a string-to-value conversion.
Char: fixed length; consider the character set and the collation (sorting) set.
Varchar: variable length; consider character set and collation conversion when sorting; slow. Compared with char it carries a length identifier, adding one more processing step.
Text/Blob: cannot use in-memory temporary tables.
Appendix: on choosing DATE/TIME types, see the detailed notes at http://www.xaprb.com/blog/2014/01/30/timestamps-in-mysql/
2: Use just enough; do not be generous (e.g., SMALLINT, VARCHAR(N)).
Reason: oversized fields waste memory and affect speed.
Take age as an example: TINYINT UNSIGNED NOT NULL can store up to 255 years old, which is enough; INT wastes 3 more bytes. Storing the same content in VARCHAR(300) takes the same space as in a shorter VARCHAR, but VARCHAR(300) costs more memory when the table is queried (e.g., during sorting).
3: Try to avoid NULL.
Reason: NULL is not index-friendly and must be marked with a special byte. Each row spends extra bytes on it, and the space occupied on disk is actually larger.
Notes on enum columns:
1: enum columns are stored internally as integers.
2: Joining an enum column with another enum column is fastest.
3: enum is weaker than (var)char in joins against a char column, because each comparison requires a conversion, which takes time.
4: The advantage: when the strings are long, the enum stays a fixed-length integer. The larger the amount of data queried, the more obvious enum's advantage.
5: Joining enum with char/varchar is slower than enum->enum or char->char because of the conversion, but it is sometimes still used: when the data volume is particularly large, it saves I/O.
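A minimal sketch (hypothetical table): the column accepts only the listed values and is stored internally as an integer:
CREATE TABLE user_account (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  gender ENUM('male', 'female', 'other') NOT NULL  -- stored as 1, 2, 3 internally
) ENGINE=InnoDB;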
SQL statement Optimization
1) Try to avoid using the != or <> operators in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan.
2) Try to avoid NULL tests on a field in the WHERE clause; otherwise the engine may abandon the index and perform a full table scan, for example:
SELECT id FROM t WHERE num IS NULL
You can set a default value of 0 on num, make sure the num column contains no NULL values, and query instead:
SELECT id FROM t WHERE num = 0
3) Using EXISTS instead of IN is often a good choice.
4) Replace the HAVING clause with a WHERE clause where possible, because HAVING filters the result set only after all records have been retrieved. Sketches of 3) and 4) follow.
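Minimal sketches of points 3) and 4), assuming hypothetical tables t and t2:
-- 3) EXISTS instead of IN for a membership test
SELECT id FROM t WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.tid = t.id);
-- 4) filter rows with WHERE before grouping ...
SELECT num, COUNT(*) FROM t WHERE num > 0 GROUP BY num;
-- ... instead of discarding them afterwards with HAVING:
-- SELECT num, COUNT(*) FROM t GROUP BY num HAVING num > 0;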
- The meaning of each item in EXPLAIN output:
Select_type
Represents the type of each SELECT clause in a query
Type
Indicates how MySQL finds the desired row in the table, also known as the "access type"
Possible_keys
Indicates which indexes MySQL could use to find rows in the table; if a field involved in the query has an index, that index is listed, but it is not necessarily used by the query.
Key
Displays the index that MySQL actually uses in the query, and displays NULL if no index is used
Key_len
Represents the number of bytes used in the index; it can be used to evaluate the length of the index used in the query.
Ref
Represents the join matching criteria for the table, i.e., which columns or constants are used to look up values on the index column.
Extra
Contains additional information that is not appropriate for display in other columns but is important
- The significance of profiling and its usage scenarios:
Profiling shows how long a SQL query takes and where the time goes: how much CPU/memory is used, how much time is spent in system lock, how long in table lock, and so on. A usage sketch follows.
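A minimal usage sketch (SHOW PROFILE is deprecated in recent MySQL versions but still illustrates the idea):
SET profiling = 1;
SELECT COUNT(*) FROM orders;             -- any query to be profiled
SHOW PROFILES;                           -- recent queries with their total duration
SHOW PROFILE CPU, BLOCK IO FOR QUERY 1;  -- per-stage CPU and block I/O breakdown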
Index optimization Strategy
- 1 Index Type
1.1 B-tree Index
Note: although they are all called "btree" indexes, broadly they all use balanced trees, but the concrete implementation differs by engine. Strictly speaking, the NDB engine uses T-trees, while MyISAM and InnoDB use B-tree indexes by default. Abstractly, a B-tree system can be understood as an "ordered fast-lookup structure".
1.2 Hash Index
In MEMORY tables, the default index type is hash; the theoretical time complexity of a hash lookup is O(1).
Question: since hash lookup is so efficient, why not always use hash indexes?
Answer:
1) The result of the hash function is random: if the data is placed on disk, say with an id primary key, then as id grows, the rows corresponding to successive ids are placed randomly on disk.
2) Range queries cannot be optimized.
3) The left-prefix index cannot be exploited. For example, with a btree index on a column whose value is "helloworld", the query xx='helloworld' can naturally use the index, and xx='hello' can use it too (left-prefix indexing); but hash('helloworld') and hash('hello') bear no relationship to each other, as the hashes are effectively random.
4) Sorting cannot be optimized either.
5) Lookups must always go back to the row: the index yields only the data's location, so the table must still be visited to fetch the data.
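A minimal sketch with a MEMORY table (hypothetical session cache), showing what a hash index can and cannot serve:
CREATE TABLE session_cache (
  sid CHAR(32) NOT NULL,
  data VARCHAR(255),
  PRIMARY KEY (sid) USING HASH
) ENGINE=MEMORY;
SELECT data FROM session_cache WHERE sid = 'abc';  -- equality: uses the hash index
SELECT data FROM session_cache WHERE sid > 'abc';  -- range: cannot use it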
- 2 common pitfalls of btree indexing
2.1 "Index the columns commonly used in the WHERE condition"
Example: where cat_id=3 and price>100; -- query goods in category 3 costing more than 100 yuan.
Mistake: building separate indexes on cat_id and on price.
Why it fails: only the cat_id index or the price index is used, because they are independent indexes and only 1 of them is used.
2.2 "After building an index on multiple columns, whichever column is queried, the index works"
Mistake: with a multi-column index, the index only works if the query satisfies the left-prefix requirement.
- Which columns of a multi-column index a query statement can use:
For ease of understanding, imagine planks A, B, and C, each 10 meters long, and a river 30 meters wide.
A full-value match makes a plank its full 10 meters;
like, left-prefix, and range queries make a plank only 6 meters long;
splice the planks together yourself, and whether you can cross the river tells you whether the index can be used.
In the example where a=3 and b>10 and c=7:
plank A is 10 meters long, so the index works for column a;
plank A connects normally to plank B, so the index works for column b;
plank B is short (a range), so it cannot reach plank C: the index does not work for column c. See the sketch below.
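The same example as SQL, assuming a hypothetical table t with columns a, b, c:
ALTER TABLE t ADD INDEX idx_abc (a, b, c);
-- a = 3 is a full-value match (10-meter plank), b > 10 is a range (6-meter plank);
-- the range on b ends the left-prefix chain, so the index is not used for c.
SELECT * FROM t WHERE a = 3 AND b > 10 AND c = 7;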
Examples of index layouts:
In InnoDB, row data is stored directly in the primary index file; this is called a clustered index, and secondary indexes store a reference to the primary key.
In MyISAM, both the primary and secondary indexes point to the physical row (its disk location).
Note, for InnoDB:
1: the primary key index stores both the index value and the row's data in its leaves;
2: if there is no primary key, a unique key is used;
3: if there is no unique key either, the system generates an internal rowid as the key;
4: an index structure like InnoDB's primary key, which stores both the key value and the row data, is called a "clustered index".
- Clustered index
Advantage: querying entries by primary key is fast, with no need to go back to the row (the data sits under the primary key node).
Disadvantage: irregular data insertion causes frequent page splits.
The primary key values of a clustered index should be inserted in increasing order rather than being random (no random strings or UUIDs), since random values cause a large number of page splits and page moves.
- High performance indexing strategy
For InnoDB, node splits are slower, because the data files live under the nodes.
For an InnoDB primary key, try to use an integer type, preferably an auto-increment integer.
Irregular (random) key data leads to page splits, which hurts speed.
- Covering indexes:
A covering index means that if the columns in the query are exactly part of an index, the query can be answered from the index file alone, with no need to go back to disk for the row data. Such queries are very fast; this is called "index covering".
- The ideal index
1: queried frequently. 2: high selectivity. 3: short length. 4: covers the common query fields as far as possible.
Note:
Index length directly affects the size of the index file, which affects the speed of inserts, updates, and deletes, and indirectly affects query speed (memory consumption). We can build the index on a left prefix of each column value:
1: the shorter the prefix, the more duplicates, the lower the selectivity, and the weaker the index effect;
2: the longer the prefix, the fewer duplicates, the higher the selectivity, and the better the index effect, but also the greater the impact: inserts, updates, and deletes slow down, indirectly affecting query speed.
So we have to strike a balance between selectivity and length.
Common practice: take prefixes of different lengths and test their selectivity:
SELECT COUNT(DISTINCT LEFT(word, 6)) / COUNT(*) FROM dict;
For typical system applications, a selectivity of 0.1 gives acceptable index performance; a prefix index can then be created as sketched below.
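Once a length with acceptable selectivity is found, the prefix index can be created like this:
ALTER TABLE dict ADD INDEX idx_word (word(6));  -- index only the first 6 characters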
For columns whose left prefix has poor selectivity, such as URL columns, there are indexing techniques:
http://www.baidu.com
http://www.zixue.it
The first 11 characters of every value are identical and hard to distinguish. Two ways to solve this:
1: store the column's contents reversed, and index that:
moc.udiab.www//:ptth
ti.euxiz.www//:ptth
This makes the left prefix highly selective.
2: a pseudo-hash index:
store a url_hash column alongside the URL, as sketched below.
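A minimal sketch of the pseudo-hash approach, assuming a hypothetical links table with a url column:
ALTER TABLE links
  ADD COLUMN url_hash INT UNSIGNED NOT NULL DEFAULT 0,
  ADD INDEX idx_url_hash (url_hash);
UPDATE links SET url_hash = CRC32(url);
-- Look up by the (highly selective) hash first, then compare the full URL to rule out collisions:
SELECT * FROM links
WHERE url_hash = CRC32('http://www.zixue.it') AND url = 'http://www.zixue.it';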
Multi-column indexes: the considerations for a multi-column index are the columns' query frequency and their selectivity.
- Indexing and sorting
Sorting can happen in 2 ways:
1: with a covering index, the query is answered directly from the index, which is already ordered: "using index";
2: otherwise the rows are fetched first into a temporary table, which is filesorted (a "file sort", though the file may be in memory rather than on disk).
Our goal: have the data come out already ordered, that is, use the index to sort.
- Duplicate and redundant indexes
Duplicate index: multiple indexes built on the same column (such as age) or on the same sequence of columns (age, school). Duplicate indexes do not help; they only enlarge the index file and slow down updates. Remove them.
Redundant index: two indexes whose covered columns overlap.
For example, with columns x and m: index x(x) and index xm(x, m).
Between the x and xm indexes, the x column overlaps; this is called a redundant index.
You can even build index mx(m, x); mx and xm are not duplicates, because the column order differs.
- Index fragmentation and maintenance
During long-term data changes, index files and data files develop holes and become fragmented.
We can rebuild the table through a no-op operation (one that does not actually affect the data).
For example, if the table's engine is InnoDB: ALTER TABLE xxx ENGINE InnoDB.
OPTIMIZE TABLE tablename also repairs it.
Note: repairing a table's data and index fragmentation reorganizes and realigns all of its data files.
If the table has many rows, this is a very resource-intensive operation, so it cannot be done frequently.
If the table is updated frequently, repair it weekly or monthly;
if not, repair it at longer intervals. A sketch of both commands follows.
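A minimal sketch of both repair commands (the table name is a placeholder):
ALTER TABLE xxx ENGINE = InnoDB;  -- no-op engine change: rebuilds the table and its indexes
OPTIMIZE TABLE xxx;               -- same effect; on InnoDB it maps to a rebuild plus analyze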
Database related questions
1. The difference between DROP, DELETE, and TRUNCATE
DROP drops the table directly; TRUNCATE deletes the data in the table, and subsequently inserted auto-increment ids restart from 1; DELETE deletes data from the table and can take a WHERE clause.
(1) The DELETE statement performs deletion by removing one row from the table at a time, saving each row's delete as a transaction record in the log for rollback. TRUNCATE TABLE deletes all data from the table at once, does not record individual delete entries in the log, and the deleted rows cannot be recovered. DELETE triggers related to the table are not activated during a truncate, and execution is fast.
(2) The space occupied by the table and index. When the table is truncate, the space occupied by the table and index is restored to its original size, and the delete operation does not reduce the space occupied by the table or index. The drop statement frees all the space occupied by the table.
(3) Generally, drop > truncate > Delete
(4) Scope of application. TRUNCATE can only apply to a table; DELETE applies to tables and views.
(5) TRUNCATE and delete delete data only, and drop deletes the entire table (structure and data).
(6) TRUNCATE and DELETE without WHERE only delete data, not the table's structure (definition). A DROP statement deletes the structure together with the constraints (constraint), triggers (trigger), and indexes (index) that depend on the table; stored procedures/functions that depend on the table are preserved, but their state changes to invalid.
(7) The DELETE statement is DML (Data Manipulation Language); the operation is placed in the rollback segment and takes effect only when the transaction commits. If there is a corresponding trigger, it fires on execution.
(8) TRUNCATE and DROP are DDL (Data Definition Language); the operation takes effect immediately, the original data is not placed in the rollback segment, and it cannot be rolled back.
(9) Without a backup, use DROP and TRUNCATE sparingly. To delete some of the rows, use DELETE with a WHERE clause to constrain the scope of impact; the rollback segment must be large enough. To delete a table, use DROP. To keep the table but delete its data, use TRUNCATE if no transaction is involved; if a transaction is involved, or if you want triggers to fire, use DELETE.
(10) TRUNCATE TABLE is fast and efficient because:
TRUNCATE TABLE is functionally identical to a DELETE statement without a WHERE clause: both delete all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources. DELETE removes one row at a time and records an entry in the transaction log for each deleted row; TRUNCATE TABLE deletes the data by deallocating the data pages used to store it, and records only the page deallocations in the transaction log.
(11) TRUNCATE TABLE deletes all rows in the table, but the table structure and its columns, constraints, indexes, and so on remain unchanged. The counter used for new row identities is reset to the column's seed. To preserve the identity counter, use DELETE instead. To delete the table definition together with its data, use the DROP TABLE statement.
(12) For a table referenced by a FOREIGN KEY constraint, you cannot use TRUNCATE TABLE; use a DELETE statement without a WHERE clause instead. Because TRUNCATE TABLE is not logged row by row, it cannot activate triggers. A side-by-side sketch follows.
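Side by side, assuming a hypothetical table t:
DELETE FROM t WHERE id > 100;  -- DML: row by row, logged per row, can roll back, fires triggers
TRUNCATE TABLE t;              -- DDL: deallocates data pages, resets AUTO_INCREMENT, keeps structure
DROP TABLE t;                  -- DDL: removes the data and the table definition itself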
2. Database normal forms
1 First normal form (1NF)
In any relational database, the first normal form (1NF) is the basic requirement for relational schemas; a database that does not meet 1NF is not a relational database.
The first normal form (1NF) means that each column of a database table is an indivisible basic data item: the same column cannot hold multiple values, i.e., an attribute of an entity cannot have multiple values or repeated attributes. If repeated attributes appear, a new entity may need to be defined, composed of those repeated attributes, with a one-to-many relationship between the new entity and the original one. In 1NF, each row of a table contains only one instance's information. In short, the first normal form means no repeating columns.
2 Second normal form (2NF)
The second normal form (2NF) is built on the first (1NF): to satisfy 2NF, a schema must first satisfy 1NF. 2NF requires that every instance or row in a database table be uniquely distinguishable. It is usually necessary to add a column to the table to store the unique identity of each instance; this unique-identity column is called the primary key (also the primary code). 2NF further requires that the non-key attributes of an entity depend fully on the primary key. Full dependence means there cannot be an attribute that depends on only part of the primary key; if there is, that attribute and that part of the primary key should be separated out to form a new entity, with a one-to-many relationship between the new entity and the original one. In short, the second normal form means non-key attributes depend fully on the primary key.
3 Third normal form (3NF)
Satisfying the third normal form (3NF) requires first satisfying the second (2NF). In short, 3NF requires that a database table not contain non-primary-key information already contained in other tables. For example, there is a department table in which each department has a department id (dept_id), a department name, a department profile, and so on. Once the department id is listed in the employee table, the department name, profile, and other department-related information must not be added to the employee table again. If the department table does not exist, it should be built according to 3NF; otherwise there will be a lot of data redundancy. In short, the third normal form means attributes do not depend on other non-key attributes. (My understanding: eliminate redundancy.) A sketch follows.
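A minimal sketch of the department example above:
CREATE TABLE department (
  dept_id INT UNSIGNED NOT NULL PRIMARY KEY,
  dept_name VARCHAR(50) NOT NULL,
  dept_profile VARCHAR(200)
);
CREATE TABLE employee (
  emp_id INT UNSIGNED NOT NULL PRIMARY KEY,
  emp_name VARCHAR(50) NOT NULL,
  dept_id INT UNSIGNED NOT NULL  -- reference only; dept_name is not duplicated here
);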
3. MySQL replication: principle and process
The basic flow involves 3 threads and the associations between them:
1. Master: binlog thread — records all statements that change the database's data into the binlog on the master;
2. Slave: I/O thread — after START SLAVE, is responsible for pulling binlog content from the master and writing it into its own relay log;
3. Slave: SQL thread — executes the statements in the relay log.
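A minimal setup sketch on the replica (MySQL 5.x syntax; host, credentials, and binlog coordinates are placeholders):
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;         -- starts the IO thread and the SQL thread
SHOW SLAVE STATUS\G  -- check Slave_IO_Running / Slave_SQL_Running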
4. The differences between MyISAM and InnoDB in MySQL, at least 5 points
1> InnoDB supports transactions; MyISAM does not.
2> InnoDB supports row-level locks; MyISAM only supports table-level locks.
3> InnoDB supports MVCC; MyISAM does not.
4> InnoDB supports foreign keys; MyISAM does not.
5> InnoDB does not support full-text indexing; MyISAM does.
5. Four features of the InnoDB engine
Insert buffer, doublewrite, adaptive hash index (AHI), and read ahead.
6. Which of MyISAM and InnoDB runs SELECT COUNT(*) faster, and why?
MyISAM is faster, because MyISAM internally maintains a row counter that can simply be read back (this applies to COUNT(*) without a WHERE clause).
7. The difference between VARCHAR and CHAR in MySQL, and the meaning of the 50 in VARCHAR(50)
(1) The difference between VARCHAR and CHAR:
CHAR is a fixed-length type; VARCHAR is a variable-length type.
(2) The meaning of the 50 in VARCHAR(50):
it holds at most 50 characters. VARCHAR(50) and VARCHAR(200) take the same space to store "hello", but the latter consumes more memory when sorting, because ORDER BY col uses fixed_length to calculate the column length (as the MEMORY engine does).
(3) The 20 in INT(20) means the display width.
It is only a display parameter, with a maximum of 255. For example, for an id column with the parameter that pads values to the display width with leading zeros (ZEROFILL), inserting 10 rows displays 00000000001 through 00000000010; once a value has more digits than the display width, it is shown in full, and without ZEROFILL no zeros are added. INT(20) means a maximum display width of 20; the column still occupies 4 bytes of storage, and the storable range is unchanged. A demonstration follows.
(4) Why did MySQL design it this way?
For most applications it is meaningless; it only lets a few tools display the declared number of digits. INT(1) and INT(20) are stored and computed identically.
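A minimal demonstration of display width (ZEROFILL implies UNSIGNED; storage is 4 bytes either way):
CREATE TABLE t_width (a INT(11) ZEROFILL, b INT(20) ZEROFILL);
INSERT INTO t_width VALUES (10, 10);
SELECT * FROM t_width;  -- a: 00000000010    b: 00000000000000000010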
8. Open question:
Given a table A with 600 million rows and a table B with 300 million rows, related through a tid column, how do you most quickly query the 200 data records numbered 50,000 to 50,200 that meet the conditions?
1. If table A's tid is auto-increment and contiguous, and table B's id is indexed:
SELECT * FROM a, b WHERE a.tid = b.id AND a.tid > 50000 LIMIT 200;
2. If table A's tid is not contiguous, a covering index is needed: tid must be either the primary key or a secondary index, and table B's id must also be indexed. Then:
SELECT * FROM b, (SELECT tid FROM a LIMIT 50000, 200) a WHERE b.id = a.tid;
9. The difference between the MySQL database engines MyISAM and InnoDB (see question 4 above)
10. How many kinds of TRIGGERS are allowed in a MySQL table?
Six kinds of triggers are allowed in a MySQL table, as follows:
· BEFORE INSERT
· AFTER INSERT
· BEFORE UPDATE
· AFTER UPDATE
· BEFORE DELETE
· AFTER DELETE
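A minimal sketch of one of them, assuming a hypothetical table t with a created_at column:
CREATE TRIGGER trg_before_insert BEFORE INSERT ON t
FOR EACH ROW SET NEW.created_at = NOW();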
MySQL database related notes