MySQL Study Notes

Slow SQL: a query whose execution time exceeds a specified threshold is called a slow query.
In MySQL, how does one record slow SQL statements?
A: You can set the following options in my.cnf:

[mysqld]
# enable the slow query log (long_query_time defaults to 10 seconds)
log-slow-queries
# log queries taking longer than 5 seconds
long_query_time = 5
# also log queries that don't use indexes, even if they finish within long_query_time (MySQL 4.1 and newer only)
log-queries-not-using-indexes

Together, these settings log every query that runs for more than 5 seconds, as well as queries that do not use an index.
Log classification in MySQL:
1. error log: records MySQL errors.
2. bin log: records, in binary format, the queries that modify data.
3. mysql-bin.index: records the absolute paths of all binary log files, so that the various MySQL threads can find every binary log file they need.
4. slow query log: records slow SQL statements in a plain text format that can be viewed in any text editor. It records when a statement was executed, how long it took, and which user executed it.
5. innodb redo log: records all physical changes and transaction information made by InnoDB, to guarantee transaction safety.
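A quick way to see which of these logs are enabled on a running server (a minimal sketch; variable names vary slightly across versions, and SHOW BINARY LOGS errors out if binary logging is disabled):

mysql> show variables like '%slow%';
mysql> show variables like 'log_bin';
mysql> show binary logs;
-- the last command lists the files tracked by mysql-bin.index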

The MySQL architecture can be divided into the SQL layer and the storage engine layer.
The SQL Layer contains multiple submodules:
1. initialization module
When MySQL Server starts, the initialization module performs the various initialization operations for the entire system, such as initializing buffer and cache structures, allocating memory, and initializing the system variables and the storage engines.
2. Core APIs
The core API module provides highly optimized implementations of low-level operations that must be efficient, including various low-level data structures, special algorithms, string and number handling, small-file I/O, formatted output, and, most importantly, memory management. All source code of the core API module is concentrated under the mysys and strings folders; interested readers can study it.
3. Network Interaction module
The underlying network interaction module abstracts the APIs for receiving and sending raw network data, so that other modules can use the network without dealing with its details. All source code is under the vio folder.
4. Client and Server interaction protocol module
Any software system with a C/S structure will certainly have its own unique information interaction protocol, and MySQL is no exception. The Client and Server interaction protocol module of MySQL implements all protocols during the interaction between the Client and MySQL. Of course, these protocols are built on existing OS and network protocols, such as TCP/IP and Unix Socket.
5. User Module
The functions implemented by the user module mainly include user login/connection permission control and user authorization management. It acts as MySQL's doorman, deciding whether to "open the door" for each visitor.
6. Access Control Module
Once a visitor is through the door, what may he do? For security's sake, not just anything: the access control module monitors every action in real time and grants different permissions to different visitors. It controls user access to data based on the authorization information kept by the user module and on constraints specific to the database itself. Together, the user module and the access control module form the permission and security management of the entire MySQL database system.
7. Connection and thread management module
The connection management module listens for requests to the MySQL Server, receives connection requests, and forwards them to the thread management module. Each client connection to the MySQL Server is allocated (or assigned a cached) connection thread to serve it exclusively. The connection thread's main task is to handle communication between the MySQL Server and the client: it accepts the client's command requests and transmits the server's results back. The thread management module is responsible for managing and maintaining these connection threads, including creating them and caching them.
8. Query parsing and forwarding Module
In MySQL, we habitually call every command sent from the client to the server a query. In MySQL Server, after the connection thread receives a query from the client, it passes the query to the module that classifies queries and forwards them to the corresponding processing modules: the query parsing and forwarding module. Its main task is to analyze the syntax and semantics of the query, classify it by operation type, and then forward it accordingly.
9. Query Cache Module
The Query Cache module is a very important module in MySQL. Its main function is to cache, in memory, the result set of each SELECT query submitted to MySQL, keyed by a hash of the query. Whenever data changes in any base table the query reads from, MySQL automatically invalidates the cached result. In application systems with a very high read/write ratio, the Query Cache improves performance significantly; of course, it also consumes a lot of memory.
10. Query Optimizer Module
The query optimizer, as its name implies, optimizes the queries that clients request. Based on the query statement and statistical information in the database, it analyzes the query using a series of algorithms, arrives at an optimal execution plan, and tells the subsequent stages how to retrieve the query's results.
11. Table Change Management Module
The table change management module is mainly responsible for processing DML and DDL queries, such as update, delete, insert, create table, and alter table statements.
12. Table maintenance module
Table status checks, error repair, and optimization and analysis are all handled by the table maintenance module.
13. system status management module
The system status management module is responsible for returning the various status data to users when a client requests the system status. The show status and show variables commands commonly used by DBAs get their results from this module.
14. Table Manager
This module is easily confused with the table change and table maintenance modules above, but its function is completely different. As you know, every MySQL table has a table definition file, the *.frm file. The table manager's job is to maintain these files, along with a cache whose main content is the structure information of each table. It also maintains table-level lock management.
15. Logging Module
The logging module is mainly responsible for logging at the system logic level, including the error log, the binary log, and the slow query log.
16. Replication module
The replication module can be divided into a Master part and a Slave part. The Master part is mainly responsible for reading the Master's binary log in a replication environment and interacting with the Slave's I/O thread. The Slave part does a little more work than the Master part, which shows in its two threads: the I/O thread requests and receives binary logs from the Master and writes them to the local relay log, while the SQL thread reads log events from the relay log, parses them into statements that can be executed on the Slave side to produce the same result as on the Master side, and hands them to the Slave for execution.
17. Storage engine interface module
The storage engine interface module is the most distinctive feature of the MySQL database. Among the various database products, basically only MySQL implements plug-in management of its underlying data storage engines. This module is actually just an abstract class, but it is precisely its successful abstraction of all kinds of data processing that makes MySQL's pluggable storage engines unique today.

Monitoring Method for MySQL performance optimization:

1. set profiling = 1: enables performance monitoring. This command is unavailable in some MySQL versions.
2. Then execute the SQL.
3. show profiles: view the execution time of each SQL statement run by the session.
4. show profile cpu, block io for query N (N is the Query_ID from the show profiles output).
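Put together, a minimal profiling session looks like this (a sketch; the table name is hypothetical):

mysql> set profiling = 1;
mysql> select count(*) from test.t1;
mysql> show profiles;
-- lists Query_ID, Duration, and the Query text
mysql> show profile cpu, block io for query 1;
-- detailed breakdown for Query_ID 1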

MySQL storage engines use three types of locking mechanisms: Row-level locking, page-level locking, and table-level locking.
In MySQL databases, table-level locking is mainly used by some non-transactional storage engines such as MyISAM, Memory, and CSV, while row-level locking is mainly used by Innodb Storage engine and NDB Cluster Storage engine, page-level locking mainly refers to the locking method of the BerkeleyDB storage engine.
 
The priority between read lock requests and pending write lock requests in MyISAM is determined by the following rules:
1. Apart from READ_HIGH_PRIORITY read locks, a write lock in the pending write-lock queue blocks all other read locks;
2. A READ_HIGH_PRIORITY read lock request can block all write locks in the pending write-lock queue;
3. Apart from WRITE locks, any write lock in the pending write-lock queue has a lower priority than read locks.

Once a MyISAM write lock has entered the current write-lock queue, it blocks all other lock requests except in the following situations:
1. Some storage engines allow a WRITE_CONCURRENT_INSERT write lock request.
2. When the write lock is set to WRITE_ALLOW_WRITE, all read and write lock requests except WRITE_ONLY are allowed.
3. When the write lock is set to WRITE_ALLOW_READ, all read lock requests except READ_NO_INSERT are allowed.
4. When the write lock is set to WRITE_DELAYED, all read lock requests except READ_NO_INSERT are allowed.
5. When the write lock is set to WRITE_CONCURRENT_INSERT, all read lock requests except READ_NO_INSERT are allowed.
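As a side note, the concurrent inserts that WRITE_CONCURRENT_INSERT permits are governed by a server variable, which can be inspected as follows (a hedged sketch):

mysql> show variables like 'concurrent_insert';
-- 0 = disabled, 1 = allowed only when the table has no holes (the default), 2 = always allowed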
 
Considerations for Row-level locking of Innodb:
A) Try to have all data retrieval go through indexes, so that InnoDB does not fall back to table-level locking when it cannot lock through an index key (see the sketch after this list);
B) Design indexes reasonably, so that InnoDB locks index keys as precisely as possible and the lock scope stays minimal, avoiding unnecessary locks that affect the execution of other queries;
C) Minimize range-based search and filter conditions, to avoid locking records that should not be locked as a side effect of gap locks;
D) Keep transactions as small as possible, to reduce the number of resources locked and the time they stay locked;
E) When the business environment permits, use a lower transaction isolation level, to reduce the extra cost MySQL incurs for enforcing isolation.
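A minimal sketch of point A), using a hypothetical table t: without an index on status, the UPDATE must scan the table and ends up locking far more rows than it changes; with the index, only the matching index entries are locked.

create table t (
  id int primary key,
  status varchar(16),
  key idx_status (status)  -- drop this index and the update below locks much more
) engine=innodb;
update t set status = 'done' where status = 'pending';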
How to view table-level lock information in MyISAM:
A: show status like 'table_locks%';
Table_locks_immediate: the number of times a table lock was acquired immediately.
Table_locks_waited: the number of times a table lock could not be acquired immediately and a wait occurred; a high value relative to Table_locks_immediate indicates serious table-level lock contention.
 
How to view row-level locking information in InnoDB:
A: show status like 'Innodb_row_lock%';
InnoDB's row-level lock status variables record not only the number of lock waits, but also the total lock wait time, the average and maximum wait time, and a non-cumulative count of waits currently pending. The status values are described as follows:
● Innodb_row_lock_current_waits: the number of lock waits currently pending;
● Innodb_row_lock_time: the total time spent waiting for row locks since system startup;
● Innodb_row_lock_time_avg: the average time spent per wait;
● Innodb_row_lock_time_max: the longest single wait since system startup;
● Innodb_row_lock_waits: the total number of waits since system startup.
 
Mysqlslap is a stress testing tool officially provided by MySQL. The following are its important parameters:
--defaults-file: configuration file location
--concurrency: number of concurrent clients
--engine: storage engine to test
--iterations: number of test iterations
--socket: socket file location
Automatic test:
--auto-generate-sql: automatically generate the test SQL
--auto-generate-sql-load-type: type of test SQL; types include mixed, update, write, key, and read
--number-of-queries: total number of SQL statements to execute
--number-int-cols: number of int columns in the test table
--number-char-cols: number of char columns in the test table
For example:
shell> mysqlslap --defaults-file=/u01/mysql1/mysql/my.cnf --concurrency=50,100 --iterations=1 --number-int-cols=4 --auto-generate-sql --auto-generate-sql-load-type=write --engine=myisam --number-of-queries=200 -S /tmp/mysql1.sock
Benchmark
Running for engine myisam
Average number of seconds to run all queries: 0.016 seconds
Minimum number of seconds to run all queries: 0.016 seconds
Maximum number of seconds to run all queries: 0.016 seconds
Number of clients running queries: 50
Average number of queries per client: 4
Benchmark
Running for engine myisam
Average number of seconds to run all queries: 0.265 seconds
Minimum number of seconds to run all queries: 0.265 seconds
Maximum number of seconds to run all queries: 0.265 seconds
Number of clients running queries: 100
Average number of queries per client: 2
Test against a specified database:
--create-schema: specifies the database name
--query: specifies a SQL statement directly, or a file containing SQL statements
For example:
shell> mysqlslap --defaults-file=/u01/mysql1/mysql/my.cnf --concurrency=25,50 --iterations=1 --create-schema=test --query=/u01/test.sql -S /tmp/mysql1.sock
Benchmark
Average number of seconds to run all queries: 0.018 seconds
Minimum number of seconds to run all queries: 0.018 seconds
Maximum number of seconds to run all queries: 0.018 seconds
Number of clients running queries: 25
Average number of queries per client: 1
Benchmark
Average number of seconds to run all queries: 0.011 seconds
Minimum number of seconds to run all queries: 0.011 seconds
Maximum number of seconds to run all queries: 0.011 seconds
Number of clients running queries: 50
Average number of queries per client: 1
 
Restrictions on index usage in MySQL:
1. The total length of an index key in the MyISAM storage engine cannot exceed 1000 bytes;
2. BLOB and TEXT columns can only have prefix indexes;
3. Currently, MySQL does not support function-based indexes;
4. When a not-equal comparison (!= or <>) is used, MySQL cannot use the index;
5. MySQL cannot use an index on a filter column that is wrapped in a function, such as abs(column) (see the sketch after this list);
6. MySQL cannot use the index when the join condition columns in a JOIN statement have different types;
7. With LIKE, if the pattern starts with a wildcard ('%abc...'), MySQL cannot use the index;
8. For non-equality comparisons, MySQL cannot use hash indexes.
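A minimal sketch of point 5, assuming a hypothetical orders table with an index on created_at: the function-wrapped form defeats the index, while the equivalent range form can use it.

select * from orders where date(created_at) = '2024-01-01';
-- cannot use the index on created_at
select * from orders where created_at >= '2024-01-01' and created_at < '2024-01-02';
-- the range form can use the index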
Currently, MySQL can use two algorithms to sort data:
1. For the rows that satisfy the filter conditions, retrieve only the sort columns plus a row pointer that locates the row data, and perform the actual sort in the sort buffer; then use the sorted row pointers to go back to the table for the other columns the client requested, and return them to the client;
2. For the rows that satisfy the filter conditions, retrieve in a single pass the sort columns and all other columns the client requested, storing the non-sort columns in a memory area; then sort the sort columns and row pointers in the sort buffer, merge the sorted row pointers with the columns held in the memory area, and return the result to the client in sorted order.
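Which of the two algorithms the server picks depends on the width of the needed columns relative to a system variable; a hedged sketch (the value is illustrative):

mysql> show variables like 'max_length_for_sort_data';
mysql> set session max_length_for_sort_data = 4096;
-- if the needed columns fit within this limit, the single-pass algorithm (2)
-- is used; otherwise MySQL falls back to the two-pass, row-pointer algorithm (1)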
 
Description of various information displayed in the MySQL Explain function:
◆ ID: the sequence number of the select within the execution plan chosen by the Query Optimizer;
◆ Select_type: the query type used. The following query types are used:
◇ Dependent subquery: the first SELECT of the inner layer of a subquery, dependent on the result set of the outer query;
◇ Dependent union: a UNION inside a subquery; all the SELECTs from the second one onward in the UNION statement also depend on the result set of the outer query;
◇ PRIMARY: the outermost query of a statement containing subqueries. Note that this does not mean a primary key lookup;
◇ SIMPLE: any query that contains no subquery or UNION;
◇ SUBQUERY: The first SELECT statement in the SUBQUERY's inner layer. The result does not depend on the external query result set;
◇ Uncacheable subquery: subqueries that cannot be cached in the result set;
◇ UNION: in a UNION statement, all the SELECTs from the second one onward; the first SELECT is PRIMARY;
◇ Union result: the merged result set of a UNION;
◆ Table: displays the name of the Table in the Database accessed in this step;
◆ Type: indicates the access method used for the table, which mainly includes the following types;
◇ All: full table Scan
◇ Const: Read a constant, and only one record can be matched at most. Because it is a constant, you only need to read it once;
◇ Eq_ref: a maximum of one matching result can be found. It is generally accessed through a primary key or a unique key index;
◇ Fulltext: the table is accessed through a FULLTEXT index;
◇ Index: full index Scan;
◇ Index_merge: two (or more) indexes are used in the query, their results are merged, and then the table data is read;
◇ Index_subquery: the combination of columns returned in a subquery is an index (or index combination), but not a primary key or a unique index;
◇ Range: index range scan;
◇ Ref: an index reference lookup against the driven table in a join;
◇ Ref_or_null: the only difference from ref is that a lookup for NULL values is added on top of the index reference lookup;
◇ System: A system table with only one row of data;
◇ Unique_subquery: the combination of returned fields in a subquery is a primary key or a unique constraint;
◆ Possible_keys: The index available for the query. If no index is available, it is displayed as null. This item is very important for index adjustment during optimization;
◆ Key: The index selected by MySQL Query Optimizer from possible_keys;
◆ Key_len: length of the index key selected for use;
◆ Ref: lists the constant (const) or the column of another table (in a join) that is compared against the key to filter rows;
◆ Rows: Number of result set records estimated by MySQL Query Optimizer based on the statistical information collected by the system;
◆ Extra: query the Extra details of each step, which may include the following:
◇ Distinct: searching for distinct values; once MySQL finds the first matching row, it stops and moves on to look for the next value;
◇ Full scan on NULL key: An Optimization Method in subqueries, mainly used when null values cannot be accessed through indexes;
◇ Impossible WHERE noticed after reading const tables: MySQL Query Optimizer identifies Impossible results by collecting statistics;
◇ No tables: the Query statement uses from dual or does not contain any FROM clause;
◇ Not exists: in some LEFT JOINs, MySQL Query Optimizer can change the composition of the original query to reduce the number of data accesses;
◇ Range checked for each record (index map: N): According to the description in the MySQL official manual, when MySQL Query Optimizer does not find any available indexes, if you find that the column values from the preceding table are known, some indexes may be available. For each row combination in the preceding table, MySQL checks whether the range or index_merge access method can be used to obtain rows.
◇ Select tables optimized away: when an aggregate function such as MIN() or MAX() accesses an indexed column, the Query Optimizer locates the needed rows directly through the index and completes the whole query at optimization time, provided the query contains no GROUP BY;
◇ Using filesort: when our Query contains the order by operation and the index cannot be used to complete the sorting operation, MySQL Query Optimizer has to select the corresponding sorting algorithm.
◇ Using index: you only need to obtain all the required data in the Index instead of the data in the table;
◇ Using index for group-by: like Using index, only the needed data is read from the index. When a query contains a GROUP BY or DISTINCT clause and the grouped columns are also indexed, the Extra information shows Using index for group-by;
◇ Using temporary: when MySQL must use a temporary table in some operations, Using temporary will appear in Extra information. It is common in group by and order by operations.
◇ Using where: appears when we are not reading all the rows of the table, or when the required data cannot be obtained through the index alone;
◇ Using where with pushed condition: a message that appears only with the NDB Cluster storage engine, and only when the Condition Pushdown optimization is enabled; the controlling parameter is engine_condition_pushdown.
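To tie the columns above together, here is a minimal hedged example with two hypothetical tables (orders with an index on created_at, customers keyed by id):

mysql> explain select o.id, c.name
    ->   from orders o join customers c on c.id = o.customer_id
    ->  where o.created_at >= '2024-01-01';
-- expect type=range on orders (via the created_at index) and type=eq_ref on
-- customers (primary key lookup), with possible_keys, key, and rows as described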
 
What is a loose index?
A: When MySQL uses an index scan to implement GROUP BY, it does not need to scan every index key that satisfies the conditions in order to complete the operation.
To use a loose index scan to implement group by, you must meet at least the following conditions:
◆ The GROUP BY columns must be a leading, consecutive prefix of the same index;
◆ The only aggregate functions that can be used alongside the GROUP BY are MAX and MIN;
◆ Any column of the index referenced beyond the GROUP BY columns must appear as a constant (see the sketch after this list).
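A minimal sketch with a hypothetical table t1 and a composite index on (a, b); the query satisfies all three conditions, so EXPLAIN should report the loose scan:

create table t1 (a int, b int, c int, key idx_ab (a, b)) engine=innodb;
explain select a, min(b), max(b) from t1 group by a;
-- the Extra column shows "Using index for group-by" when the loose scan applies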

Why is loose index scanning very efficient?
A: When there is no WHERE clause, the number of keys a loose index scan must read equals the number of groups, which is usually far smaller than the total number of key values. When the WHERE clause contains a range condition or an equality expression, the loose index scan reads the first key of each group that satisfies the condition, again reading as few keys as possible.
 
What is compact index?
A: The difference between a compact index scan and a loose index scan is that the compact scan must read all the matching index keys, and then perform the GROUP BY on the data it has read to obtain the result.

There are two Optimization Methods for MySQL to process group:
1. Try to let MySQL use indexes to perform the GROUP BY operation, ideally with a loose index scan. If the system permits, adjust the index or the query to achieve this;

2. When an index cannot be used for the GROUP BY, make sure sort_buffer_size is large enough, since MySQL will need a temporary table and a filesort. Avoid GROUP BY on large result sets whenever possible: if the temporary table exceeds the size limit set by the system, its data is copied to disk before the operation, and the performance of the sort/group operation drops by an order of magnitude (see the sketch below).
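A hedged session-level sketch of point 2 (the values are illustrative, not recommendations):

mysql> set session sort_buffer_size = 4 * 1024 * 1024;
mysql> set session tmp_table_size = 64 * 1024 * 1024;
-- max_heap_table_size also caps in-memory temporary tables, so it may need raising too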
 
DISTINCT works on a similar principle to GROUP BY and can also use loose index scans.
 
Note on MySQL Schema Design Optimization:
1. Moderate Redundancy
2. Vertical Split of large fields
3. Horizontal Split of large tables
 
Time field types: timestamp occupies 4 bytes, datetime occupies 8 bytes, and date occupies 3 bytes. However, timestamp can only represent times from 1970 onward, while datetime and date can represent dates from the year 1000 onward.
 
MySQL binlog log optimization solution:

Binlog parameters and Optimization Strategies
First, let's look at the binlog parameters. You can obtain them by executing the following command; the innodb_locks_unsafe_for_binlog parameter unique to the InnoDB storage engine is also displayed:
mysql> show variables like '%binlog%';
+--------------------------------+------------+
| Variable_name                  | Value      |
+--------------------------------+------------+
| binlog_cache_size              | 1048576    |
| innodb_locks_unsafe_for_binlog | OFF        |
| max_binlog_cache_size          | 4294967295 |
| max_binlog_size                | 1073741824 |
| sync_binlog                    | 0          |
+--------------------------------+------------+
"Binlog_cache_size": the cache size of the binary log SQL statement during the transaction process. The binary log cache is the memory allocated to each client on the premise that the server supports the transaction storage engine and the server enables the binary log (-log-bin option). Note, yes. Each Client can allocate binlog cache space of the set size. If the reader's friend's system often shows the trend of Multi-statement transactions, you can try to increase the value to achieve better performance. Of course, we can use the following two state variables of MySQL to determine the current status of binlog_cache_size: Binlog_cache_use and Binlog_cache_disk_use. "Max_binlog_cache_size": corresponds to "binlog_cache_size", but represents the maximum cache memory size that binlog can use. When we execute Multi-statement transactions, if max_binlog_cache_size is not large enough, the system may report the "Multi-statement transaction required more than 'max _ binlog_cache_size 'bytes ofstorage" error.
"Max_binlog_size": maximum value of Binlog logs. Generally, it is set to 512 M or 1G, but cannot exceed 1G. This size does not strictly control the Binlog size. Especially when a large transaction arrives near the end of the Binlog, the system ensures the integrity of the transaction, it is impossible to switch logs. You can only record all SQL statements of the transaction into the current log until the transaction ends. This is a little different from the Redo log of Oracle, because the Redo log of Oracle records changes in the physical location of the data file and records the Redo and Undo information at the same time, therefore, whether a transaction is in a log is not critical to Oracle. MySQL records database logic changes in the Binlog. MySQL calls this Event as a Query statement such as DML that brings database changes. "Sync_binlog": this parameter is crucial for the MySQL system. It not only affects the performance loss caused by Binlog on MySQL, but also affects the data integrity in MySQL. The settings of the "sync_binlog" parameter are described as follows:
● sync_binlog = 0: after a transaction commits, MySQL does not flush the information in binlog_cache to disk with a disk-synchronization call such as fsync; the filesystem decides when to synchronize, or the cache is flushed once it fills up.
● sync_binlog = n: after every n transaction commits, MySQL issues a disk-synchronization call such as fsync, forcing the data in binlog_cache to disk. The default is sync_binlog = 0, i.e. no forced flush, which gives the best performance but also the greatest risk: once the system crashes, all binlog information still in binlog_cache is lost. Setting it to 1 is the safest but costs the most performance: even if the system crashes, at most one transaction not yet completed in binlog_cache is lost, with no material impact on the actual data. From past experience and related tests, for highly concurrent transactional systems, the write-performance gap between sync_binlog = 0 and sync_binlog = 1 can be as high as five times or more.
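A hedged way to check whether the binlog cache is sized well, using the two status counters mentioned above:

mysql> show global status like 'Binlog_cache%';
-- if Binlog_cache_disk_use grows relative to Binlog_cache_use, transactions are
-- spilling to disk and binlog_cache_size may be too small
mysql> set global sync_binlog = 1;
-- the safest setting: fsync the binlog at every commit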
 
Negative effects of the MySQL Query Cache:
A) The cost of hashing the query and of the hash lookup. With the Query Cache enabled, every SELECT that reaches MySQL is first hashed and looked up in the Query Cache. However efficient the hash algorithm may be and however well optimized the lookup, the cost per query is indeed very small, but with thousands of queries per second the CPU consumption cannot be ignored entirely.
B) Query Cache invalidation. If our tables change frequently, the Query Cache becomes very inefficient. Table changes here mean not only changes to the table data but any change to its structure or indexes. A result cached in the Query Cache may be cleared soon after it is stored because the underlying data changed, and when the same query arrives again, the earlier cache entry cannot be used.
C) The Query Cache caches result sets, not data pages; the same record may therefore be cached many times, leading to excessive memory consumption. Of course, some will say we can limit the Query Cache size. Indeed we can, but then cache entries are easily swapped out for lack of memory, and the hit rate drops.
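A hedged way to judge whether the Query Cache is paying for itself on a given server:

mysql> show variables like 'query_cache%';
mysql> show global status like 'Qcache%';
-- compare Qcache_hits with Com_select; frequent Qcache_lowmem_prunes suggest
-- the cache is thrashing rather than helping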
 
In an application system that uses short-lived connections, thread_cache_size should be set to a relatively large value, and not smaller than the number of concurrent connection requests the application actually makes to the database.
 
Through the system settings and an analysis of the current status, we may find that thread_cache_size is sufficient, or even far greater than the system needs; in that case we can reduce it appropriately, for example to 8 or 16. From the two system status values Connections and Threads_created, we can also calculate the Thread Cache hit rate for new connections, i.e. the proportion of connection threads obtained from the thread cache pool out of all connections received by the system:
Threads_Cache_Hit = (Connections - Threads_created) / Connections * 100%
In general, once the system has been running stably for a while, the Thread Cache hit rate should stay around 90% or higher to be considered normal. In the environment above, the Thread Cache hit rate is basically normal.
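A minimal way to pull the two counters and compute the rate by hand (a sketch; the numbers are illustrative):

mysql> show global status like 'Connections';
mysql> show global status like 'Threads_created';
-- e.g. with Connections = 10000 and Threads_created = 500,
-- the hit rate is (10000 - 500) / 10000 * 100% = 95%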
 
How to view the number of tables opened by MySQL:
mysql> show status like 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 6     |
+---------------+-------+
 
MySQL buffer considerations
Join_buffer_size and sort_buffer_size are per-thread buffer sizes, not buffers shared by the whole system (see the sketch below).
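Because they are per-thread, raising these globally multiplies memory use by the number of connections; a hedged sketch of the safer, session-level approach (values illustrative):

mysql> set session join_buffer_size = 2 * 1024 * 1024;
mysql> set session sort_buffer_size = 2 * 1024 * 1024;
-- a global value of 2MB with 500 connections could pin up to 1GB per buffer type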
 
Assume a host used exclusively for MySQL, with 8 GB of physical memory in total, a maximum of 500 MySQL connections, and the MyISAM storage engine also in use. How should we allocate the memory overall?
The memory is allocated to the following parts:
A) used by the system. Assume that 800 M is reserved;
B) Thread-exclusive memory, about 2 GB = 500 * (1 MB + 1 MB + 1 MB + 512 KB + 512 KB), composed roughly as follows:
Sort_buffer_size: 1 MB
Join_buffer_size: 1 MB
Read_buffer_size: 1 MB
Read_rnd_buffer_size: 512KB
Thread_stack: 512KB
C) MyISAM Key Cache, which is assumed to be approximately 1.5 GB;
D) Maximum Innodb Buffer Pool: 8 GB - 800 MB - 2 GB - 1.5 GB = 3.7 GB;
