This article is reproduced from http://blog.csdn.net/wulantian/article/details/29593803
Thanks to the original author for the hard work.
One, Security improvements
1. Authentication credentials can now be stored encrypted in a .mylogin.cnf file, created with the mysql_config_editor utility. Client programs read this file to obtain credentials for connecting to a server. mysql_config_editor stores the password encrypted rather than in plaintext, and the client decrypts it only in memory, so the password is never exposed on the command line or in an environment variable. See Section 4.6.6, "mysql_config_editor - MySQL Configuration Utility".
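As a sketch of how this works in practice (the login-path name and account below are made-up examples):

```shell
# Store credentials encrypted in ~/.mylogin.cnf; the --password flag
# prompts interactively, so the password never appears on the command
# line or in shell history
mysql_config_editor set --login-path=local --host=127.0.0.1 --user=appuser --password

# Later, connect using the stored credentials
mysql --login-path=local
```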
2. The new sha256_password plugin provides stronger password hashing based on SHA-256. The plugin is built in. See Section 6.3.6.2, "The SHA-256 Authentication Plugin".
3. The mysql.user table now has a password_expired column, whose default value is 'N'; the new ALTER USER statement can set it to 'Y'. Once a password is expired, subsequent connections using the account can only raise errors until the user sets a new password with SET PASSWORD. See Section 13.7.1.1, "ALTER USER Syntax".
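A minimal sketch of the expiration workflow (the account name is hypothetical):

```sql
-- DBA expires the account's password; mysql.user.password_expired becomes 'Y'
ALTER USER 'appuser'@'localhost' PASSWORD EXPIRE;

-- The next time appuser connects, every statement fails until the user
-- assigns a new password:
SET PASSWORD = PASSWORD('new_secret_value');
```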
4. Password security policies are now available:
- When a password is specified in cleartext, it is checked against the current password policy and rejected if it is too weak (the statement returns an ER_NOT_VALID_PASSWORD error). This affects CREATE USER, GRANT, and SET PASSWORD, and also applies to passwords passed as arguments to PASSWORD() and OLD_PASSWORD().
- Password strength can be checked with the new VALIDATE_PASSWORD_STRENGTH() function, which takes the password as an argument and returns a value from 0 (weak) to 100 (strong).
- Both capabilities are provided by the validate_password plugin. See Section 6.1.2.6, "The Password Validation Plugin".
- mysql_upgrade now warns if it finds password hashes in the pre-4.1 format. Such accounts should be upgraded to a more secure hash. See Section 6.1.2.4, "Password Hashing in MySQL".
5. Logging has been changed so that passwords no longer appear in plaintext in the general log, binary log, or slow query log. See Section 6.1.2.3, "Passwords and Logging".
6. The START SLAVE statement now accepts options specifying the credentials used to connect to the master, so the password need not be stored in the master.info file. See Section 13.4.2.5, "START SLAVE Syntax".
Two, Changes to default values
7. Starting with 5.6.6, several server defaults differ from earlier releases. The motivation is to provide better out-of-the-box performance and reduce the need for manual tuning. See Section 5.1.2.1, "Changes to Server Defaults".
Three, InnoDB enhancements
8. Full-text indexes can now be created on InnoDB tables and queried with MATCH() ... AGAINST. This feature includes a new proximity search operator (@), several new configuration options, and new INFORMATION_SCHEMA tables. See Section 14.2.4.12.3, "FULLTEXT Indexes".
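For illustration, a sketch of an InnoDB full-text index and the new proximity operator (the table and data are invented):

```sql
CREATE TABLE articles (
  id    INT AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  body  TEXT,
  FULLTEXT KEY ft_text (title, body)
) ENGINE=InnoDB;

-- Natural-language search
SELECT id, title FROM articles
WHERE MATCH(title, body) AGAINST('replication tutorial' IN NATURAL LANGUAGE MODE);

-- Boolean mode with the proximity operator: the two quoted words must
-- appear within 5 words of each other
SELECT id FROM articles
WHERE MATCH(title, body) AGAINST('"master slave" @5' IN BOOLEAN MODE);
```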
9. Several ALTER TABLE operations no longer copy the table and no longer block INSERT, UPDATE, and DELETE, or writes in general. This is known as online DDL. See Section 14.2.2.6, "Online DDL for InnoDB Tables".
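For example, adding a secondary index can now run in place while DML continues (the table and index names are illustrative):

```sql
-- Build the index without copying the table and without blocking writes
ALTER TABLE orders
  ADD INDEX idx_customer (customer_id),
  ALGORITHM=INPLACE, LOCK=NONE;
```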
10. You now have more control over the placement of .ibd files. File-per-table mode lets you specify a directory outside the MySQL data directory when creating a table, so you can, for example, put a heavily loaded table on an SSD device or a very large table on a high-capacity HDD. You can also export a table from one instance and import it into another without inconsistencies caused by cached data, in-progress transactions, or internal details such as the space ID and LSN. See Section 14.2.5.2.33, "Tablespace Management".
11. The page size of uncompressed InnoDB tables can be set to 4KB or 8KB instead of the default 16KB, using the innodb_page_size option. This must be specified when the instance is created; all InnoDB tablespaces within an instance share the same page size. A smaller page size can avoid redundant or inefficient I/O for certain mixed workloads, particularly on SSD devices with small block sizes.
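A sketch of the configuration, assuming a freshly initialized instance (the option cannot be changed after the data files are created):

```
[mysqld]
# Valid values in 5.6 are 4k, 8k, and 16k; must match the value used
# when the instance's tablespaces were first created
innodb_page_size=4k
```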
12. An improved adaptive flushing algorithm makes I/O more efficient and consistent under a variety of workloads. The new algorithm and default settings are expected to improve performance and concurrency for most users. See Section 14.2.5.2.8, "Improvements to Buffer Pool Flushing".
13. Applications can access InnoDB tables through a NoSQL-style API. It uses the popular memcached daemon to handle ADD, SET, and GET requests for key-value pairs, avoiding the overhead of SQL parsing and query-plan construction. The same data can be accessed through both the NoSQL API and SQL: for example, you can use the NoSQL API for fast lookups and SQL for complex queries and compatibility with existing applications. See Section 14.2.10, "InnoDB Integration with memcached".
14. InnoDB optimizer statistics are now gathered at more predictable intervals and persist across server restarts, giving more stable query plans. You can also control the number of sample pages used for InnoDB index statistics, making the statistics more accurate and query plans better. See Section 14.2.5.2.9, "Persistent Optimizer Statistics for InnoDB Tables".
15. New optimizations apply to read-only transactions, improving performance and concurrency for query-heavy and report-generation applications. The optimization is applied automatically where possible, or you can request it explicitly with START TRANSACTION READ ONLY. See Section 14.2.5.2.2, "Optimizations for Read-Only Transactions".
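The explicit form looks like this (the table is invented):

```sql
-- Tell InnoDB up front that no changes will be made, enabling the
-- read-only optimizations for the whole transaction
START TRANSACTION READ ONLY;
SELECT COUNT(*) FROM orders WHERE status = 'shipped';
COMMIT;
```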
16. The InnoDB undo log can be moved out of the system tablespace into one or more separate tablespaces. The I/O pattern of the undo log makes these tablespaces good candidates for SSD devices, while the system tablespace stays on regular disks. See Section 14.2.5.2.3, "Separate Tablespaces for InnoDB Undo Logs".
17. The maximum size of the InnoDB redo log has been raised from 4GB to 512GB, configured via the innodb_log_file_size option.
18. The --innodb-read-only option allows MySQL to run in read-only mode. You can access InnoDB tables on read-only media such as a DVD or CD, or share a data directory among multiple instances to build a read-only data warehouse. See Section 14.2.6.1, "Support for Read-Only Media".
19. A number of new InnoDB-related INFORMATION_SCHEMA tables provide information about the buffer pool, table and index metadata, and foreign keys in the InnoDB data dictionary, along with finer-grained performance information that complements the Performance Schema tables.
20. InnoDB now limits the memory used to hold table metadata when many tables are open.
21. InnoDB has several internal performance improvements, including splitting the kernel mutex to reduce contention, moving flush operations out of the main thread, enabling multiple purge threads, and reducing buffer pool contention on large-memory systems.
22. InnoDB uses a new, faster algorithm to detect deadlocks. Information about all deadlocks can be written to the error log.
23. To avoid a long warmup period after a restart, the buffer pool can be reloaded with previously cached pages immediately. MySQL can dump a compact data file at shutdown listing the pages in the buffer pool, then consult that file at startup to decide which pages to load. You can also dump or reload the buffer pool manually at any time, for example during performance testing or after running complex OLAP queries.
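A sketch of the relevant controls (the at-shutdown/at-startup pair is normally placed in my.cnf):

```sql
-- Take a snapshot of the buffer pool's page list right now
SET GLOBAL innodb_buffer_pool_dump_now = ON;

-- Reload that snapshot on demand, e.g. after a disruptive OLAP run
SET GLOBAL innodb_buffer_pool_load_now = ON;

-- Dump automatically at shutdown; pair with
-- innodb_buffer_pool_load_at_startup in my.cnf to reload on the next start
SET GLOBAL innodb_buffer_pool_dump_at_shutdown = ON;
```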
Four, Partitioning
24. The maximum number of partitions has been raised to 8192, counting all partitions and subpartitions.
25. ALTER TABLE ... EXCHANGE PARTITION can exchange a partition or subpartition of a partitioned table with a non-partitioned table. This can be used, for example, to export and import partitions. See Section 17.3.3, "Exchanging Partitions and Subpartitions with Tables".
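A minimal sketch (t_part and t_plain are hypothetical tables with identical column and index definitions):

```sql
-- Swap the contents of partition p0 with the non-partitioned table,
-- without copying rows
ALTER TABLE t_part EXCHANGE PARTITION p0 WITH TABLE t_plain;
```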
26. Queries and DML statements can explicitly name one or more partitions of a partitioned table. For example, if table t has an INT column c and four partitions p0 through p3, the query SELECT * FROM t PARTITION (p0, p1) WHERE c < 5 returns matching rows only from partitions p0 and p1.
The following statements support explicit partition selection:
- SELECT
- DELETE
- INSERT
- REPLACE
- UPDATE
- LOAD DATA
- LOAD XML
Read more: Section 17.5, "Partition Selection".
27. Partition lock pruning improves the performance of many DML and DDL statements on tables with many partitions by locking only the partitions a statement actually affects. Such statements include SELECT, SELECT ... PARTITION, UPDATE, REPLACE, INSERT, and several others. For details, including which statements benefit, see Section 17.6.4, "Partitioning and Locking".
Five, Performance Schema
28. Instrumentation of table input and output, covering row-level accesses to tables and temporary tables by statements such as INSERT, UPDATE, and DELETE.
29. Event filtering by table, based on database or table name.
30. Event filtering by thread, and more information is now collected about threads.
31. Table and index I/O statistics and table lock statistics.
32. Instrumentation of statements and the stages within their execution.
33. Instruments and consumers can now be configured at server startup, which previously was possible only at runtime.
Six, Replication and logging
34. Replication based on global transaction identifiers (GTIDs) is now supported. Each transaction can be identified and tracked as it is committed on the master and applied on any slave.
35. Start the servers with the --gtid-mode and --enforce-gtid-consistency options to enable GTIDs. See Section 16.1.4.5, "Global Transaction ID Options and Variables".
36. When GTIDs are used, a new slave can be started, or a slave switched to a new master, without relying on the binary log file name and position.
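A sketch of pointing a slave at a master by GTID rather than by file and position (the host and account names are invented; in 5.6, gtid_mode, enforce_gtid_consistency, log_bin, and log_slave_updates must already be enabled on both servers):

```sql
CHANGE MASTER TO
  MASTER_HOST = 'master1.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_AUTO_POSITION = 1;   -- locate the start point by GTID, not file/pos
START SLAVE;
```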
37. GTID-based replication is entirely transaction-based, which makes master-slave consistency easy to check: if every transaction committed on the master has also been committed on the slave, consistency is guaranteed.
38. See Section 16.1.3, "Replication with Global Transaction Identifiers", for more information.
39. Row-based replication now supports row image control. Logging only the unique-key and changed columns (rather than all columns) saves disk space, network traffic, and memory. You control this with the binlog_row_image option, set to minimal (log only the necessary columns), full (log all columns), or noblob (log all columns except unneeded BLOB or TEXT columns). See "System Variables Used with the Binary Log" for more information.
40. Binary log writes and reads are now crash-safe, because only complete events (or transactions) are logged and read back. By default, the server logs each event's size along with the event itself and uses the size to verify that the event was written correctly. You can also set the binlog_checksum option to have the server write CRC32 checksums for logged events, the master_verify_checksum option to have the server verify checksums when reading the binary log, and the --slave-sql-verify-checksum option to have the slave verify checksums when reading the relay log.
41. MySQL can now store master connection information in tables, controlled by the --master-info-repository and --relay-log-info-repository options. Setting --master-info-repository to TABLE records connection information in the slave_master_info table; setting --relay-log-info-repository to TABLE records relay log information in the slave_relay_log_info table. These tables are created automatically in the mysql system database.
Important: for replication to be crash-safe, the slave_master_info and slave_relay_log_info tables must use a transactional storage engine such as InnoDB. By default they use MyISAM, which means you must convert them to a transactional engine before starting replication to gain this safety, using ALTER TABLE ... ENGINE=... . Do not change the engine while replication is running.
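Putting the two notes above together, a sketch of a crash-safe setup (run while replication is stopped):

```sql
STOP SLAVE;

-- Convert the repository tables to a transactional engine
ALTER TABLE mysql.slave_master_info    ENGINE = InnoDB;
ALTER TABLE mysql.slave_relay_log_info ENGINE = InnoDB;

-- Switch both repositories from files to tables
SET GLOBAL master_info_repository    = 'TABLE';
SET GLOBAL relay_log_info_repository = 'TABLE';

START SLAVE;
```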
42. mysqlbinlog can now back up binary logs in their original binary format, using the --read-from-remote-server and --raw options together. mysqlbinlog connects to the server, requests the log files, and writes output files in the same format as the originals. See Section 4.6.8.3, "Using mysqlbinlog to Back Up Binary Log Files".
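For example (the server, account, and log file name are placeholders):

```shell
# Copy binary logs from a remote master in their original binary format,
# starting from the named file
mysqlbinlog --read-from-remote-server --raw \
  --host=master1.example.com --user=repl --password \
  mysql-bin.000001
```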
43. MySQL now supports delayed replication; the default delay is 0 seconds. Use the MASTER_DELAY option of CHANGE MASTER TO to set the delay.
44. Delayed replication protects against user errors on the master (a DBA can recover from a mistake using a delayed slave that has not yet applied it) and lets you test how the system behaves under replication lag. See Section 16.3.9, "Delayed Replication".
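A sketch of configuring a one-hour delay:

```sql
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;  -- apply events at least 3600s after the master
START SLAVE;
```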
45. If a replication slave has multiple network interfaces, the MASTER_BIND option of CHANGE MASTER TO selects which one to use.
46. The new log_bin_basename system variable reports the complete path and file name of the binary log; log_bin itself now indicates only whether the binary log is enabled. The new relay_log_basename variable works the same way for the relay log.
47. Multithreaded replication is now supported. When enabled, the slave SQL thread acts as the coordinator for a number of worker threads, determined by slave_parallel_workers. Multithreaded replication currently parallelizes by database: the relative order of updates within a given database is the same as on the master, but transactions in different databases need not be coordinated. Transactions are distributed per database, so a worker thread on the slave can execute its transactions sequentially without waiting for updates to other databases to complete.
48. Because transactions for different databases may be committed in a different order on the slave than on the master, simply checking the most recently executed transaction does not guarantee that all earlier transactions from the master have been executed on the slave. This has implications for logging and recovery with multithreaded replication. For details on how to interpret binary log information in this situation, see Section 13.7.5.35, "SHOW SLAVE STATUS Syntax".
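Enabling the multithreaded applier is a one-variable change (the worker count here is arbitrary):

```sql
STOP SLAVE;
SET GLOBAL slave_parallel_workers = 4;  -- SQL thread coordinates 4 workers
START SLAVE;
```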
Seven, Optimizer improvements
49. The query optimizer handles queries (and subqueries) of the following form more efficiently: SELECT ... FROM single_table ... ORDER BY non_index_column [DESC] LIMIT [M,]N;
Queries of this kind, which display a few rows from a large result set, are common on web sites. For example: SELECT col1, ... FROM t1 ... ORDER BY name LIMIT 10; SELECT col1, ... FROM t1 ... ORDER BY RAND() LIMIT 15;
The sort buffer size is set by sort_buffer_size. If the N rows to be sorted (M+N rows if M is specified) are small enough to fit in the sort buffer, the server can avoid a merge file and perform the entire sort in memory. See Section 8.2.1.3, "Optimizing LIMIT Queries".
50. The optimizer uses the Multi-Range Read (MRR) optimization. Scanning a secondary index can cause many random disk accesses if the table is too large to fit in the cache. With MRR, the optimizer first scans the index and collects the primary keys of the matching rows, sorts them, and then accesses the base table in primary key order, replacing random disk accesses with sequential ones.
51. The optimizer uses Index Condition Pushdown (ICP). Without ICP, the server uses the index to locate rows, reads them, and then applies the WHERE condition at the server layer, discarding non-matching rows. With ICP, if parts of the WHERE condition can be evaluated using columns in the index, MySQL pushes those conditions down to the storage engine layer, which evaluates them against index entries and reads a full row only when the condition is satisfied. ICP reduces the storage engine's accesses to the base table and the server layer's accesses to the storage engine. See Section 8.13.4, "Index Condition Pushdown Optimization".
52. The EXPLAIN statement now works on DELETE, INSERT, REPLACE, and UPDATE statements; previously it applied only to SELECT. In addition, EXPLAIN output can now be produced in JSON format. See Section 13.8.2, "EXPLAIN Syntax".
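Both additions in one sketch (the table and columns are invented):

```sql
-- EXPLAIN on a data-changing statement, new in 5.6
EXPLAIN UPDATE orders SET status = 'shipped' WHERE id = 10;

-- Structured JSON output, also new in 5.6
EXPLAIN FORMAT=JSON
SELECT * FROM orders WHERE customer_id = 42;
```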
53. The optimizer handles subqueries in the FROM clause (derived tables) more efficiently: materialization is postponed until the result is actually needed during query execution, and the optimizer may add an index to a materialized derived table to speed up row retrieval. See Section 8.13.15.3, "Optimizing Subqueries in the FROM Clause (Derived Tables)".
54. The optimizer uses semi-join and materialization strategies to optimize subqueries. See Section 8.13.15.1, "Optimizing Subqueries with Semi-Join Transformations", and Section 8.13.15.2, "Optimizing Subqueries with Subquery Materialization".
55. The new Batched Key Access (BKA) join algorithm is used when index access to the joined table and a join buffer are available. BKA supports inner joins, outer joins, and semi-joins, including nested outer joins. Its benefit is improved performance of table scans during joins. See Section 8.13.11, "Block Nested-Loop and Batched Key Access Joins".
56. The optimizer now has a tracing capability, primarily useful for developers. The interface is provided by a set of optimizer_trace_xxx system variables and the INFORMATION_SCHEMA.OPTIMIZER_TRACE table. For more information, see MySQL Internals: Tracing the Optimizer.
Eight, Condition handling
57. MySQL now supports the GET DIAGNOSTICS statement, which provides diagnostic information such as why the previous SQL statement raised a condition. See Section 13.6.7.3, "GET DIAGNOSTICS Syntax".
58. In addition, several deficiencies in condition handling have been corrected to bring it closer to standard SQL:
59. Block scope is now used when deciding which handler to select. Previously, a stored program was treated as having a single scope for handler selection.
60. Condition precedence is resolved more accurately.
61. Diagnostics area clearing has changed. Bug #55843 caused condition information to be cleared before a handler was activated, making it unavailable inside the handler. Now the information can be retrieved with GET DIAGNOSTICS; it is cleared when the handler exits, if it has not already been cleared during handler execution.
62. Previously, a handler was activated as soon as a condition was raised. Now the statement is allowed to finish so that the most appropriate handler can be chosen. This matters for statements that raise multiple conditions: if a condition with higher precedence is raised later in the statement and a handler in scope can handle it, that handler is now selected, whereas previously the handler for the first condition raised was selected even if it had lower precedence.
See Section 13.6.7.6, "Scope Rules for Handlers".
Nine, Data types
63. TIME, DATETIME, and TIMESTAMP now support fractional seconds, with up to microsecond (6-digit) precision. See Section 11.3.6, "Fractional Seconds in Time Values".
64. Previously, at most one TIMESTAMP column per table could be automatically initialized or updated to the current date and time. This restriction is gone: any TIMESTAMP column can have either or both of these properties, and the properties can also be applied to DATETIME columns. See Section 11.3.5, "Automatic Initialization and Updating for TIMESTAMP and DATETIME".
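A sketch combining both features, microsecond precision plus automatic initialization and updating on more than one column (the table is invented):

```sql
CREATE TABLE events (
  id         INT AUTO_INCREMENT PRIMARY KEY,
  -- DATETIME may now carry the auto-initialization attribute too
  created_at DATETIME(6)  DEFAULT CURRENT_TIMESTAMP(6),
  updated_at TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6)
                          ON UPDATE CURRENT_TIMESTAMP(6)
);
```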
65. The explicit_defaults_for_timestamp option disables the nonstandard legacy default behaviors of TIMESTAMP columns, including automatic initialization and updating. See Section 5.1.4, "Server System Variables".
66. The YEAR(2) type is deprecated. YEAR(2) columns in existing tables are treated as before, but YEAR(2) in new or altered table definitions is converted to YEAR(4). See Section 11.3.4, "YEAR(2) Limitations and Migrating to YEAR(4)".
Ten, Host cache
67. The server now provides more information about the causes of client connection failures and improved access to the host cache, which holds client IP addresses and host names to avoid DNS lookups.
68. New Connection_errors_xxx status variables report connection failure counts that are not tied to any specific client IP address.
69. Counters for specific connection errors have been added to the host cache, along with a new host_cache Performance Schema table. You can see exactly which hosts are cached, the types of errors each host has experienced, and how close an error-prone host is to the max_connect_errors limit.
70. The host cache size is now configurable with the host_cache_size variable.
See Section 8.11.5.2, "DNS Lookup Optimization and the Host Cache", and Section 20.9.9.1, "The host_cache Table".
Eleven, OpenGIS
71. The OpenGIS specification defines functions that test the relationship between two geometric values. Previously, MySQL implemented these only using minimum bounding rectangles (MBRs), so results were approximate. Precise object-shape versions are now available, named with an ST_ prefix: for example, Contains() uses rectangles while ST_Contains() uses the exact object shapes. See Section 12.17.5.4.2, "Functions That Test Spatial Relationships Between Geometries".
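A sketch contrasting the two families of functions (the geometry values are invented):

```sql
SET @poly = GeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))');
SET @pt   = GeomFromText('POINT(5 5)');

-- Contains() tests against the minimum bounding rectangle;
-- ST_Contains() tests against the exact polygon shape
SELECT Contains(@poly, @pt)    AS mbr_test,
       ST_Contains(@poly, @pt) AS precise_test;
```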
Twelve, Removed options and variables
72. The following options and system variables have been removed; their replacements are shown.
Removed --log; use --general_log to enable the general query log and --general_log_file=file_name to set its file name.
Removed --log-slow-queries and log_slow_queries; use --slow_query_log to enable the slow query log and --slow_query_log_file=file_name to set its file name.
Removed --one-thread; use --thread_handling=no-threads instead.
Removed --safe-mode.
Removed --skip-thread-priority.
Removed --table-cache; use table_open_cache instead.
Removed the --init-rpl-role and --rpl-recovery-rank options, the rpl_recovery_rank system variable, and the Rpl_status status variable.
Removed the engine_condition_pushdown system variable; use the engine_condition_pushdown flag of the optimizer_switch variable instead.
Removed the have_csv, have_innodb, have_ndbcluster, and have_partitioning variables; use SHOW ENGINES instead.
Removed sql_big_tables; use big_tables.
Removed sql_low_priority_updates; use low_priority_updates.
Removed sql_max_join_size; use max_join_size.
Removed max_long_data_size; use max_allowed_packet.
Removed the FLUSH MASTER and FLUSH SLAVE statements; use RESET MASTER and RESET SLAVE instead.
Removed the SLAVE START and SLAVE STOP statements; use START SLAVE and STOP SLAVE instead.
Removed the SHOW AUTHORS and SHOW CONTRIBUTORS statements.
Removed the OPTION and ONE_SHOT modifiers of the SET statement.
DEFAULT is no longer allowed when assigning to stored procedure or function parameters or stored program local variables (for example, SET var_name = DEFAULT), but it is still allowed when assigning to system variables.
MySQL 5.6 New Features (reprint)