MySQL Database: Common Storage Engines (I)


Anyone familiar with MySQL will appreciate its powerful pluggable storage engine architecture. MySQL supports many storage engines, and when the current engine does not meet your needs you can choose a more suitable one, copy the relevant files into the relevant path, and start using it without restarting the database. It really is powerful.

1 Common storage engines
Memory Storage Engine

The table structure is stored on disk in a .frm file; the data itself is kept in memory, so only the empty structure survives a server restart
Column types such as BLOB and TEXT are not supported
Supports table-level locking
Supports both B-tree indexes and hash indexes
Often used as a data cache
Very fast inserts
Memory allocated to a MEMORY table is not freed while the table exists: space from deleted rows is not returned to the system, but is reused by newly inserted rows
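As a quick sketch of the points above (the table and column names here are illustrative, not taken from the article), a MEMORY table is created and used like any other table, but its rows vanish after a server restart while the .frm definition persists:

```sql
-- Hypothetical example: a lookup cache using the MEMORY engine.
CREATE TABLE session_cache (
    id   INT NOT NULL,
    name CHAR(20) NOT NULL,
    INDEX idx_name USING HASH (name)  -- MEMORY supports HASH (the default) and BTREE indexes
) ENGINE=MEMORY DEFAULT CHARSET=utf8;

INSERT INTO session_cache VALUES (1, 'linux'), (2, 'mysql');
SELECT * FROM session_cache;  -- rows are readable only until the next restart
```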
CSV Storage Engine
All columns must be declared NOT NULL
The CSV engine supports neither indexes nor partitioning
File formats: .frm holds the table structure information, .CSV holds the actual data, and .CSM holds the table state and row count
You can edit the .csv file directly to change the data; use CHECK TABLE to verify it and REPAIR TABLE to fix it (note: after manually changing the .csv file, run REPAIR TABLE to reload the data)

For example:

# Create a table with the CSV storage engine
mysql> CREATE TABLE csv1 (
    ->   id   INT NOT NULL,
    ->   name CHAR(10) NOT NULL DEFAULT 'zn'
    -> ) ENGINE=CSV CHARSET=utf8;

# Check the table structure
mysql> DESC csv1;
+-------+----------+------+-----+---------+-------+
| Field | Type     | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+-------+
| id    | int(11)  | NO   |     | NULL    |       |
| name  | char(10) | NO   |     | zn      |       |
+-------+----------+------+-----+---------+-------+
2 rows in set (0.02 sec)

# Insert data
mysql> INSERT INTO csv1 VALUES (3, 'linux'), (20, 'mysql');
Query OK, 2 rows affected (0.05 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> INSERT INTO csv1 VALUES (9, 'linux'), (8, 'mysql');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM csv1;
+----+-------+
| id | name  |
+----+-------+
|  3 | linux |
| 20 | mysql |
|  9 | linux |
|  8 | mysql |
+----+-------+
4 rows in set (0.00 sec)

# Edit the data file by hand (RPM-package install path; adjust to your own installation)
vim /var/lib/mysql/test/csv1.CSV
9,"linux"
99,"docker"
200,"baidu"
44,"openstack"
155,"facebook"
121,"ansible"

# Check the table
mysql> CHECK TABLE csv1;
+-----------+-------+----------+----------+
| Table     | Op    | Msg_type | Msg_text |
+-----------+-------+----------+----------+
| test.csv1 | check | error    | Corrupt  |
+-----------+-------+----------+----------+
1 row in set (0.03 sec)

# Repair the table
mysql> REPAIR TABLE csv1;
+-----------+--------+----------+----------+
| Table     | Op     | Msg_type | Msg_text |
+-----------+--------+----------+----------+
| test.csv1 | repair | status   | OK       |
+-----------+--------+----------+----------+
1 row in set (0.05 sec)

# Check again after the repair
mysql> CHECK TABLE csv1;
+-----------+-------+----------+----------+
| Table     | Op    | Msg_type | Msg_text |
+-----------+-------+----------+----------+
| test.csv1 | check | status   | OK       |
+-----------+-------+----------+----------+
1 row in set (0.03 sec)

# Verify the data -- the manual change took effect
mysql> SELECT * FROM csv1;
+-----+-----------+
| id  | name      |
+-----+-----------+
|   9 | linux     |
|  99 | docker    |
| 200 | baidu     |
|  44 | openstack |
| 155 | facebook  |
| 121 | ansible   |
+-----+-----------+
6 rows in set (0.00 sec)

Note: CHECK TABLE verifies that the CSV file uses the correct delimiters and that the data columns match the defined table structure, raising an error when it finds an illegal row. REPAIR TABLE then tries to copy the legitimate rows from the current CSV file and discards the illegal ones. Be aware, however, that once REPAIR encounters a corrupted record line, all data after that line is lost, legitimate or not.

ARCHIVE Storage Engine
Typical application scenario: archiving
Supports heavy data compression: inserted rows are compressed with the zlib lossless compression algorithm
OPTIMIZE TABLE can be used to analyze the table and pack it into a smaller format
Supports only INSERT and SELECT statements; DELETE, REPLACE, UPDATE, and TRUNCATE are not available; ORDER BY operations and BLOB column types are supported
Supports row-level locking, but not indexes
File formats: .frm holds the table definition and .ARZ holds the data; a file with the .ARN extension may also appear while an optimization operation is running
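As a small illustration of the points above (the table name and data are hypothetical), an ARCHIVE table can be repacked after bulk inserts; while OPTIMIZE TABLE runs, a temporary .ARN file may appear alongside the .ARZ file:

```sql
-- Hypothetical example: an append-only log table using the ARCHIVE engine.
CREATE TABLE log_archive (
    msg VARCHAR(255)
) ENGINE=ARCHIVE;

INSERT INTO log_archive VALUES ('boot'), ('login'), ('logout');

-- Analyze the table and repack the compressed data file into a smaller format
OPTIMIZE TABLE log_archive;
```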

Simple test:

Create a table with the MyISAM storage engine and insert data, then create a table with the ARCHIVE storage engine holding the same data, and compare the storage space each one uses.

# Create a test table and load data
mysql> CREATE TABLE archive2 ENGINE=MyISAM AS
    ->   SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
    ->   FROM information_schema.columns;
Query OK, 3362 rows affected (0.10 sec)
Records: 3362  Duplicates: 0  Warnings: 0

mysql> SELECT COUNT(*) FROM archive2;
+----------+
| count(*) |
+----------+
|     3362 |
+----------+
1 row in set (0.00 sec)

# Keep inserting data (executed multiple times)
mysql> INSERT INTO archive2 SELECT * FROM archive2;
Query OK, 107584 rows affected (0.23 sec)

# Check the row count
mysql> SELECT COUNT(*) FROM archive2;
+----------+
| count(*) |
+----------+
|   860672 |
+----------+
1 row in set (0.00 sec)

# Check the data size
mysql> SHOW TABLE STATUS LIKE 'archive2'\G
*************************** 1. row ***************************
           Name: archive2
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 860672
 Avg_row_length: 53
    Data_length: 45790208
Max_data_length: 281474976710655
   Index_length: 1024
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2017-05-16 13:35:26
    Update_time: 2017-05-16 13:38:14
     Check_time: NULL
      Collation: gbk_chinese_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.00 sec)
# Data size: 45790208

# Create a new table with the ARCHIVE storage engine
mysql> CREATE TABLE archive3 ENGINE=ARCHIVE AS SELECT * FROM archive2;
Query OK, 860672 rows affected (2.69 sec)
Records: 860672  Duplicates: 0  Warnings: 0

mysql> SELECT COUNT(*) FROM archive3;
+----------+
| count(*) |
+----------+
|   860672 |
+----------+
1 row in set (0.11 sec)

# Check the size
mysql> SHOW TABLE STATUS LIKE 'archive3'\G
*************************** 1. row ***************************
           Name: archive3
         Engine: ARCHIVE
        Version: 10
     Row_format: Compressed
           Rows: 860672
 Avg_row_length: 6
    Data_length: 5801647
Max_data_length: 0
   Index_length: 0
      Data_free: 0
 Auto_increment: NULL
    Create_time: NULL
    Update_time: 2017-05-16 13:42:35
     Check_time: NULL
      Collation: gbk_chinese_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.00 sec)
# Data size: 5801647

The ARCHIVE table uses roughly one eighth of the space (5,801,647 bytes versus 45,790,208), a very large difference.

Blackhole Storage Engine
This is a very special storage engine: it accepts writes but stores nothing. Although it accepts data just like any other storage engine, none of that data is ever saved; a BLACKHOLE table is always empty, much like /dev/null on Linux.

# Try creating a table
mysql> CREATE TABLE black ENGINE=BLACKHOLE AS SELECT * FROM archive2;
Query OK, 860672 rows affected (0.65 sec)
Records: 860672  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM black;
Empty set (0.00 sec)

mysql> INSERT INTO black SELECT * FROM archive2;
Query OK, 860672 rows affected (0.62 sec)
Records: 860672  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM black;
Empty set (0.00 sec)

Repeated tests give the same surprising result: every insert succeeds, yet no data can ever be read back. So what is this curious storage engine actually good for?

1. Although the BLACKHOLE storage engine saves no data, when the binlog is enabled the executed SQL statements are still logged, which means they can be replicated to a slave. For example:



[Figure: Blackhole storage engine used in a replication topology]

Combined with the replicate-do-* or replicate-ignore-* rules of the replication feature, this ingenious design can filter the binary log: the same writes are applied, yet the data on the master and the slave ends up different.
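One common way to exploit this, sketched below with illustrative server IDs and database names (none of them come from the article), is an intermediate "relay" server whose tables all use BLACKHOLE: it stores no data itself, but applies the filtering rules and re-logs the surviving statements for downstream slaves:

```ini
# my.cnf on a hypothetical BLACKHOLE relay server (all values illustrative)
[mysqld]
server-id              = 2
log-bin                = relay-bin   # re-log filtered statements for downstream slaves
log-slave-updates      = 1
replicate-do-db        = app_db      # keep only this database's events
replicate-ignore-db    = test        # discard everything from this one
default-storage-engine = BLACKHOLE   # tables hold no data, only pass events through
```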

Triggers: an INSERT trigger on a BLACKHOLE table fires as normal, but because a BLACKHOLE table is always empty, UPDATE and DELETE triggers never fire; the FOR EACH ROW clause in such triggers has no rows to apply to.

Other application scenarios:
Verifying dump file syntax
Measuring the overhead of binary logging, by comparing performance with the binlog enabled against a run started with it disabled
The BLACKHOLE storage engine supports transactions: committed transactions are written to the binary log, while rolled-back ones are not
BLACKHOLE and auto-increment columns:
The BLACKHOLE engine is a no-op engine: every operation on a BLACKHOLE table does nothing, so you need to think about auto-increment column behavior. The engine does not advance the auto-increment counter and does not actually preserve the state of the auto-increment field, which matters for replication.
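The transaction behavior above can be sketched as follows (the table name is hypothetical; this assumes binary logging is enabled on the server):

```sql
-- Hypothetical sketch: only the committed transaction reaches the binlog.
CREATE TABLE bh_demo (id INT) ENGINE=BLACKHOLE;

START TRANSACTION;
INSERT INTO bh_demo VALUES (1);
COMMIT;    -- this INSERT is written to the binary log

START TRANSACTION;
INSERT INTO bh_demo VALUES (2);
ROLLBACK;  -- this INSERT never reaches the binary log

-- SHOW BINLOG EVENTS can be used to confirm which statements were logged
```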

Consider the following replication scenario:
1. The master-side BLACKHOLE table has an auto-increment primary key column
2. The slave-side table uses the MyISAM storage engine
3. Inserts into the master-side table do not explicitly set the auto-increment column value
In this scenario, a duplicate-key error on the primary key column appears on the slave. In statement-based replication (SBR) mode, the INSERT_ID of every insert event is the same, so replication fails with a duplicate-key error. In row-based replication mode, the engine always returns the same column value, and the slave likewise errors when it tries to insert that same value again.
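The problem scenario above can be set up like this (table and column names are hypothetical); the comments restate why the slave fails:

```sql
-- Hypothetical sketch of the replication scenario described above.
-- Master side: a BLACKHOLE table with an auto-increment primary key.
CREATE TABLE t (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(20)
) ENGINE=BLACKHOLE;

-- Inserts that do not set the id column explicitly:
INSERT INTO t (name) VALUES ('a');
INSERT INTO t (name) VALUES ('b');

-- The BLACKHOLE engine never advances the auto-increment counter, so in
-- statement-based replication each INSERT event carries the same INSERT_ID.
-- On a slave whose copy of t uses MyISAM, replaying these events attempts
-- to insert the same id twice and fails with a duplicate-key error.
```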

MySQL's pluggable storage engines are feature-rich, each suited to particular application scenarios; the more deeply you understand their principles, the better performance you can get out of MySQL.

This article is from the "Keep Dreaming" blog; please be sure to keep this source: http://dreamlinux.blog.51cto.com/9079323/1926304

