Countermeasures against a single MySQL table exceeding 4 GB


Question: When replying to a post on the forum, the prompt "The table is full" appears, which literally means the data table is full. Since few developers ever run into a single table larger than 4 GB, friends could only offer peripheral advice. To solve the problem I read a great deal of material, and this article presents the causes of the problem and the countermeasures, following the process by which it was actually solved.
In my experience, the "The table is full" prompt usually appears in the following two situations:

1. The table has a MAX_ROWS value set. Simply put, if MAX_ROWS is set to 100 and the program tries to write the 101st record, this error appears (see the sketch after this list).

2. The table is full. This is the focus of this article.
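As a minimal sketch of situation 1 (the table name demo_limit is hypothetical; note that MyISAM actually treats MAX_ROWS as a hint for sizing its internal row pointer rather than an exact cap, so the error does not necessarily fire at exactly the 101st record):

mysql> CREATE TABLE demo_limit (id INT NOT NULL, body TEXT) TYPE=MyISAM MAX_ROWS=100;

Once inserts exhaust the address space MyISAM reserved on the strength of that hint, further writes fail with the familiar "The table 'demo_limit' is full".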

Evidently MySQL follows an addressing rule when it accesses a table: by default, the row pointer it allocates can address data within 4 GB. Beyond that size the database can no longer locate rows and therefore can neither read nor write the data. As the experiment below shows, this restriction can be removed entirely.
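Incidentally, if you only have a MySQL client and no shell access, you can read the same ceiling from inside the server: the Max_data_length column in the output of SHOW TABLE STATUS reports the figure that myisamchk calls Max datafile length.

mysql> SHOW TABLE STATUS LIKE 'cdb_posts' \G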

In this case the user's system environment was: dual Athlon processors, a 72 GB SCSI hard drive, and 2 GB of memory. The user's post table had a data file of 4294963640 bytes, just short of 4 GB (4 GB is exactly 4294967296 bytes).

First, log in over SSH and look at the user's system information:

# uname -a

Linux zichen.com 2.4.20-8smp #1 SMP Thu Mar 13 16:43:01 EST 2003 i686 athlon i386 GNU/Linux

So it is a Linux system. Judging from the kernel version, 2.4.20-8smp, and the distributions commonly used in China, we guessed it was a Red Hat 9 release:

# cat /etc/*release*

Red Hat Linux release 9 (Shrike)

This confirms our guess about the distribution.

Next, let's see which file system is in use. Since this user is not an expert, the system was presumably installed by just pressing Enter through the defaults, and on Red Hat 9 the default is ext3. Still, let's check:

# parted

GNU Parted 1.6.3

Copyright (C) 1998, 1999, 2000, 2001, 2002 Free Software Foundation, Inc.

This program is free software, covered by the GNU General Public License.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of

MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Using /dev/sda

Information: The operating system thinks the geometry on /dev/sda is 8942/255/63. Therefore, cylinder 1024 ends at 8032.499M.

(parted) print

Disk geometry for /dev/sda: 0.000-70149.507 megabytes

Disk label type: msdos

Minor Start End Type Filesystem Flags

1 0.031 101.975 primary ext3 boot

2 101.975 10103.378 primary linux-swap

That settles it. We then looked at the technical parameters of the ext3 file system. ext3 evolved from ext2, and the maximum single-file size on ext2 was 2 GB, a very awkward limit. One major improvement in ext3 is that this ceiling was raised to 2 TB, which gives far more headroom; at least the operating system is no longer the limiting factor.

With a friend's guidance, I learned that the maximum size of a single file is constrained by the following factors:

1. The limit of the file system itself (such as ext3's 2 TB limit mentioned just now)

2. The maximum file size a single program process can access (for example, on Linux with ext3, Apache can access at most 2 GB, say for its log files)

The preliminary judgment was that the bottleneck was the second factor, so I ran myisamchk to display the table information, which proved that the bottleneck lay in MySQL's own access limit:

# myisamchk -dv cdb_posts

The full output is not pasted here; the key value, Max datafile length, was exactly 4 GB. That was the bottleneck.

Later I read many more documents, tried many more things, and took plenty of detours; in the end the official documentation proved the most reliable. Older documents claim the problem is caused by the value of tmp_table_size, and the BIG-TABLES parameter is also mentioned, but these all turn out to be dead ends (see the note below). That evening was truly tiring, so only the final solution is given here, sparing you the long middle part.
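For the record, those two red herrings are real settings, but they govern the implicit in-memory temporary tables MySQL creates for sorting and grouping, not the on-disk size of a MyISAM data file, which is why they cannot help here. A hypothetical session, shown only to identify the settings:

mysql> SHOW VARIABLES LIKE 'tmp_table_size';

mysql> SET SQL_BIG_TABLES = 1;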

Enter the mysql client.

# mysql -uroot -p

Enter password: ******

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 59411 to server version: 4.0.18-standard

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use ******

Database changed

mysql> ALTER TABLE cdb_posts MAX_ROWS=1000000000 AVG_ROW_LENGTH=15000;

Because this table is very large, the ALTER, which rebuilds the whole table, took 30 minutes even on this professional dual-Athlon server!

Then you can use myisamchk to view the table information:

# myisamchk -dv cdb_posts

MyISAM file: cdb_posts

Record format: Packed

Character set: latin1 (8)

File-version: 1

Creation time: 2004-08-30 22:19:48

Recover time: 22:42:47

Status: open, changed

Auto increment key: 1 Last value: 1063143

Data records: 619904 Deleted blocks: 5

Datafile parts: 619909 Deleted data: 323872

Datafile pointer (bytes): 6 Keyfile pointer (bytes): 4

Datafile length: 4295287332 Keyfile length: 40421376

Max datafile length: 281474976710654 Max keyfile length: 4398046510079

Recordlength: 149

Table description:

Key Start Len Index Type Rec/key Root Blocksize

1 1 4 unique unsigned long 1 4535296 1024

2 5 2 multip. unsigned short 13776 12540928 1024

3 111 4 multip. unsigned long 1 18854912 1024

4 28 3 multip. uint24 18 24546304 1024

5 7 3 multip. uint24 7 32827392 1024

111 4 unsigned long 1

6 7 3 multip. uint24 7 40418304 1024

28 3 uint24

An exciting thing has happened. The table now shows Max datafile length: 281474976710654 and Max keyfile length: 4398046510079; that is, the maximum data file (MYD) size has grown to roughly 256 TB, so the file system's 2 TB ceiling becomes the effective limit, and the maximum index file (MYI) size is now roughly 4 TB.

Therefore, the default 4 GB limitation has been broken through. The principle is actually very simple. Suppose you have a 10-page notebook whose table of contents needs only one digit per entry (0-9 is enough to address 10 pages). If you add two more sheets and the book grows to 12 pages, a one-digit entry can no longer address the last pages, and an error results. The values in the ALTER statement above are deliberately generous, to be sure of succeeding in one pass (each ALTER run is too slow to allow casual experiments). They in effect tell the database: this book has 1000000000 pages, averaging 15000 bytes each. The database then knows it is dealing with a huge book and spares no effort to set aside, say, 100 pages for the table of contents; the new contents can address everything in the book, and the error disappears.
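As a back-of-the-envelope check, assume (as the MySQL documentation suggests) that the server picks the smallest row pointer able to address MAX_ROWS * AVG_ROW_LENGTH bytes. The hint above works out to 1000000000 * 15000 = 15,000,000,000,000 bytes, and:

mysql> SELECT CEILING(LOG2(1000000000 * 15000) / 8) AS pointer_bytes;

returns 6, matching the "Datafile pointer (bytes): 6" line in the myisamchk output above. A 6-byte pointer addresses 2^48 bytes, which is essentially where the new Max datafile length of 281474976710654 comes from.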

The only drawback is that the "table of contents" occupies a little more space; here the roughly 4 GB data file grew by only about 1 MB, a very satisfying trade.
