2Gb or Not 2Gb - File Limits in Oracle (Part I)


Translation: Kamus (Seraphim)

Correction: Bloomit

Mail: kamus@itpub.net

Date: 2004-1



Errors are often reported when an export file is imported, when a backup is restored, or when SQL*Loader loads data, and the file involved exceeds 2Gb in size.

I majored in liberal arts and never studied binary, hexadecimal, data structures, operating systems, and so on, so I had only a vague understanding of this problem. Today, while browsing Metalink, I happened upon this article and my interest was piqued, so I decided to read it. Having read it, I feel I understand the problem much better, so I have translated it into Chinese along the way, and I hope it helps you.

I have also tracked down, as far as possible, the other notes and bugs referenced in this article (they are not translated; I may do that later if time permits). However, some of the bugs mentioned could not be examined in detail, either because of the limits of my Metalink account or because the bugs themselves are not public, so those could not be sorted out.



Original source: http://metalink.oracle.com

Doc ID: Note:62427.1    Date created: 1998-09-02    Date last updated: 2003-08-06



Introduction



This article describes the "2Gb" issue. It explains why 2Gb is such a magic number and what you should know if you want to use files larger than 2Gb with Oracle products.



This article focuses on the UNIX operating system, since most 2Gb problems occur on UNIX. Some information about non-UNIX operating systems is also mentioned, and the limits of individual operating systems are listed in the last section of the article.



The topics covered in this article include:

• Why is 2Gb a special number?

• How to use files larger than 2Gb (2Gb+)

• Export and 2Gb

• SQL*Loader and 2Gb

• Oracle and other 2Gb issues

• "Large files" on different operating systems



Why is 2Gb a special number?



Many of the CPUs and APIs in use today use a 32-bit word size, and this affects many operations.



In many cases the standard APIs for file operations use a signed 32-bit word (32-bit signed integer) to represent both the file size and the current position within the file (the byte offset). A signed 32-bit word uses its highest bit to indicate sign (positive or negative), leaving only 31 bits to hold the actual value. The maximum positive value that can be stored in 31 bits is 0x7FFFFFFF in hexadecimal, which is 2147483647 in decimal, just under 2Gb.
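As a minimal illustration of this limit (not part of the original note, just a sketch of the arithmetic in C): a signed 32-bit offset tops out at 2147483647 bytes, and one byte more wraps to a negative value, which is why 32-bit file APIs break at the 2Gb boundary.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Maximum positive value of a signed 32-bit word: 0x7FFFFFFF */
    int32_t max_offset = INT32_MAX;   /* 2147483647 bytes, just under 2Gb */

    printf("largest offset a 32-bit signed word can hold: %d bytes\n", max_offset);

    /* One more byte overflows; the cast through an unsigned type shows the
       wrap-around without relying on undefined signed-overflow behaviour. */
    int32_t one_past = (int32_t)((uint32_t)max_offset + 1u);
    printf("offset after adding one byte: %d\n", one_past);   /* -2147483648 */

    return 0;
}
```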



Files of 2Gb or more are generally referred to as "large files". When you work with values at or above 2147483647 in a 32-bit environment, you are likely to run into problems. To address these problems, newer operating systems have defined a new set of system interfaces that use full 64-bit values for file sizes and offsets. Recent Oracle releases use these new interfaces, but there are still many questions to consider before you decide to use large files.
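As an illustration of what these 64-bit interfaces look like from a program's point of view (a sketch only, assuming a UNIX system that provides the common large-file compilation environment; the file name is made up and none of this comes from the original note):

```c
/* Build with the large-file compilation environment so that off_t is 64 bits,
   for example:  cc -D_FILE_OFFSET_BITS=64 largefile.c -o largefile          */
#define _FILE_OFFSET_BITS 64

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_RDWR | O_CREAT, 0644);   /* hypothetical file */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* With a 64-bit off_t this offset (3Gb) is representable and the seek
       succeeds; with a 32-bit off_t an offset this large cannot even be
       expressed, which is the heart of the 2Gb problem.                   */
    off_t pos = lseek(fd, (off_t)3 * 1024 * 1024 * 1024, SEEK_SET);
    if (pos == (off_t)-1)
        perror("lseek");
    else
        printf("seek to %lld bytes succeeded\n", (long long)pos);

    close(fd);
    return 0;
}
```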



Another special number is 4Gb. The hexadecimal value 0xFFFFFFFF (4294967295 in decimal), taken as an unsigned value, is slightly less than 4Gb. Adding 1 to this value makes the low four bytes 0x00000000 and produces a carry of '1', and that carry is lost in 32-bit arithmetic. So 4Gb is also a special number that can cause problems, and such problems are also mentioned in this article.
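A minimal sketch of this wrap-around (again in C, not from the original note):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t just_under_4gb = 0xFFFFFFFFu;   /* 4294967295, slightly less than 4Gb */

    /* Adding 1 produces a carry out of the top bit; in 32-bit arithmetic the
       carry is simply lost and the value wraps around to zero. */
    uint32_t wrapped = just_under_4gb + 1u;

    printf("0xFFFFFFFF + 1 in 32-bit arithmetic = %u\n", wrapped);   /* prints 0 */
    return 0;
}
```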



What does this mean for using Oracle?



These 32-bit issues affect Oracle in a number of ways. In order to use "large files" you need to meet the following criteria (a small sketch showing how the second criterion can be checked follows this list):

1. An operating system or raw devices that support 2Gb+ files

2. An operating system whose APIs support access to 2Gb+ files

3. A version of Oracle that uses these APIs
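For the second criterion, a program can ask the operating system directly how large a file the filesystem can represent. The sketch below is only an illustration, assuming a POSIX system that supports the _PC_FILESIZEBITS query; the path /u01/oradata is made up.

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Ask how many bits the filesystem under /u01/oradata (hypothetical path)
       uses to represent file sizes: 32 implies a 2Gb ceiling for signed
       offsets, 64 means the filesystem itself can handle "large files".   */
    long bits = pathconf("/u01/oradata", _PC_FILESIZEBITS);

    if (bits == -1)
        perror("pathconf(_PC_FILESIZEBITS)");
    else
        printf("file sizes on this filesystem are represented in %ld bits\n", bits);

    return 0;
}
```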



Most current platforms support large files and typically provide 64-bit APIs for them, and Oracle 7.3 and later releases can use those APIs. However, there is still a great deal of variation between platforms, operating system versions, and Oracle versions. In some cases large files are supported by default; in others you may need to apply patches.



At the time this article was written, some Oracle tools had not yet been updated to use the new APIs, notably the well-known Export utility and SQL*Loader; here again, the situation differs by platform and operating system.



Why use 2Gb+ files?



This section attempts to summarize the advantages and disadvantages of using "large" files/devices for Oracle data files.



Advantages of files larger than 2Gb:

• On most platforms Oracle7 supports up to 1022 data files. With files smaller than 2Gb, this limits the total database size to slightly less than 2044Gb. This is not an issue with Oracle8, which supports far more data files (Oracle8 allows up to 1022 data files per tablespace).

• In practice the maximum Oracle7 database size is well below 2044Gb, because data files are usually spread across separate tablespaces and many of them may be much smaller than 2Gb. Using large files allows the database to grow beyond the 2044Gb limit.

• Using large files means fewer files are needed to manage a database of a given size.

• Fewer file-handling resources are required.



Disadvantages of files larger than 2Gb:

• The unit of recovery is larger. Backing up or restoring a 2Gb file can take anywhere from 15 minutes to 1 hour, depending on the backup media and disk speed, so an 8Gb file will take roughly four times as long.

• Parallelism of backup and recovery operations may be affected.

• You may run into platform-specific limitations; for example, asynchronous I/O may be performed serially above 2Gb.

• Handling files larger than 2Gb may require patches or special configuration, which carries more risk than working with small files. For example, on some releases of AIX, asynchronous I/O is performed serially above 2Gb.



Key points when using files larger than 2Gb:

• Check with your operating system vendor whether large files are supported and how to configure them.

• Check with your operating system vendor what the true maximum file size limit is.

• Ask Oracle Support whether any patches or limitations apply to your particular platform, operating system version, and Oracle version.

• Remember to recheck the points above whenever you consider patching or upgrading the operating system or Oracle.

• Make sure the operating system limits are set correctly for every user who needs to work with large files.

• Make sure all backup scripts can handle large files.

Note that there is still a maximum size for data files even when they exceed 2Gb. The limit depends on your platform and on the Oracle initialization parameter db_block_size. On most platforms (including UNIX, NT, and VMS) the per-file limit is 4194302 * db_block_size.
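As a worked illustration of that formula (the block sizes below are just common examples, not taken from the original note):

```c
#include <stdio.h>

int main(void)
{
    /* Per-datafile ceiling quoted in the note: 4194302 * db_block_size */
    const long long max_blocks = 4194302LL;
    const int block_sizes[] = { 2048, 4096, 8192 };   /* common db_block_size values */

    for (int i = 0; i < 3; i++) {
        long long bytes = max_blocks * block_sizes[i];
        printf("db_block_size=%5d  ->  max datafile ~ %lld bytes (~%.1f Gb)\n",
               block_sizes[i], bytes, bytes / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```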



Document [NOTE:112011.1] describes problems that can arise when changing the size of a data file, especially beyond 2Gb.



General points to note:

Take care when setting up automatic extension of data files. If you are not using "large files", it is wise to cap the maximum size of an auto-extending data file below 2Gb. Also note that because of [BUG:568232] it is possible to specify a MAXSIZE value that exceeds what Oracle can handle, which can lead to an internal error after a resize (reported as ORA-600 [3292]).



On many platforms an Oracle data file contains an additional header block, so creating a 2Gb data file actually requires slightly more than 2Gb of disk space. On UNIX platforms the extra space for the data file header is usually equal to db_block_size, but more space may be needed on raw devices.
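For example (a simple illustration, assuming db_block_size = 8192): a data file created with a nominal size of 2147483648 bytes (2Gb) actually occupies 2147483648 + 8192 bytes on disk, so it no longer fits under a 2Gb operating system or raw device limit, which is one reason data files are often created slightly smaller than the limit itself.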



2Gb-related Oracle errors

The following errors may be seen when a 2Gb limit is hit. They are listed in no particular order.

ORA-01119 Error in creating datafile xxxx

ORA-27044 unable to write header block of file

SVR4 Error: 22: Invalid argument

ORA-19502 write error on file 'filename', blockno x (blocksize=nn)

ORA-27070 skgfdisp: async read/write failed

ORA-02237 invalid file size

KCF: write/open error dba=xxxxxx block=xxxx online=xxxx file=xxxxxxxx file limit

Unix error: EFBIG







