The term "Myth" refers to the basic situation of Oracle's behaviors that have never been true or have been true, but are not true. The root cause of most Oracle mythology is the result of a technology change.
Most people think that many of today's Oracle myths are true in their time (for example, "Disk Load Balancing is very important to performance "), however, when both the hardware and Oracle software were improved, they all turned into myth.
We should not forget that Oracle technology has been around for more than 15 years, and the technology in 1989 is quite different from today's technology. Fortunately, most Oracle professionals fully understand the constant changes in Oracle mythology, and the correct advice on how to become no longer correct today has become a mythical lie.
Old Oracle myths
Many old Oracle techniques were genuinely useful in the past but became myths when the technology changed. The confusion is compounded by the thousands of Oracle shops still running old releases of the Oracle software on old hardware. Let's look at some of these old myths.
Myth: Objects perform better when stored in a single extent
Oracle University taught the compress=y export option in the early 1990s as a way to improve the performance of the resulting tables. Today, locally managed tablespaces (LMTs) make this advice obsolete.
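As a rough illustration (the user, file, and tablespace names below are hypothetical, not from the original article), the old advice relied on the export utility's COMPRESS=Y flag to consolidate a segment into one large initial extent, whereas a locally managed tablespace with uniform extents removes the motivation for doing so:

    -- Old approach, run from the OS shell: export with extent compression, then re-import
    --   exp scott/tiger file=emp.dmp tables=emp compress=y
    -- Modern alternative: a locally managed tablespace with uniform extents,
    -- which makes single-extent "reorganization" unnecessary.
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/app_data01.dbf' SIZE 500M
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;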
Myth: The data buffer cache hit ratio should be kept above 90%
This myth, too, was promoted by Oracle in the early 1990s. At that time almost every Oracle database was I/O-bound, and the size of the SGA was limited by 32-bit server technology. Products built on Oracle, such as SAP, also stated in their manuals that the data buffer cache hit ratio should exceed 90%. Oracle author Robert Freeman put it this way:
Cases can be made to prove almost anything. Given the right piece of underlying evidence, I can prove that the buffer cache hit ratio is meaningless, or I can prove that it is the most important thing in the world.
There are scripts you can run against your database to generate any data buffer cache hit ratio you want. So does the ratio sound like a myth? Oracle does not seem to think so: ratio-based recommendations are the foundation of the automatic memory management features in Oracle 10g, as well as of the advice produced by v$db_cache_advice.
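As a minimal sketch (not from the original article), the classic hit-ratio calculation can be run against v$sysstat, and v$db_cache_advice shows the ratio-driven advice Oracle itself produces:

    -- Classic buffer cache hit ratio from v$sysstat
    SELECT ROUND(1 - (phy.value / (db.value + con.value)), 4) AS buffer_cache_hit_ratio
      FROM v$sysstat phy, v$sysstat db, v$sysstat con
     WHERE phy.name = 'physical reads'
       AND db.name  = 'db block gets'
       AND con.name = 'consistent gets';

    -- Oracle's own cache-size advice for the default buffer pool
    SELECT size_for_estimate, estd_physical_read_factor, estd_physical_reads
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT'
       AND advice_status = 'ON';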
There are still other Oracle myths in circulation, so let's take a look at them.
Current Oracle myths
Modern Oracle myths arise largely because Oracle technology keeps changing while some Oracle professionals fail to adapt to the change.
Myth: There is no need to separate indexes from tables
This myth has its roots in Oracle's advice from the early 1990s, when disk contention was a major topic. Until fairly recently, separating indexes from their tables was considered sound practice and an accepted way to improve performance.
This was partly because a table and its index contend with each other if they sit on the same disk. Moving the index into a tablespace on a separate disk, not merely into a separate tablespace, usually improved performance.
The main counter-argument, supported by 10046 traces of single-user systems, is that access to a table and its index within a query is not asynchronous but a serial process. However, even on a single-user system this ignores the head movement and disk latency involved in reading the index and the table, and in a multi-user environment it also ignores the effect of many users contending for co-located tables and indexes.
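For reference, a 10046 trace of the kind cited in that argument is typically enabled at the session level along these lines (a sketch, not from the original article; level 8 adds wait events to the trace):

    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- ... run the statements under test, then switch the trace off ...
    ALTER SESSION SET EVENTS '10046 trace name context off';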
Today, with properly striped RAID, many of the arguments about co-location are moot. Splitting tables and indexes into separate tablespaces still makes maintenance easier, however: discrete tablespaces make it possible to track I/O rates for specific objects or object types, and they allow the use of multiple block sizes.
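As a minimal sketch of the I/O tracking benefit just mentioned (not a query from the original article), joining v$filestat to dba_data_files shows physical reads and writes per tablespace:

    SELECT df.tablespace_name,
           SUM(fs.phyrds)  AS physical_reads,
           SUM(fs.phywrts) AS physical_writes
      FROM v$filestat fs, dba_data_files df
     WHERE fs.file# = df.file_id
     GROUP BY df.tablespace_name
     ORDER BY physical_reads DESC;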
Myth: Frequently updated tables and indexes rarely need to be reorganized
This myth stems from statements by Oracle experts claiming that Oracle indexes are always balanced and that rebuilding brings them little benefit. Let's look at one such statement, which to some extent helps us understand how fragmentation is viewed:
Unless you want to be trapped in an endless cycle of reorganizing, then reorganizing again, and again ...... find the root cause instead.
In a perfect world you would build the index once, with exactly the right parameters, and never rebuild it. I'm afraid that never happens in the real world. It is like deciding to clean your room only once while the room is full of noisy teenagers; that is pointless.
Today we understand that tables and indexes subject to frequent concurrent inserts, updates, and deletes can quickly take on a sub-optimal structure, and may need to be reorganized to reduce the I/O done by multi-block scan operations (using Oracle's dbms_redefinition package, alter index rebuild, alter index coalesce, or even alter table move, depending on availability requirements). The notion of index balance also cuts two ways: a B-tree is always height-balanced, yet it can still become sparse or right-skewed, growing wider or unevenly loaded.
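As a minimal sketch of the reorganization options just mentioned (the table, index, and tablespace names are hypothetical), the choice among them depends mainly on availability requirements:

    ALTER INDEX emp_pk REBUILD ONLINE;       -- rebuild while the index stays available for queries
    ALTER INDEX emp_pk COALESCE;             -- merge adjacent sparse leaf blocks in place
    ALTER TABLE emp MOVE TABLESPACE users;   -- rebuild the table segment; its indexes become
                                             -- unusable and must be rebuilt afterwards

For a fully online table reorganization, the dbms_redefinition package mentioned above is the alternative.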
Myth: Multiple block sizes do not improve performance
This myth persists because multiple block sizes were originally introduced to support transportable tablespaces, and some people fail to see their other important benefits. The biggest benefits of different block sizes are more efficient use of limited memory regions (db_cache_size, db_32k_cache_size, and so on) and the intelligent segregation of objects, which can reduce the logical I/O performed by multi-block scans.
Today, Metalink notes cite the multiple block size parameters as an important part of Oracle performance tuning, and experts such as Robin Schumacher have shown that Oracle indexes build a better-optimized B-tree structure in a larger block size. In addition, segregating high-DML indexes, or small rows fetched by random single-row reads through a unique index, into their own cache can reduce the required db_cache_size, and physical I/O drops because more of the relevant blocks fit into the buffer.
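As a minimal sketch of how a separate 32K block size is typically put to use for indexes (the cache size, file path, and object names here are hypothetical), one allocates a 32K cache, creates a 32K tablespace, and rebuilds an index into it:

    -- A 32K buffer cache must exist before a 32K tablespace can be created
    ALTER SYSTEM SET db_32k_cache_size = 128M;

    CREATE TABLESPACE idx_32k
      DATAFILE '/u01/oradata/idx_32k_01.dbf' SIZE 1G
      BLOCKSIZE 32K
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 4M;

    -- Move an index into the large-block tablespace
    ALTER INDEX order_lines_ix REBUILD TABLESPACE idx_32k;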
Some people have tried to prove this assertion with small, artificial single-user experiments, claiming that multiple block sizes cannot bring any benefit to real-world databases. Real-world shops, however, report very different results with multiple block sizes and with 32K blocks used for indexes:
"I recently liked to pay attention to the 32 KB index issue: our client (200 GB +) saw a 20% reduction in I/O from this simple change ......", Steve Taylor, technical service manager of EMEA, said.
So here we see how technology change has turned techniques that were effective 15 years ago into myths, and how single-user test scripts lead to incorrect conclusions. In the same way, continuing technology change will keep creating new, modern myths.
Oracle myths in the making
Myths also persist when Oracle professionals observe different database behavior and draw conflicting conclusions from it.
We also see respected figures, some of them promoted by Oracle itself, putting forward ideas that publicly mislead others about Oracle performance and thereby create new myths:
"Consistency is impossible and will not be affected by the settings of any optimizer ." They have an influence on how the optimizer handles things, but they have no influence on how things actually work.
Of course, changing the values of optimizer_mode, optimizer_index_cost_adj, and optimizer_index_caching can change whether the optimizer chooses a full-scan or an index-access execution plan, and that in turn has a direct impact on the number of consistent gets.
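As a minimal sketch (the table, column, and values are hypothetical), the effect is easy to observe in SQL*Plus: lowering optimizer_index_cost_adj makes index access look cheaper, which can flip the plan and, with it, the consistent gets reported by autotrace:

    SET AUTOTRACE ON
    ALTER SESSION SET optimizer_index_cost_adj = 100;  -- default costing
    SELECT * FROM orders WHERE customer_id = 42;       -- may full-scan; note the consistent gets

    ALTER SESSION SET optimizer_index_cost_adj = 10;   -- index access now looks ten times cheaper
    SELECT * FROM orders WHERE customer_id = 42;       -- may switch to the index, changing consistent gets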
Today, Oracle professionals are divided into two camps, each with a very different view of Oracle performance tuning, and each camp regards the other as the source of ongoing Oracle myths.
The myth of "rule of thumb"-many Oracle professionals believe that "rule of thumb" is very dangerous and understand that it can prove wrong if the rule of thumb is true, even in a single human testing, empirical rules are no longer scientific, and therefore useless.
"Script kiddies" myth-this myth is about running a single-user SQL * Plus script to "prove" the Oracle running method, which is almost always wrong in a multi-user database.