Common heap table deficiencies:
1. Updates carry logging overhead.
2. Deletes are flawed (a delete does not shrink the segment).
3. When records are too large, retrieval is slow.
4. Index back-to-table reads are expensive.
5. Rows inserted in order are hard to read back out in order.
Delete generates the most undo, and also the most redo, because the undo itself must be protected by redo.
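This can be checked per session by reading the 'redo size' statistic before and after the DML. A minimal sketch, assuming a scratch table t_redo_demo copied from dba_objects and select privileges on the v$ views:

-- snapshot the session's redo before the delete
create table t_redo_demo as select * from dba_objects;
select b.name, a.value
  from v$mystat a, v$statname b
 where a.statistic# = b.statistic#
   and b.name = 'redo size';
delete from t_redo_demo;   -- delete every row
commit;
-- read the statistic again; the difference is the redo generated by the delete
select b.name, a.value
  from v$mystat a, v$statname b
 where a.statistic# = b.statistic#
   and b.name = 'redo size';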
Global temporary tables:
1. Efficient removal of records: a transaction-based global temporary table deletes its rows automatically after a commit or when the session disconnects; a session-based global temporary table deletes its rows automatically when the session exits.
2. Data is independent per session: different sessions accessing the same global temporary table see different results.
When a program must repeatedly empty and re-insert records during a single call, consider using a global temporary table.
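A minimal sketch of the two kinds (the names gtt_tx and gtt_sess are illustrative):

create global temporary table gtt_tx   (id number) on commit delete rows;    -- transaction-based: rows vanish at commit
create global temporary table gtt_sess (id number) on commit preserve rows;  -- session-based: rows vanish when the session exits

insert into gtt_tx   values (1);
insert into gtt_sess values (1);
commit;
select count(*) from gtt_tx;    -- 0: removed by the commit
select count(*) from gtt_sess;  -- 1: kept until this session disconnects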
Partition Table
-- Partition deletion (truncate one partition)
alter table range_part_tab truncate partition p9;
-- Partition exchange
alter table range_part_tab exchange partition p9 with table mid_table;
-- Partition split
alter table range_part_tab split partition p_max at (to_date('2013-02-01','yyyy-mm-dd'))
  into (partition p2013_01, partition p_max);
-- Partition merge
alter table range_part_tab merge partitions p2013_01, p_max into partition p_max;
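The maintenance commands above assume a range-partitioned table and an exchange table with the same structure. A hedged sketch of what they might look like (the column list and partition bounds are illustrative, not from the original):

create table range_part_tab
( id        number,
  deal_date date,
  contents  varchar2(4000)
)
partition by range (deal_date)
( partition p9    values less than (to_date('2012-10-01','yyyy-mm-dd')),
  partition p_max values less than (maxvalue)
);

-- the exchange target must have the same columns, but no partitions
create table mid_table
( id        number,
  deal_date date,
  contents  varchar2(4000)
);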
The SCN (System Change Number) guarantees consistent reads of data: it solves the read-consistency problem and avoids the use of locks.
Oracle open and close process:
1. STARTUP NOMOUNT: locates the parameter file (the SGA shared memory segment is allocated and the background processes are started).
2. ALTER DATABASE MOUNT: locates the control file (which records the data files, log files, checkpoint information, and so on).
3. ALTER DATABASE OPEN: locates the data files, log files, and so on.
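The three stages can be stepped through one at a time from SQL*Plus connected as SYSDBA (a sketch; shut the instance down first if it is already open):

startup nomount;        -- stage 1: parameter file read, SGA allocated, background processes started
alter database mount;   -- stage 2: control file read
alter database open;    -- stage 3: data files and online redo logs opened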
Closing is exactly the inverse of the opening process; SHUTDOWN IMMEDIATE fuses all of the steps into one command:
database closed.
database dismounted.
ORACLE instance shut down.

Where to find each file:
show parameter spfile;
show parameter control;
sqlplus "/ as sysdba"
select file_name from dba_data_files;
select group#, member from v$logfile;
show parameter recovery;
set linesize
show parameter dump;
cd /home/oracle/admin/itmtest/bdump
ls -lart alert*

OLTP systems tend to use a smaller block size: if the block is too large, a large number of concurrent queries and updates end up pointing at the same data block, producing hot-block contention.

The leaf block of an index mainly stores the key column value and the ROWID that locates the row in its data block.
Index features:
1. Very low height.
2. The index stores the indexed column value along with the ROWID.
3. The index itself is ordered.
Index optimization for MIN/MAX: INDEX FULL SCAN (MIN/MAX).
select max(object_id) from t;   -- uses INDEX FULL SCAN (MIN/MAX)

select max_id, min_id
  from (select max(object_id) max_id from t) a,
       (select min(object_id) min_id from t) b;   -- each inline view gets its own INDEX FULL SCAN (MIN/MAX)
Index back-to-table read (TABLE ACCESS BY INDEX ROWID), for example:
select * from t where object_id <= 5;
Because the query is SELECT *, after the index column is found in the index, all of the row's other column values must still be fetched from the table. The INDEX RANGE SCAN is a range scan made possible by the low index height; it is quite efficient when few records are returned. The INDEX FAST FULL SCAN scans the entire index, reading multiple index blocks at a time. The INDEX FULL SCAN also scans the entire index, but reads one index block at a time, which returns the data in sorted order; it is useful for COUNT(*)-style queries, although logical reads increase.
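A sketch for watching these access paths with AUTOTRACE, assuming t is a copy of dba_objects and idx_t_object_id is an index built for the demonstration (both names are illustrative):

create table t as select * from dba_objects;
alter table t modify object_id not null;   -- assumes object_id has no NULLs, so the optimizer may count via the index
create index idx_t_object_id on t (object_id);
exec dbms_stats.gather_table_stats(user, 'T', cascade => true)

set autotrace traceonly explain
select * from t where object_id <= 5;        -- expect INDEX RANGE SCAN + TABLE ACCESS BY INDEX ROWID
select count(*) from t;                      -- expect INDEX FAST FULL SCAN (whole index, multiblock reads)
select object_id from t order by object_id;  -- may use INDEX FULL SCAN, handing back the keys already sorted
set autotrace off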
UNION combines two result sets, removes duplicate rows, and sorts by the default rules.
UNION ALL combines two result sets, keeps duplicate rows, and does not sort.
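A quick illustration against DUAL:

select 1 as n from dual union     select 1 from dual;   -- one row: duplicates removed
select 1 as n from dual union all select 1 from dual;   -- two rows: duplicates kept, no sort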
Primary and foreign keys:
1. The primary key itself is an index.
2. The primary key guarantees the uniqueness of its column in the table.
3. A foreign key effectively enforces the integrity of records in the dependent (child) table.
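A minimal sketch of the three points (the dept/emp table names and columns are illustrative):

create table dept
( deptno number primary key,   -- point 1: the primary key is backed by an index
  dname  varchar2(30)
);
create table emp
( empno  number primary key,
  deptno number references dept (deptno)   -- point 3: emp.deptno is limited to values present in dept
);

insert into dept values (10, 'SALES');
insert into dept values (10, 'HR');   -- fails with ORA-00001: the primary key enforces uniqueness (point 2)
insert into emp  values (1, 99);      -- fails with ORA-02291: no matching parent key in dept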
If a table has too many indexes, inserting data can be slow. You can drop the indexes, do the inserts, and then rebuild the indexes; this can save a large part of the time.
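A sketch of the drop / load / rebuild pattern, reusing the illustrative t and idx_t_object_id names from above:

drop index idx_t_object_id;

insert /*+ append */ into t select * from dba_objects;   -- bulk load with no index maintenance
commit;

create index idx_t_object_id on t (object_id);            -- rebuild once, after the load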
Too many indexes: the effect on the three DML operations:
1. The greatest impact is on INSERT: any index slows it down, and more indexes slow it down further.
2. For DELETE there is both good and bad: when deleting a small amount of data from a massive table, an index is very useful for finding the rows, but every extra index must also be maintained, which adds cost.
3. The impact on UPDATE is the smallest (only indexes on the updated columns need maintenance).
Creating an index locks the entire table: while the index is being built the table cannot accept DML, so it is effectively suspended.
alter index index_name monitoring usage;    -- start monitoring whether the index is used
select * from v$object_usage;               -- query whether the index has been used
alter index index_name nomonitoring usage;  -- stop monitoring the index
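The same commands with a concrete (illustrative) index name:

alter index idx_t_object_id monitoring usage;

select index_name, monitoring, used
  from v$object_usage
 where index_name = 'IDX_T_OBJECT_ID';   -- USED flips to YES once a plan touches the index

alter index idx_t_object_id nomonitoring usage;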
A bitmap index can store NULL values. (Disadvantage: when two sessions insert rows with the same index key value, the second insert blocks and cannot get in.)
Two criteria for creating a bitmap index: 1. the indexed column has a large number of repeated values (low cardinality); 2. the table is rarely updated.
Why bitmap indexes apply only to low-cardinality columns and must not be used on columns that are frequently updated:
The reason is that the processed_flag column has only two values, Y and N. For records newly inserted into the table, the column value is N (meaning "not processed"). When another process reads and processes a record, it updates the column value from N to Y. These processes need to find the records whose processed_flag value is N quickly, so the developers know the column should be indexed. They have learned elsewhere that bitmap indexes are suitable for low-cardinality columns (a low-cardinality column is one with only a handful of possible values), so a bitmap index looks like the natural choice.
However, the root of all the problems is this bitmap index. With a bitmap index, a single key points to many rows, possibly hundreds or more. If you update a bitmap index key, the hundreds of records that key points to are effectively locked together with the row you actually updated.
So if someone inserts a new record (with processed_flag = 'N'), the N key in the bitmap index is locked, which effectively locks hundreds of other records whose processed_flag is also N. Meanwhile, the process that wants to read the table and process records cannot change an N record to Y, because updating the column from N to Y requires locking that same bitmap index key. In fact, other sessions that want to insert new records into the table will block as well, because they also need to lock the bitmap index key. In a nutshell, the developers have built a structure that allows at most one person to insert or update at a time!
A simple example illustrates the situation. Two sessions are enough to show how easily the blocking happens:
ora10g> create table t (processed_flag varchar2(1));
Table created.
ora10g> create bitmap index t_idx on t (processed_flag);
Index created.
ora10g> insert into t values ('N');
1 row created.
Now, if you execute the following command in another SQL*Plus session:
ora10g> insert into t values ('N');
This statement "hangs" until a commit is issued in the first blocking session.
"Interview abuse"--oracle knowledge finishing "Harvest, not only Oracle"