Managing Database Storage
1 block = 8192 bytes
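As a quick sanity check (a minimal sketch; the value depends on your own database), the block size can be confirmed from SQL*Plus:
Show parameter db_block_size
Select value from v$parameter where name = 'db_block_size';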
Case 1: Row migration
1. How the data in the table is stored
CREATE TABLE Test as SELECT * from Hr.employees;
CREATE INDEX idx_test_id on test (employee_id);
With AUTOTRACE set to TRACEONLY STATISTICS, SQL*Plus suppresses the query results and shows only the execution statistics.
Set Autotrace traceonly statistics;
SELECT * FROM Test where employee_id>0;
Force the query to use the index with a hint:
Select /*+ index(test,idx_test_id) */ * from Test where employee_id>0;
SQL> Select /*+ index(test,idx_test_id) */ * from Test where employee_id>0;
107 rows selected.
Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
9096 bytes sent via SQL*Net to client
601 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed
Interpretation: returning all 107 rows took only 19 consistent gets.
Set Autotrace off;
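To relate those 19 consistent gets to the physical size of the table, you can check how many blocks the TEST segment occupies (a rough sketch, assuming the table lives in your own schema; USER_SEGMENTS reports allocated blocks, which is slightly more than the blocks that actually hold rows):
Select segment_name, blocks from user_segments where segment_name = 'TEST';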
ALTER TABLE test modify first_name varchar2(1000);
ALTER TABLE test modify last_name varchar2(1000);
ALTER TABLE test modify email varchar2(1000);
ALTER TABLE test modify phone_number varchar2(1000);
Update test set last_name=lpad(last_name,1000,'*'), first_name=lpad(first_name,1000,'*'), email=lpad(email,1000,'*'), phone_number=lpad(phone_number,1000,'*');
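The update makes every row roughly four times wider than when it was inserted. The original blocks only reserve PCTFREE (10% by default) of their space for row growth, so the grown rows no longer fit and Oracle migrates them to other blocks, leaving a forwarding pointer behind. A small sketch to inspect the setting (not part of the original walkthrough):
Select table_name, pct_free from user_tables where table_name = 'TEST';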
SQL> set autotrace traceonly statistics;
SQL> Select /*+ index(test,idx_test_id) */ * from Test where employee_id>0;
107 rows selected.
Statistics
----------------------------------------------------------
9 recursive calls
0 db block gets
288 consistent gets
0 physical reads
0 redo size
438255 bytes sent via SQL*Net to client
601 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed
Sql> set Autotrace off;
Sql> @/u01/app/oracle/product/11.2.0/db_1/rdbms/admin/utlchain.sql
Table created.
Where the table structure is:
CREATE TABLE chained_rows (
  owner_name         VARCHAR2(30),
  table_name         VARCHAR2(30),
  cluster_name       VARCHAR2(30),
  partition_name     VARCHAR2(30),
  subpartition_name  VARCHAR2(30),
  head_rowid         ROWID,
  analyze_timestamp  DATE
);
List the migrated rows of the TEST table into the CHAINED_ROWS table:
Analyze table test list chained rows into chained_rows;
Select Table_name,count (*) from chained_rows GROUP by TABLE_NAME;
Sql> Select Table_name,count (*) from chained_rows GROUP by TABLE_NAME;
table_name COUNT (*)
------------------------------ ----------
TEST 105
This shows that 105 of the 107 rows in the TEST table have been moved to new blocks: row migration has occurred.
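An alternative way to see the same information without the CHAINED_ROWS table is the CHAIN_CNT column of USER_TABLES, which is populated by a full ANALYZE (a sketch, not used in the original walkthrough; CHAIN_CNT counts both migrated and chained rows):
Analyze table test compute statistics;
Select table_name, num_rows, chain_cnt from user_tables where table_name = 'TEST';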
How do we eliminate the row migration?
1. Place rows that have been migrated into a temporary table
CREATE TABLE Test_temp as SELECT * from Test where rowid in (select Head_rowid from chained_rows);
Sql> Select COUNT (*) from test_temp;
COUNT (*)
----------
105
2. Delete the migrated rows
Select rowid, employee_id from Test;
Delete from Test where rowid in (select head_rowid from chained_rows);
SQL> Delete from Test where rowid in (select head_rowid from chained_rows);
105 rows deleted.
Sql> Select COUNT (*) from test;
COUNT (*)
----------
2
3. Insert the saved rows back from the temporary table
INSERT INTO Test select * from Test_temp;
SQL> Select COUNT (*) from test;
COUNT (*)
----------
107
Commit;
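For reference, a simpler way to eliminate row migration (not the method used above) is to rebuild the table in place with ALTER TABLE ... MOVE; because the move re-inserts every row, the index becomes unusable and has to be rebuilt afterwards:
ALTER TABLE test move;
ALTER INDEX idx_test_id rebuild;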
TRUNCATE TABLE chained_rows;
Check whether any migrated rows remain:
Analyze table test list chained rows into chained_rows;
Select Table_name,count (*) from chained_rows GROUP by TABLE_NAME;
Set Autotrace traceonly statistics;
Select /*+ index(test,idx_test_id) */ * from Test where employee_id>0;
The result is:
SQL> Select /*+ index(test,idx_test_id) */ * from Test where employee_id>0;
107 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
116 consistent gets
0 physical reads
0 redo size
437625 bytes sent via SQL*Net to client
601 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed
Conclusion: before eliminating the migration the query needed 288 consistent gets; after eliminating it, the same query reads only 116 blocks.
Case 2: Row chaining (a single row is too large to fit in one 8192-byte block)
TRUNCATE TABLE chained_rows;
drop table Test_temp Purge;
drop table test Purge;
Set Autotrace off;
CREATE TABLE Test as SELECT * from Hr.employees;
CREATE INDEX idx_test_id on test (employee_id);
Set Autotrace traceonly statistics;
Select /*+ index(test,idx_test_id) */ * from Test;
Set Autotrace off;
ALTER TABLE test modify last_name varchar2(2000);
ALTER TABLE test modify first_name varchar2(2000);
ALTER TABLE test modify email varchar2(2000);
ALTER TABLE test modify phone_number varchar2(2000);
Update test set last_name=lpad('1',2000,'*'), first_name=lpad('1',2000,'*'), email=lpad('1',2000,'*'), phone_number=lpad('1',2000,'*');
Commit;
sql> desc test;
Name Null? Type
----------------------------------------- -------- ----------------------------
employee_id Number (6)
First_Name VARCHAR2 (2000)
Last_Name not NULL VARCHAR2 (2000)
EMAIL not NULL VARCHAR2 (2000)
Phone_number VARCHAR2 (2000)
Hire_date not NULL DATE
job_id not NULL VARCHAR2 (10)
SALARY Number (8,2)
commission_pct Number (2,2)
manager_id Number (6)
department_id Number (4)
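With four columns now padded to 2000 bytes each, a single row can no longer fit into an 8 KB block. As a rough check (a sketch; VSIZE returns the number of bytes Oracle stores for a column value), the padded columns of one row can be added up:
Select vsize(first_name)+vsize(last_name)+vsize(email)+vsize(phone_number) as padded_bytes from test where rownum=1;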
Set Autotrace traceonly statistics;
Select /*+ index(test,idx_test_id) */ * from Test;
107 rows selected.
Statistics
----------------------------------------------------------
recursive calls
0 db block gets
399 consistent gets
0 physical reads
0 redo size
869120 bytes sent via SQL*Net to client
601 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed
Set Autotrace off;
Analyze table test list chained rows into chained_rows;
Select Table_name,count (*) from chained_rows GROUP by TABLE_NAME;
Sql> Select Table_name,count (*) from chained_rows GROUP by TABLE_NAME;
table_name COUNT (*)
------------------------------ ----------
TEST 214
So how do we eliminate row chaining? One option is to move the table into a tablespace with a larger block size, so that a whole row fits into a single block. A tablespace with a non-default block size requires a matching buffer cache, so db_16k_cache_size is configured first:
Sql> Show Parameter 16k
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
Db_16k_cache_size Big integer 0
alter system set db_16k_cache_size=20m;
Select name from V$datafile;
Sql> select name from V$datafile;
NAME
--------------------------------------------------------------------------------
+data/orcl/datafile/system.256.943301251
+data/orcl/datafile/sysaux.257.943301251
+data/orcl/datafile/undotbs1.258.943301251
+data/orcl/datafile/users.259.943301251
+data/orcl/datafile/example.265.943301433
Create a 16 KB tablespace (the default block size of this database is 8192 bytes):
Create tablespace tbs_16k blocksize 16k datafile '+data/orcl/datafile/tbs.dbf' size 10m;
ALTER TABLE test move tablespace tbs_16k;
TRUNCATE TABLE chained_rows;
Analyze table test list chained rows into chained_rows;
Select Table_name,count (*) from chained_rows GROUP by TABLE_NAME;
Because the table was just moved to a new tablespace, every row got a new rowid, so the index on TEST is now unusable and has to be rebuilt.
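To confirm this before rebuilding, the index status can be checked (a quick sketch; after a MOVE the status typically shows UNUSABLE):
Select index_name, status from user_indexes where table_name = 'TEST';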
Alter index IDX_TEST_ID rebuild;
Set Autotrace traceonly statistics;
Select /*+ index(test,idx_test_id) */ * from Test;
SQL> Select /*+ index(test,idx_test_id) */ * from Test;
107 rows selected.
Statistics
----------------------------------------------------------
recursive calls
0 db block gets
177 consistent gets
0 physical reads
0 redo size
867337 bytes sent via SQL*Net to client
601 bytes received via SQL*Net from client
9 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
107 rows processed
Conclusion: to read the same 107 rows, the query needed 399 consistent gets before the move and only 177 after moving the table to the 16 KB tablespace.
This article is from the "Liang blog"; original source: http://7038006.blog.51cto.com/7028006/1930684