Oracle Database Management Experiment
1. Transfer tablespace
2. Create a partition table and a partition Index
3. FGA Fine-Grained Audit
4. Monitoring Index Usage
5. Create a table with special fields
6. Flashback Technology
1. Transport the TSTRANS tablespace of the ocmdb database to the ocmdb02 database.
Scenario: 1. suitable for OLAP/data-warehouse applications; 2. faster data migration than exp/imp, because only the metadata goes through export/import while the data files are simply copied.
Principle: export only the metadata of the tablespace (its structure information) and import it into the target database. Before the export, set the tablespace to read-only, copy its data files to the target directory on the target system, and after the tablespace has been plugged in, set it back to read/write.
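The principle can be sketched as an end-to-end sequence; paths and file names below follow the experiment and should be treated as placeholders:

```sql
-- Source database: freeze the tablespace, then export its metadata
Alter tablespace tstrans read only;
-- host: exp userid=\'/ as sysdba\' transport_tablespace=y tablespaces=tstrans file=/home/oracle/exp_tstrans.dmp
-- Copy the dump file and the data files to the target host (scp, ftp, ...)
-- Target database: plug the metadata in, attaching the copied data files
-- host: imp userid=\'/ as sysdba\' file=/home/oracle/exp_tstrans.dmp transport_tablespace=y tablespaces=tstrans datafiles=...
-- Both databases: reopen the tablespace for normal use
Alter tablespace tstrans read write;
```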
Official documentation: Administrator's Guide -> 8 Managing Tablespaces -> Transporting Tablespaces Between Databases
SQL Reference-> CREATE DIRECTORY
Clean up the environment
Drop user tstrans cascade;
Drop tablespace tstrans including contents and datafiles;
1. initialize the experiment environment
Sqlplus sys/sys@ocmdb1521 as sysdba
Create tablespace tstrans datafile '/u01/oracle/oradata/ocmdb/tstrans01.dbf' size 20M autoextend off;
Create user tstrans identified by tstrans default tablespace tstrans;
Grant connect, resource to tstrans;
Conn tstrans/tstrans
Create table t1 (a int) tablespace tstrans;
Insert into t1 values (100);
Commit;
Create index idx_t1 on t1 (a) tablespace tstrans;
Select index_name, table_name, tablespace_name from user_indexes where table_name = 'T1';
Conn / as sysdba
Select tablespace_name, segment_name, segment_type from dba_segments where tablespace_name = 'TSTRANS'; -- view the objects in the TSTRANS tablespace
2. Check whether the tstrans tablespace violates the constraints.
Exec dbms_tts.transport_set_check('TSTRANS', TRUE);
For example, if a table is in tablespace A and its index is in tablespace B, transporting only A violates the self-containment check: the index would be left behind and the transport set would be incomplete. Solution: transport tablespaces A and B together.
Select * from transport_set_violations; -- lists the violations; no rows means the set is self-contained
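For the table-plus-index case above, the check must be run on the whole set of tablespaces at once; a sketch, where TSTRANS2 is a hypothetical second tablespace holding the index:

```sql
-- Check both tablespaces together so the table/index pair is self-contained
Exec dbms_tts.transport_set_check('TSTRANS,TSTRANS2', TRUE);
Select * from transport_set_violations; -- no rows: the set can be transported as a unit
```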
3. Set TSTRANS to a read-only tablespace, that is, all data files are set to read-only.
Alter tablespace tstrans read only;
4. Use the transport_tablespace=y parameter of exp (or expdp) to export the tablespace metadata, that is, the structure information.
Exp help=y
Exp userid=\'/ as sysdba\' transport_tablespace=y tablespaces=tstrans file=/home/oracle/exp_tstrans.dmp
Using SYS, only the metadata (structure information) of the tstrans tablespace is exported, not the actual data, so the dump file is small.
Scp exp_tstrans.dmp ocm02:/home/oracle -- transfer the dump file to the second machine
Data Pump export method selected
[Expdp help=y
Create directory dir_home as '/home/oracle';
Grant read, write on directory dir_home to public;
Expdp system/oracle directory=dir_home dumpfile=expdp_tstrans.dmp transport_tablespaces=tstrans transport_full_check=y
]
5. Copy the data files (the real data of the tablespace). Because the tablespace is read-only, the data files can be copied without stopping the database.
Scp tstrans01.dbf ocm02:/u01/oracle/oradata/ocmdb02/
6. Import the transportable tablespace with imp or impdp
Ocmdb02 database preparation
Create user tstrans identified by tstrans;
Grant connect, resource to tstrans;
By default, imp plugs the tablespace metadata into the target database and attaches the copied data files.
Imp userid=\'/ as sysdba\' file=/home/oracle/exp_tstrans.dmp fromuser=tstrans touser=tstrans transport_tablespace=y tablespaces=tstrans datafiles=/u01/oracle/oradata/ocmdb02/tstrans01.dbf
[Import using impdp instead]
Create directory dir_home as '/home/oracle';
Grant read, write on directory dir_home to public;
Impdp system/oracle directory=dir_home dumpfile=expdp_tstrans.dmp remap_schema=tstrans:tstrans transport_datafiles=/u01/oracle/oradata/ocmdb02/tstrans01.dbf
-- remap_schema loads objects from one schema into another; transport_datafiles names the data file to plug in
Check whether the tablespace has been imported
Col tablespace_name for a15
Col segment_name for a15
Col segment_type for a15
Select tablespace_name, segment_name, segment_type from dba_segments where tablespace_name = 'TSTRANS';
Conn tstrans/tstrans
Select * from t1;
7. Set the tablespace back to read/write in both the ocmdb and ocmdb02 instances.
Sqlplus sys/sys@ocmdb1521 as sysdba
Conn / as sysdba
Select tablespace_name, status from dba_tablespaces;
Alter tablespace tstrans read write;
Select tablespace_name, status from dba_tablespaces;
Sqlplus sys/sys@ocmdb021521 as sysdba
Conn / as sysdba
Select tablespace_name, status from dba_tablespaces;
Alter tablespace tstrans read write;
Select tablespace_name, status from dba_tablespaces;
2. Create a partition table and a partition Index
Official documentation:
Administrator's Guide -> 17 Managing Partitioned Tables and Indexes
Data Warehousing Guide-> 5 Partitioning in Data Warehouses
Use case: very large data volumes where queries should touch only a small range; partitioning keeps searches efficient.
Advantages: flexible DBA management; partitions can be dropped and loaded individually.
Disadvantage: cross-partition retrieval is inefficient, but a global index can be created to improve it.
Global index: a single index spanning all partitions of the table; by default, dropping a partition invalidates it.
Local index: one index partition per table partition; there are as many index partitions as table partitions.
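For comparison with the global index created later in this section, a local index can be sketched on the same t2_part table (idx_t2_local is a hypothetical name; Oracle automatically creates one index partition per table partition):

```sql
-- Local partitioned index: partitioning follows the table's range partitions
Create index idx_t2_local on t2_part (itemdate) local;
-- One row per index partition; status is tracked per partition
Select index_name, partition_name, status from user_ind_partitions
 where index_name = 'IDX_T2_LOCAL';
```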
Requirement: create a partitioned table with four partitions, each in its own tablespace,
using a non-standard block size of 16K.
1. Set up non-standard blocks
Ocm01
Alter system set db_16k_cache_size=80M; -- buffer cache for 16K non-standard blocks
Show parameter db_16k_cache_size
Purpose: larger blocks reduce the number of physical reads and writes; data that used to take 100 reads may now be fetched in 50.
Create user ocm01 identified by ocm01;
Grant dba to ocm01;
2. Create four tablespaces. One partition corresponds to one tablespace.
Conn ocm01/ocm01
Drop table t2_part;
Drop index idx_t2_part;
Drop tablespace part1 including contents and datafiles;
Drop tablespace part2 including contents and datafiles;
Drop tablespace part3 including contents and datafiles;
Drop tablespace part4 including contents and datafiles;
Create tablespace part1 datafile '/u01/oracle/oradata/ocmdb/disk1/part1_01.dbf' size 50M
extent management local
blocksize 16k;
Create tablespace part2 datafile '/u01/oracle/oradata/ocmdb/disk2/part2_01.dbf' size 50M
extent management local
blocksize 16k;
Create tablespace part3 datafile '/u01/oracle/oradata/ocmdb/disk3/part3_01.dbf' size 50M
extent management local
blocksize 16k;
Create tablespace part4 datafile '/u01/oracle/oradata/ocmdb/disk4/part4_01.dbf' size 50M
extent management local
blocksize 16k;
Select * from v$tablespace;
3. Construct data for the partitioned table
Conn tstrans/tstrans
Drop table t2 purge;
Create table t2 (itemid number(10), name varchar2(10), itemdate date);
Create index idx_t2 on t2 (itemid );
Insert into t2 values (1, 'apple1', to_date('2017-02-01', 'yyyy-mm-dd'));
Insert into t2 values (2, 'apple2', to_date('2017-03-01', 'yyyy-mm-dd'));
Insert into t2 values (3, 'apple3', to_date('2017-04-01', 'yyyy-mm-dd'));
Insert into t2 values (4, 'apple4', to_date('2017-05-01', 'yyyy-mm-dd'));
Insert into t2 values (5, 'apple5', to_date('2017-06-01', 'yyyy-mm-dd'));
Insert into t2 values (6, 'apple6', to_date('2017-07-01', 'yyyy-mm-dd'));
Insert into t2 values (7, 'apple7', to_date('2017-08-01', 'yyyy-mm-dd'));
Insert into t2 values (8, 'apple8', to_date('2017-09-01', 'yyyy-mm-dd'));
Insert into t2 values (9, 'apple9', to_date('2017-10-01', 'yyyy-mm-dd'));
Insert into t2 values (10, 'apple10', to_date('2017-11-01', 'yyyy-mm-dd'));
Commit;
Select * from t2; -- the 10 inserted rows are displayed
4. Create the partitioned table under the ocm01 user
Conn ocm01/ocm01
Create table t2_part
partition by range (itemdate)
(partition p1 values less than (to_date('2017-04-01', 'yyyy-mm-dd'))
   tablespace part1,
 partition p2 values less than (to_date('2017-06-01', 'yyyy-mm-dd'))
   tablespace part2,
 partition p3 values less than (to_date('2017-08-01', 'yyyy-mm-dd'))
   tablespace part3,
 partition p4 values less than (to_date('2017-10-01', 'yyyy-mm-dd'))
   tablespace part4,
 partition other values less than (maxvalue)
   tablespace part4)
as select * from tstrans.t2;
View the partitioned table data
Alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';
Select * from t2_part;
View a partition
Select * from t2_part partition (p1);
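The partition-to-tablespace mapping and the per-partition row counts can be checked with two dictionary/partition queries, a sketch:

```sql
-- Which tablespace holds each partition, and its upper boundary
Select partition_name, tablespace_name, high_value
  from user_tab_partitions
 where table_name = 'T2_PART'
 order by partition_position;
-- Row count of a single partition
Select count(*) from t2_part partition (p1);
```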
5. Create a hash-partitioned global index
Hash-partitioned index: hashes a column value to spread index entries evenly across the partitions; each partition holds a similar number of entries, allowing concurrent reads and writes.
Official documentation: SQL Reference-> CREATE TABLE and CREATE INDEX
Conn ocm01/ocm01
Note: the global index is hash-partitioned on the name column into four equal parts, each stored in the LEOINDEX tablespace.
Create unique index idx_t2_part on t2_part (name, itemid) global partition by hash (name) partitions 4 tablespace LEOINDEX parallel 4;
Select index_name, index_type, table_name from user_indexes where table_name = 'T2_PART';
To truncate a partition while keeping the global index usable:
1. By default, adding or dropping a partition invalidates the global index.
2. Truncating a partition also invalidates the global index unless UPDATE GLOBAL INDEXES is specified; ordinary DELETE statements do not affect it.
Alter table t2_part truncate partition p1 update global indexes;
Select * from t2_part partition (p1);
[Update global indexes]
Check whether the global index is valid
Select index_name, status, partitioned from dba_indexes where table_name = 'T2_PART';
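If a maintenance operation is run without UPDATE GLOBAL INDEXES and the global index goes UNUSABLE, it can be repaired partition by partition; a sketch (SYS_P1 is a placeholder for whatever partition name Oracle generated):

```sql
-- Find the unusable partitions of the global index
Select partition_name, status from user_ind_partitions
 where index_name = 'IDX_T2_PART' and status = 'UNUSABLE';
-- Rebuild each affected index partition
Alter index idx_t2_part rebuild partition SYS_P1;
```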
3. FGA Fine-Grained Audit
Use case: find out who deleted data from a table and with which statement.
Oracle fine-grained auditing is a branch of the security field. It records the history of operations on the database so that every operation is traceable and controllable. FGA is implemented through the DBMS_FGA package.
Official documentation: Security Guide-> 12 logging and Administering Auditing-> Fine-Grained Auditing
PL/SQL Packages and Types Reference-> 40 DBMS_FGA
Requirement: Use the FGA technology to audit the table
1. Clean up the environment
Only an administrator can drop an audit policy.
DBMS_FGA.DROP_POLICY(
  object_schema VARCHAR2, -- schema of the audited object (NULL defaults to the current user)
  object_name VARCHAR2,   -- name of the audited object (table name)
  policy_name VARCHAR2);  -- policy name (must be unique)
Execute DBMS_FGA.DROP_POLICY(object_schema => 'OCM01', object_name => 'T', policy_name => 'AUDIT_T');
Conn ocm01/ocm01
Drop table t purge; -- drop the table to be audited
2. Create a T table to be audited
Conn ocm01/ocm01
Create table t (x number(10), y varchar2(20)); -- create table t, the table to be audited
3. Create an Audit Policy
Conn/as sysdba
Begin
  dbms_fga.add_policy(
    object_schema   => 'OCM01',    -- schema that owns the audited table
    object_name     => 'T',        -- table to audit
    policy_name     => 'AUDIT_T',  -- audit policy name
    audit_condition => 'x >= 100', -- auditing fires only when x >= 100
    audit_column    => 'x',        -- audited column(s), e.g. 'x,y'
    enable          => TRUE,       -- the policy takes effect immediately
    statement_types => 'INSERT,UPDATE,DELETE,SELECT'); -- statements that trigger auditing
End;
/
4. Check whether the FGA policy is effective.
Col object_schema for a20
Col object_name for a15
Col policy_name for a13
Col enabled for a3
Select object_schema, object_name, policy_name, enabled from dba_audit_policies;
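A policy can also be switched off temporarily without dropping it; DBMS_FGA provides DISABLE_POLICY and ENABLE_POLICY for this, a sketch:

```sql
-- Temporarily suspend auditing on OCM01.T
Exec dbms_fga.disable_policy(object_schema => 'OCM01', object_name => 'T', policy_name => 'AUDIT_T');
-- Resume it later
Exec dbms_fga.enable_policy(object_schema => 'OCM01', object_name => 'T', policy_name => 'AUDIT_T');
Select policy_name, enabled from dba_audit_policies;
```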
5. Insert test records that meet the audit trigger condition
Conn ocm01/ocm01
Insert into t values (10, 'first');
Insert into t values (100, 'dfs');
Insert into t values (200, 'dsf');
Insert into t values (300, 'sdf');
Insert into t values (400, 'sdg');
Insert into t values (500, 'sdg');
Insert into t values (600, 'sdgsdg');
Select * from t; -- view the audited table's content
6. View the audit results. By default, they are stored in the SYS.FGA_LOG$ base table.
Note: the audit captures INSERT, UPDATE, DELETE, and SELECT statements and records which statement was executed against which table.
Select obj$schema, obj$name, policyname, lsqltext from SYS.FGA_LOG$;
Set lines 200
Col sql_text for a35
Col object_schema for a15
Select object_schema, object_name, policy_name, sql_text from dba_common_audit_trail;
Show all audit results
4. Monitoring Index Usage
Use case: find unused indexes on a table so they can be dropped
Official documentation: Administrator's Guide-> 16 Managing Indexes-> Monitoring Index Usage
Conn ocm01/ocm01
Drop table t4;
Create table t4 as select * from dba_objects;
Create index idx_t4 on t4 (object_id );
Enable monitoring of the idx_t4 index under ocm01
Alter index idx_t4 monitoring usage;
Stop monitoring the idx_t4 index under ocm01
Alter index idx_t4 nomonitoring usage;
Select * from t4 where object_id = 5000;
View index usage in the v$object_usage view
Set linesize 400 -- widen the display
Col index_name for a10
Col table_name for a10
Col start_monitoring for a20
Col end_monitoring for a20
Select * from v$object_usage;
Note: if the MONITORING column is YES, the index is being monitored; NO means it is not.
USED = YES means the index has been used during the monitoring window; NO means it has not.
START_MONITORING and END_MONITORING show the most recent monitoring interval.
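A small report built on v$object_usage can list the monitored-but-unused indexes and generate the corresponding drop statements; a sketch (review any generated DDL before running it):

```sql
-- Indexes that were monitored but never used in the interval
Select index_name, start_monitoring, end_monitoring
  from v$object_usage
 where monitoring = 'YES' and used = 'NO';
-- Generate (do not blindly execute) the drop statements
Select 'drop index ' || index_name || ';' as ddl
  from v$object_usage
 where used = 'NO';
```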
5. Create a table with special fields
Create a table with ROWID and TIMESTAMP WITH LOCAL TIME ZONE columns and insert data
Official documentation: SQL Reference -> 2 Basic Elements of Oracle SQL -> Datatypes -> search for "ROWID Datatype" and "TIMESTAMP WITH LOCAL TIME ZONE Datatype"
1. Create the ocm01_r table and initialize the data.
Conn ocm01/ocm01
Construct a data environment
Create table ocm01_text
(
  text1 varchar2(10),
  text2 varchar2(10),
  text3 date,
  text4 varchar2(50)
);
Insert records containing the 'Hugh' keyword
Insert into ocm01_text values ('Hugh', 'name', sysdate, 'Hugh');
Insert into ocm01_text values ('hugh2', 'name', sysdate, 'Hugh');
Insert into ocm01_text values ('hugh3', 'name', sysdate, 'Hugh');
Insert into ocm01_text values ('hugh4', 'name', sysdate, 'Hugh');
Commit;
Select * from ocm01_text;
Create table ocm01_r (text rowid, insert_time timestamp with local time zone) tablespace users;
rowid: column of the ROWID type, which stores a row address
timestamp with local time zone: timestamp stored normalized to the database time zone and displayed in the session time zone
2. Insert records into the ocm01_r table
For the records in the ocm01_text table that contain the 'Hugh' keyword, insert each record's rowid and timestamp into the ocm01_r table.
Insert into ocm01_r (text, insert_time) select rowid, current_timestamp from ocm01_text;
Note: the current_timestamp function returns the date and time in the session time zone.
The sysdate function returns the operating-system date and time of the database server.
The length function returns the length of a string.
Commit;
Select * from ocm01_r;
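The stored rowids can be used to fetch the original rows directly, which is the point of keeping them; a rowid lookup is the fastest single-row access path, a sketch:

```sql
-- Join back from the saved rowids to the source rows in ocm01_text
Select t.text1, t.text4, r.insert_time
  from ocm01_text t, ocm01_r r
 where t.rowid = r.text;
```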
Drop table ocm01_r;
6. Flashback Technology
Use case: recovering data that was deleted by mistake
Official documentation: Application Developer's Guide - Fundamentals -> 10 Developing Flashback Applications -> Using Flashback Query (SELECT ... AS OF)
1. Flashback Query: query past data
Principle: flashback query reads the before-images stored in the undo tablespace
Construct Environment
Drop table t5 purge;
Create table t5 (x int );
Insert into t5 values (1);
Insert into t5 values (2);
Insert into t5 values (3);
Commit;
Select * from t5;
2. Record the current time and SCN for the later flashback queries
Alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';
Query the current system time
Select sysdate from dual;
SYSDATE
-------------------
2017-01-14 21:27:23
Query the current system SCN number
Select dbms_flashback.get_system_change_number from dual; -- 331299
3. delete data
Delete from t5 where x = 1;
Commit;
Select * from t5;
4. Build flashback-query views in two ways to see the deleted data
1) Method 1: use a timestamp to build the flashback-query view
Create view v_t5_timestamp as select * from t5
As of timestamp to_timestamp('2017-01-14 21:27:23', 'yyyy-mm-dd hh24:mi:ss');
2) Method 2: Use SCN to construct a flashback query view
Create view v_t5_scn as select * from t5 as of scn 331299;
Note: an SCN is more precise than a timestamp
View the flashback query results
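The two coordinates can be converted into each other with Oracle's mapping functions (the mapping is only retained for a limited period); a sketch using the values recorded above:

```sql
-- Map the recorded SCN to an approximate timestamp, and back again
Select scn_to_timestamp(331299) from dual;
Select timestamp_to_scn(to_timestamp('2017-01-14 21:27:23', 'yyyy-mm-dd hh24:mi:ss')) from dual;
```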
Select * from v_t5_timestamp;
Select * from v_t5_scn;
Drop view v_t5_timestamp;
Drop view v_t5_scn;
At this point, both view-building methods have successfully retrieved the pre-delete data through flashback query.
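Beyond viewing, the deleted rows can be put back with an INSERT ... SELECT against the flashback image, using the SCN recorded above; a sketch:

```sql
-- Re-insert the rows that existed at the old SCN but are gone now
Insert into t5
select * from t5 as of scn 331299
minus
select * from t5;
Commit;
Select * from t5; -- the deleted row is back
```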
2. A table has been dropped several times and must be restored to a specific version.
Preparation: empty the recycle bin first, so that only this test's dropped tables appear in it.
Purge recyclebin;
Create table t6 (x int );
Insert into t6 values (1);
Commit;
Select * from t6;
Drop table t6;
Create table t6 (x int );
Insert into t6 values (1);
Insert into t6 values (2);
Commit;
Select * from t6;
Drop table t6;
Query the recycle bin Data Dictionary
Select object_name, original_name, type from recyclebin;
Show recyclebin
Determine which of the two dropped t6 tables holds the version we need, and restore the t6 that has one record.
Select * from "BIN$7+8KsHBFHoPgQKjAZRUk9w==$0";
Select * from "BIN$7+8KsHBEHoPgQKjAZRUk9w==$0";
Flash back to the specified version and rename it at the same time
Flashback table "BIN$7+8KsHBEHoPgQKjAZRUk9w==$0" to before drop rename to t6_new;
Drop table t6_new;
Summary: a selection from the OCM lab exercises