Scenario 4: Data Warehouse Management (DW)


Parallel degree 4 at 100% -> the statement must obtain the specified degree of parallelism (here 4); if the number of parallel execution servers obtained is less than the requested degree, the operation fails.

PARALLEL_MIN_PERCENT: when set to 100, behaves as described above.
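A minimal sketch of forcing the full degree of parallelism; the table name sales_p and the degree of 4 are illustrative, not from the notes:

ALTER SESSION SET parallel_min_percent = 100;               -- require 100% of the requested DOP
SELECT /*+ PARALLEL(sales_p, 4) */ COUNT(*) FROM sales_p;   -- fails (ORA-12827) if 4 parallel servers cannot be obtained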

ILM: Information Lifecycle Management

Dormant (rarely accessed) data: highly compressed and stored on low-cost media (e.g. tape).

Infrequently accessed data: compressed and stored on low-cost, lower-performance devices.

Frequently accessed data: left uncompressed and placed on high-performance storage.
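As a rough sketch of how these tiers map to SQL; the table, partition, and tablespace names here are hypothetical, not from the notes:

ALTER TABLE sales_p MOVE PARTITION p_2012 TABLESPACE cheap_tbs COMPRESS;    -- old data: compressed, on low-cost storage
ALTER TABLE sales_p MOVE PARTITION p_2016 TABLESPACE fast_tbs NOCOMPRESS;   -- hot data: uncompressed, on fast storage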

Partitioning (see the official VLDB and Partitioning Guide)

If a single table grows beyond roughly 2 GB, partitioning is recommended.

1. Each partition is a separate segment and can be stored in its own tablespace (isolating storage limits the damage from a tablespace failure and spreads I/O).

2. Easier management and backup: add, drop, truncate, split, and merge partitions, and exchange data with a standalone table (see the sketch below).
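A sketch of the maintenance operations listed above on a hypothetical range-partitioned table sales_p; all names and dates are illustrative:

ALTER TABLE sales_p ADD PARTITION p_2017 VALUES LESS THAN (to_date('01-01-2018', 'dd-mm-yyyy'));
ALTER TABLE sales_p TRUNCATE PARTITION p_2013;
ALTER TABLE sales_p DROP PARTITION p_2012;
ALTER TABLE sales_p SPLIT PARTITION p_2016 AT (to_date('01-07-2016', 'dd-mm-yyyy')) INTO (PARTITION p_2016_h1, PARTITION p_2016_h2);
ALTER TABLE sales_p MERGE PARTITIONS p_2014, p_2015 INTO PARTITION p_2014_2015;
ALTER TABLE sales_p EXCHANGE PARTITION p_2017 WITH TABLE sales_stage;   -- swap data with a standalone staging table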

Secure Password Support

1. Passwords are case-sensitive

Eg:

ALTER USER scott IDENTIFIED BY Scott;

ALTER USER scott IDENTIFIED BY 1;

ALTER USER scott IDENTIFIED BY Tiger;

Eg:

DESC dba_profiles

SELECT * FROM dba_profiles;

SELECT * FROM dba_profiles WHERE profile = 'DEFAULT';

Eg:

cd /u01/app/oracle/rdbms/admin

ls

!sql

@?/rdbms/admin/utlpwdmg.sql

SELECT * FROM dba_profiles WHERE profile = 'DEFAULT';

ALTER PROFILE default LIMIT password_verify_function verify_function_11g;

ALTER PROFILE default LIMIT password_verify_function NULL; (disables password complexity verification)

SHOW PARAMETER sec

Eg:

CONN / AS SYSDBA

ALTER PROFILE default LIMIT failed_login_attempts 3;

SELECT * FROM dba_profiles WHERE profile = 'DEFAULT';

SHOW PARAMETER sec

CONN scott/T1

CONN scott/t2

CONN scott/T3 (three consecutive logins with wrong passwords lock the account once failed_login_attempts is exceeded)

Eg: Manually unlock the SCOTT user

CONN / AS SYSDBA

DESC dba_users

SELECT username, account_status FROM dba_users;

ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

Transparent data encryption (TDE)

1. Tablespace encryption

2. Supported by LogMiner (log mining)

3. Supported by logical standby databases

4. Supported by Streams

5. Supported by asynchronous Change Data Capture

6.

3DES: 3 × 56 = 168-bit key

AES: 128-bit key
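A minimal sketch of column-level TDE with an explicit algorithm; the table and column names are hypothetical, and the wallet configured below must be open:

CREATE TABLE cust_secure (
  cust_id NUMBER,
  card_no VARCHAR2(19) ENCRYPT USING 'AES128'   -- '3DES168', 'AES192', 'AES256' are also accepted
);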

Eg: Enable TDE

cd $ORACLE_HOME/network/admin (ENCRYPTION_WALLET_LOCATION: /u01/app/oracle/product/11.2.0/db_1/network/admin)

ls -l

cat sqlnet.ora

/u01/app/oracle/admin/prod/wallet

ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "<password>";

SELECT tablespace_name, encrypted FROM dba_tablespaces;
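A hedged sketch of the tablespace encryption mentioned above; the tablespace name and file path are illustrative:

CREATE TABLESPACE enc_tbs
  DATAFILE '/u01/app/oracle/oradata/prod/enc_tbs01.dbf' SIZE 100m
  ENCRYPTION USING 'AES128'
  DEFAULT STORAGE (ENCRYPT);   -- requires an open wallet; shows ENCRYPTED = YES in dba_tablespaces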

LOB: Large Object implementation

LOB: large objects (LOB data can be stored in a tablespace separate from the table itself)

CLOB: large character data, such as logs (stored inside the database)

BLOB: unstructured binary data such as images, video, and music (stored inside the database)

BFILE: stored as an external file in the operating system

VARCHAR2 is limited to 4000 bytes, which is why larger data needs a LOB type.

Book: In-depth understanding of Oracle 12c database management

Eg:

CREATE TABLE lob1 (id NUMBER, desc_comm CLOB) TABLESPACE users LOB (desc_comm) STORE AS BASICFILE; (creates a table containing a large object column)

SELECT segment_name, segment_type, tablespace_name FROM user_segments;

SHOW PARAMETER secur

Eg:

DESC dba_tablespaces

SELECT tablespace_name, extent_management, segment_space_management FROM dba_tablespaces;

Large objects are not cached in the buffer cache by default, which results in heavy I/O.

SecureFiles can enable caching for LOBs.

SecureFiles compression levels:

LOW: fastest read/write

MEDIUM: the default

HIGH: highest compression ratio

OCM exam point: recording frequency, background data collection

The tablespace holding SecureFile LOBs must use ASSM (Automatic Segment Space Management).
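A minimal sketch of a SecureFile CLOB with caching and MEDIUM compression; the table and column names are hypothetical, and the tablespace must be ASSM-managed:

CREATE TABLE lob2 (id NUMBER, doc CLOB)
  TABLESPACE users
  LOB (doc) STORE AS SECUREFILE (CACHE COMPRESS MEDIUM);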

Raw (bare) devices: no OS cache.

Migrating BasicFile LOBs to SecureFile: use online redefinition (it does not interrupt access to the table); see the sketch below.

INSERT INTO <new_table> SELECT * FROM <old_table>; (copying rows into a table created with SecureFile storage is the simpler, offline alternative)
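A hedged sketch of that online redefinition, reusing the earlier lob1 example; the interim table name lob1_int is hypothetical, and a real migration would also copy dependents (indexes, triggers, constraints):

-- interim table defined with the target SecureFile storage
CREATE TABLE lob1_int (id NUMBER, desc_comm CLOB)
  LOB (desc_comm) STORE AS SECUREFILE (CACHE COMPRESS MEDIUM);

BEGIN
  -- verify the table can be redefined by ROWID (the sample table has no primary key)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'LOB1', DBMS_REDEFINITION.CONS_USE_ROWID);
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'LOB1', 'LOB1_INT',
                                      options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'LOB1', 'LOB1_INT');
END;
/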

Logminer: Log Mining

Flashback Data Archive

Flashback Query (DML only; based on undo segments)

Flashback Table (DML only; based on undo segments)

Flashback Version Query (undo segments)

Flashback Transaction Query (undo segments)

Flashback Drop

Flashback Archive (Flashback Data Archive; DML only; saves all DML operations on a table into an archive so that data changes can be kept long term, complementing Flashback Query and Flashback Table)

Flashback Database (a form of incomplete recovery; restores the database to a point in time in the past)

Recovery from user errors

Physical backup: data files, control files, redo logs

Media recovery:

1. Restore (copy the files back from a backup)

2. Recover (apply redo to roll the files forward)

TSPITR: tablespace point-in-time recovery, an incomplete recovery scoped to a tablespace (a hedged RMAN sketch follows).
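A hedged RMAN sketch of TSPITR; the tablespace name, target time, and auxiliary destination are illustrative:

RMAN> RECOVER TABLESPACE users
        UNTIL TIME "to_date('2016-07-24 15:00:00','yyyy-mm-dd hh24:mi:ss')"
        AUXILIARY DESTINATION '/u01/app/oracle/aux';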

Eg:

SELECT tablespace_name, extent_management, segment_space_management FROM dba_tablespaces;

SELECT name, flashback_on FROM v$database;

SHOW PARAMETER recover

Eg:

1. Create a dedicated tablespace for the flashback archive

2. Create the flashback archive

3. Modify the flashback archive properties

4. View the flashback archive

Eg:

SELECT name FROM v$datafile;

CREATE TABLESPACE ftbs1 DATAFILE '/u01/app/oracle/oradata/prod/ftbs1.dbf' SIZE 100m AUTOEXTEND ON MAXSIZE 2g;

Eg:

CREATE FLASHBACK ARCHIVE farch1 TABLESPACE flash_tbs QUOTA 20m RETENTION 1 YEAR;

Eg: DESC dba_flashback_archive

SELECT flashback_archive_name, retention_in_days, status FROM dba_flashback_archive;

COL flashback_archive_name FOR a20

ALTER FLASHBACK ARCHIVE farch1 SET DEFAULT;

Enable flashback archive on the table:

Eg:

GRANT FLASHBACK ARCHIVE ON farch1 TO scott;

CONN scott/tiger

SELECT * FROM tab;

CREATE TABLE emp1 AS SELECT * FROM emp;

SELECT * FROM emp1;

INSERT INTO emp1 SELECT * FROM emp1;

COMMIT;

INSERT INTO emp1 SELECT * FROM emp1;

COMMIT;

INSERT INTO emp1 SELECT * FROM emp1;

COMMIT;

SELECT COUNT(*) FROM emp1;

ALTER TABLE emp1 FLASHBACK ARCHIVE;

SELECT * FROM tab;

SELECT * FROM emp;

DELETE FROM emp;

COMMIT;

SET AUTOTRACE ON

SELECT * FROM emp AS OF TIMESTAMP to_timestamp('2016-07-24 15:30:51', 'yyyy-mm-dd hh24:mi:ss');

Eg:

SHOW PARAMETER undo

CREATE UNDO TABLESPACE undotbs2 DATAFILE '/u01/app/oracle/oradata/prod/undotbs2.dbf' SIZE 100m;

SHOW PARAMETER undo

ALTER SYSTEM SET undo_tablespace=undotbs2;

DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

SELECT * FROM emp AS OF TIMESTAMP to_timestamp('2016-07-24 15:30:51', 'yyyy-mm-dd hh24:mi:ss');

ORA-01555 (snapshot too old): raised when a consistent read or flashback query needs undo data that has already been overwritten.
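One way to make ORA-01555 less likely for flashback queries, as a sketch with illustrative values:

ALTER SYSTEM SET undo_retention = 3600;          -- keep committed undo for at least an hour
ALTER TABLESPACE undotbs2 RETENTION GUARANTEE;   -- never overwrite unexpired undo, even under space pressure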

Eg:

SELECT COUNT(*) FROM emp1;

DELETE FROM emp1;

COMMIT;

SELECT * FROM emp1;

SELECT * FROM emp1 AS OF TIMESTAMP to_timestamp('2016-07-24 15:21:45', 'yyyy-mm-dd hh24:mi:ss');

SELECT sysdate FROM dual;

SELECT * FROM tab;

SET AUTOTRACE ON

SELECT * FROM emp1 AS OF TIMESTAMP to_timestamp('2016-07-24 15:21:45', 'yyyy-mm-dd hh24:mi:ss');

Note: QUOTA 20m limits the flashback archive to at most 20 MB of space in its tablespace.

Eg:

SELECT * FROM emp1 AS OF TIMESTAMP to_timestamp('2016-07-24 15:18:45', 'yyyy-mm-dd hh24:mi:ss');

Eg:

INSERT INTO emp1 SELECT * FROM emp1 AS OF TIMESTAMP to_timestamp('2016-07-24 15:21:45', 'yyyy-mm-dd hh24:mi:ss');

COMMIT;

INSERT INTO emp SELECT * FROM emp1;

ALTER TABLE emp DISABLE NOVALIDATE CONSTRAINT pk_emp;

INSERT INTO emp SELECT * FROM emp1;

SELECT * FROM emp;

ALTER TABLE emp FLASHBACK ARCHIVE;

INSERT INTO emp SELECT * FROM emp WHERE rownum < 11;

SELECT COUNT(*) FROM emp;

COMMIT;

SET AUTOTRACE TRACEONLY

SELECT * FROM emp AS OF TIMESTAMP to_timestamp('2016-07-24 15:40:42', 'yyyy-mm-dd hh24:mi:ss');

Eg: flush the buffer cache

ALTER SYSTEM FLUSH BUFFER_CACHE;

/

CREATE UNDO TABLESPACE undotbs1 DATAFILE '/u01/app/oracle/oradata/prod/undotbs1.dbf' SIZE 100m AUTOEXTEND ON;

ALTER SYSTEM SET undo_tablespace=undotbs1;

DROP TABLE emp1;

TRUNCATE TABLE emp1;

SELECT * FROM emp AS OF TIMESTAMP to_timestamp('2016-07-24 15:46:42', 'yyyy-mm-dd hh24:mi:ss');

SELECT * FROM emp AS OF TIMESTAMP to_timestamp('2016-07-24 15:45:42', 'yyyy-mm-dd hh24:mi:ss');

INSERT INTO emp1 SELECT * FROM emp1 WHERE rownum < 4;

COMMIT;

SELECT * FROM emp1;

INSERT INTO emp1 SELECT * FROM emp1;

DROP TABLE emp1;

TRUNCATE TABLE emp1;

SELECT * FROM emp1;

SELECT * FROM emp AS OF TIMESTAMP to_timestamp('2016-07-24 15:53:42', 'yyyy-mm-dd hh24:mi:ss');

Interval Partitioning

1. An extension of range partitioning

2. If inserted data does not belong to any existing partition, a new partition is created automatically

3. At least one range partition must be defined

Eg:

SELECT name FROM v$datafile;

CREATE TABLESPACE tbs1 DATAFILE '/u01/app/oracle/oradata/prod/tbs01.dbf' SIZE 50m;

CREATE TABLESPACE tbs2 DATAFILE '/u01/app/oracle/oradata/prod/tbs02.dbf' SIZE 50m;

CREATE TABLESPACE tbs3 DATAFILE '/u01/app/oracle/oradata/prod/tbs03.dbf' SIZE 50m;

su - oracle

CONN scott/tiger

ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

Clone reply

CREATE TABLE pt1 (sales_amt NUMBER, d_date DATE) PARTITION BY RANGE (d_date) INTERVAL (numtoyminterval(1, 'year')) STORE IN (tbs1, tbs2, tbs3) (PARTITION p1 VALUES LESS THAN (to_date('01-01-2013', 'dd-mm-yyyy')) TABLESPACE tbs1);

INSERT INTO pt1 VALUES (1, sysdate + 1000);

COMMIT;

DESC user_segments

COL segment_name FOR a20

SELECT segment_name, partition_name, segment_type, tablespace_name, bytes/1024 FROM user_segments WHERE segment_name = 'PT1';

R

SELECT * FROM pt1;

SELECT segment_name, partition_name, segment_type, tablespace_name, bytes/1024 FROM user_segments WHERE segment_name = 'PT1';

SELECT * FROM pt1 PARTITION (sys_p41);

ALTER TABLE pt1 RENAME PARTITION sys_p41 TO p2; (renames the automatically created partition SYS_P41 to p2)

Without interval partitioning, inserting a row that does not map to any existing partition raises an error (see the sketch below).
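For contrast, a sketch with a plain range-partitioned table; the table name rt1 is hypothetical:

CREATE TABLE rt1 (sales_amt NUMBER, d_date DATE)
  PARTITION BY RANGE (d_date)
  (PARTITION p1 VALUES LESS THAN (to_date('01-01-2013', 'dd-mm-yyyy')));

-- the row falls above the highest boundary, so the insert fails with
-- ORA-14400: inserted partition key does not map to any partition
INSERT INTO rt1 VALUES (1, sysdate + 1000);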

System partitioning

1. Enables application-controlled partitioning on the table

2. The application controls which partition each row goes into and where the data is placed

3. No partition key is defined, unlike other partitioning methods

4. Partition pruning is not supported

5. Data must be inserted into an explicitly specified partition

Eg:

SET AUTOTRACE ON

SELECT * FROM pt1 WHERE sales_amt = 1;

ANALYZE TABLE pt1 COMPUTE STATISTICS;

SELECT * FROM pt1 WHERE sales_amt = 2;

SELECT * FROM pt1 WHERE d_date = '01-oct-12';

R

Performance suffers when frequently accessed data spans multiple partitions.

Eg:

CREATE TABLE apps (app_id NUMBER, app_amnt NUMBER) PARTITION BY SYSTEM (PARTITION p1, PARTITION p2, PARTITION p3);

SET AUTOTRACE ON

SELECT segment_name, partition_name, segment_type, tablespace_name, bytes/1024 FROM user_segments WHERE segment_name = 'APPS';

INSERT INTO apps PARTITION (p3) VALUES (10, 1000);

COMMIT;

SELECT segment_name, partition_name, segment_type, tablespace_name, bytes/1024 FROM user_segments WHERE segment_name = 'APPS';

SET AUTOTRACE OFF

SHOW PARAMETER seg

ALTER SYSTEM SET deferred_segment_creation=FALSE; (with deferred segment creation disabled, even an empty table or partition has a segment allocated)

INSERT INTO apps PARTITION (p1) VALUES (10, 1000);

COMMIT;

SELECT * FROM apps PARTITION (p1);

SELECT * FROM apps PARTITION (p3);

Composite partitioning (an example sketch follows the list)

1. Range-range partitioning

2. List-range partitioning

3. List-hash partitioning

4. List-list partitioning
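A hedged sketch of range-range composite partitioning; all names and boundaries are illustrative:

CREATE TABLE orders_rr (ord_id NUMBER, ord_date DATE, ship_date DATE)
  PARTITION BY RANGE (ord_date)
  SUBPARTITION BY RANGE (ship_date)
  (PARTITION p2015 VALUES LESS THAN (to_date('01-01-2016', 'dd-mm-yyyy'))
     (SUBPARTITION p2015_h1 VALUES LESS THAN (to_date('01-07-2015', 'dd-mm-yyyy')),
      SUBPARTITION p2015_h2 VALUES LESS THAN (MAXVALUE)),
   PARTITION pmax VALUES LESS THAN (MAXVALUE)
     (SUBPARTITION pmax_s1 VALUES LESS THAN (MAXVALUE)));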

Virtual Column Partitioning

1. Virtual column values are computed from a function or expression

2. A virtual column can be created when the table is created or altered

3. Virtual columns consume no storage

4. Indexes can be created on virtual columns

5. You can partition on a virtual column and collect statistics on it

Eg:

CREATE TABLE tmp1 (emp_id NUMBER, sal NUMBER, comm_pct NUMBER, commission GENERATED ALWAYS AS (sal*comm_pct)) PARTITION BY RANGE (commission) (PARTITION p1 VALUES LESS THAN (...), PARTITION p2 VALUES LESS THAN (...), PARTITION p3 VALUES LESS THAN (MAXVALUE));

DESC tmp1

INSERT INTO tmp1 (emp_id, sal, comm_pct) VALUES (10, 5000, 10);

SELECT * FROM tmp1;

SET AUTOTRACE ON

SELECT * FROM tmp1 WHERE emp_id = 10;

SELECT * FROM tmp1 WHERE commission = 50000;

INSERT INTO tmp1 (emp_id, sal, comm_pct) VALUES (10, 5000, 0.1);

COMMIT;

SELECT * FROM tmp1 WHERE commission = 500;

Reference partitioning

1. Based on a referential (foreign key) constraint

2. The partition key is shared between the parent and child tables through that relationship

3. The primary key / foreign key constraint on the partition key is required and enforced

4. The child table inherits the partitioning of the parent table

5. Partitions are maintained automatically (when the parent table gets a new partition, the child table gets one too); any partition maintenance operation on the parent is automatically synchronized to the child

Eg:

CREATE TABLE orders (ord_id NUMBER, ord_date DATE, CONSTRAINT ord_pk PRIMARY KEY (ord_id)) PARTITION BY RANGE (ord_date) (PARTITION p11 VALUES LESS THAN (...), PARTITION p12 VALUES LESS THAN (...), PARTITION pmax VALUES LESS THAN (MAXVALUE));

CREATE TABLE ord_items (line_id NUMBER, ord_id NUMBER NOT NULL, sku NUMBER, quantity NUMBER, CONSTRAINT ord_items_pk PRIMARY KEY (line_id, ord_id), CONSTRAINT ord_items_fk1 FOREIGN KEY (ord_id) REFERENCES orders (ord_id)) PARTITION BY REFERENCE (ord_items_fk1);

DESC user_segments

SELECT segment_name, partition_name, segment_type FROM user_segments WHERE segment_name IN ('ORDERS', 'ORD_ITEMS');

INSERT INTO ord_items VALUES (10, 100, 1000, 100);

COMMIT;

SELECT * FROM ord_items;

SET AUTOTRACE ON

SELECT a.ord_id, a.ord_date, b.sku FROM orders a, ord_items b WHERE a.ord_id = b.ord_id;

ANALYZE TABLE orders COMPUTE STATISTICS;

SELECT a.ord_id, a.ord_date, b.sku FROM orders a, ord_items b WHERE a.ord_id = b.ord_id AND ord_date = '15-jun-12';

CREATE INDEX item_ordid ON ord_items (ord_id);

ANALYZE INDEX item_ordid COMPUTE STATISTICS;

SELECT a.ord_id, a.ord_date, b.sku FROM orders a, ord_items b WHERE a.ord_id = b.ord_id;

SET AUTOTRACE OFF

SELECT * FROM orders PARTITION (p12);

SELECT * FROM ord_items PARTITION (p12);

SELECT * FROM ord_items PARTITION (p11);
