Oracle Advanced Function Application

I. Grouping functions
1. ROLLUP
--Produces subtotals and a grand total on top of the standard grouping and its higher-level dimensions.
The Oracle GROUP BY clause supports ROLLUP and CUBE extensions in addition to the basic syntax. With ROLLUP(a, b, c), the rows are grouped by (a, b, c), then by (a, b), then by (a), and finally the whole table is aggregated into a grand total. With CUBE(a, b, c), the rows are grouped by (a, b, c), then by (a, b), (a, c), (b, c), (a), (b), (c), and finally the whole table is aggregated. GROUPING_ID() can be used to make the output more readable (an explicit UNION ALL version of the first example is sketched after it):

CALL vpd_pkg.set_context_compid('-1');
--1)
SELECT a.vpd_compid, to_char(a.ttime,'MM'), COUNT(*)
FROM XTBILL2011 a
GROUP BY ROLLUP(a.vpd_compid, to_char(a.ttime,'MM'));
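--A minimal sketch of the expansion described above: ROLLUP over the two expressions returns the same
--rows as the UNION ALL of the three grouping levels. Table and column names are reused from the
--query above; the tmonth/cnt aliases are only illustrative.
SELECT a.vpd_compid, to_char(a.ttime,'MM') AS tmonth, COUNT(*) AS cnt
FROM XTBILL2011 a
GROUP BY a.vpd_compid, to_char(a.ttime,'MM')   --level (vpd_compid, month)
UNION ALL
SELECT a.vpd_compid, NULL, COUNT(*)
FROM XTBILL2011 a
GROUP BY a.vpd_compid                          --level (vpd_compid)
UNION ALL
SELECT NULL, NULL, COUNT(*)
FROM XTBILL2011 a;                             --grand total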
--2) Partial ROLLUP grouping
--On top of the standard grouping, a subtotal is added for a.typeid only
SELECT a.dwdh, a.year, a.typeid, COUNT(*)
FROM Xtywbill a
GROUP BY a.dwdh, a.year, ROLLUP(a.typeid);

2. CUBE
--ROLLUP only rolls up "from right to left"; if subtotals are needed across every combination of dimensions, use CUBE
--1)
SELECT a.vpd_compid, to_char(a.ttime,'MM'), COUNT(*)
FROM XTBILL2011 a
GROUP BY CUBE(a.vpd_compid, to_char(a.ttime,'MM'));
--2) Partial CUBE: drops the grand total and any unwanted subtotals
SELECT a.vpd_compid, to_char(a.ttime,'MM'), COUNT(*)
FROM XTBILL2011 a
GROUP BY a.vpd_compid, CUBE(to_char(a.ttime,'MM'));

3. GROUPING SETS
--Used when only the subtotals of individual dimensions are of interest
--GROUP BY GROUPING SETS(a, b, c) is equivalent to the UNION ALL
--of the three groupings GROUP BY a, GROUP BY b and GROUP BY c
--1)
SELECT a.vpd_compid, to_char(a.ttime,'MM'), COUNT(*)
FROM XTBILL2011 a
GROUP BY GROUPING SETS (a.vpd_compid, to_char(a.ttime,'MM'));
--2) Partial GROUPING SETS
--subtotals on top of a standard grouping, keeping only the subtotals of interest
SELECT a.dwdh, a.year, a.typeid, COUNT(*)
FROM Xtywbill a
GROUP BY a.dwdh, GROUPING SETS (a.year, a.typeid);
SELECT a.vpd_compid, to_char(a.ttime,'MM'), COUNT(*)
FROM XTBILL2011 a
GROUP BY a.vpd_compid, GROUPING SETS (to_char(a.ttime,'MM'));

4. CUBE and ROLLUP as arguments of GROUPING SETS
--GROUPING SETS on its own only groups by the listed columns and does not produce a grand total. If a grand total is needed,
--ROLLUP or CUBE can be passed as arguments of GROUPING SETS, as in the following statement:
SELECT a.vpd_compid, to_char(a.ttime,'MM') AS ttime, COUNT(*)
FROM XTBILL2011 a
GROUP BY GROUPING SETS (ROLLUP(a.vpd_compid), ROLLUP(to_char(a.ttime,'MM')));
--This statement produces two grand-total rows, because each ROLLUP or CUBE passed to GROUPING SETS
--is combined with the others via UNION ALL. The statement above is therefore equivalent to:
SELECT a.vpd_compid, NULL AS ttime, COUNT(*)
FROM XTBILL2011 a
GROUP BY ROLLUP(a.vpd_compid)
UNION ALL
SELECT NULL, to_char(a.ttime,'MM'), COUNT(*)
FROM XTBILL2011 a
GROUP BY ROLLUP(to_char(a.ttime,'MM'));

5. Summary of the grouping combinations (see the sketch after this list):
--ROLLUP(a,b,c)                   <=> GROUP BY a,b,c; GROUP BY a,b; GROUP BY a; GROUP BY NULL
--ROLLUP(a,(b,c))                 <=> GROUP BY a,b,c; GROUP BY a; GROUP BY NULL
--ROLLUP(a,b), ROLLUP(c)          <=> GROUP BY a,b,c; GROUP BY a,b; GROUP BY a,c; GROUP BY a; GROUP BY c; GROUP BY NULL
--ROLLUP(a,b), GROUPING SETS(c)   <=> GROUP BY a,b,c; GROUP BY a,c; GROUP BY c
--ROLLUP(a), ROLLUP(b), ROLLUP(c) <=> GROUP BY a; GROUP BY b; GROUP BY c; GROUP BY a,b; GROUP BY a,c; GROUP BY b,c; GROUP BY a,b,c; GROUP BY NULL
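--A minimal sketch of the second combination in the list above, ROLLUP(a,(b,c)), reusing the
--Xtywbill columns already used in this article. Treating (year, typeid) as one composite column
--produces only the (dwdh, year, typeid) detail level, the per-dwdh subtotal and the grand total:
SELECT a.dwdh, a.year, a.typeid, COUNT(*)
FROM Xtywbill a
GROUP BY ROLLUP(a.dwdh, (a.year, a.typeid));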
6. GROUPING function
--The GROUPING function tells detail rows apart from subtotal and grand-total rows.
SELECT a.vpd_compid, to_char(a.ttime,'MM'), COUNT(*),
       GROUPING(a.vpd_compid), GROUPING(to_char(a.ttime,'MM')),
       DECODE(GROUPING(a.vpd_compid), 1, 'All units', a.vpd_compid) vpd_compid,
       DECODE(GROUPING(to_char(a.ttime,'MM')), 1, 'All months', to_char(a.ttime,'MM')) ttime
FROM XTBILL2011 a
GROUP BY ROLLUP(a.vpd_compid, to_char(a.ttime,'MM'));
--Filter some of the grouped results
SELECT a.vpd_compid, to_char(a.ttime,'MM'), COUNT(*),
       GROUPING(a.vpd_compid), GROUPING(to_char(a.ttime,'MM')),
       DECODE(GROUPING(a.vpd_compid), 1, 'All units', a.vpd_compid) vpd_compid,
       DECODE(GROUPING(to_char(a.ttime,'MM')), 1, 'All months', to_char(a.ttime,'MM')) ttime
FROM XTBILL2011 a
GROUP BY ROLLUP(a.vpd_compid, to_char(a.ttime,'MM'))
HAVING GROUPING(a.vpd_compid) = 1 OR GROUPING(to_char(a.ttime,'MM')) = 0;

7. GROUPING_ID function
--Combine ROLLUP or CUBE with GROUPING_ID to filter out just the desired grouping levels
SELECT a.vpd_compid, to_char(a.ttime,'MM'), GROUPING_ID(a.vpd_compid, to_char(a.ttime,'MM')), COUNT(*)
FROM XTBILL2011 a
GROUP BY CUBE(a.vpd_compid, to_char(a.ttime,'MM'))
HAVING GROUPING_ID(a.vpd_compid, to_char(a.ttime,'MM')) = 2; --possible values here: 0, 1, 2, 3
--GROUPING_ID(a,b,c) can be used to filter the grouping results. For ROLLUP(a,b,c) the levels are:
Grouping level    Bit vector    GROUPING_ID result
a, b, c           0 0 0         0
a, b              0 0 1         1
a                 0 1 1         3
Grand total       1 1 1         7

8. GROUP_ID function
--Identifies duplicate groupings (the two ROLLUPs below both produce a grand-total row, which GROUP_ID numbers 0 and 1)
SELECT a.vpd_compid, to_char(a.ttime,'MM') AS ttime, GROUP_ID() id, COUNT(*)
FROM XTBILL2011 a
GROUP BY GROUPING SETS (ROLLUP(a.vpd_compid), ROLLUP(to_char(a.ttime,'MM')))
--HAVING GROUP_ID() = 0
;

9. Worked example:
DROP TABLE T;
CREATE TABLE T (
  order_date DATE,          --order date
  order_no   NUMBER,        --order number
  order_book VARCHAR2(10),  --book ordered
  order_fee  NUMBER,        --total order amount
  order_num  NUMBER
);
INSERT INTO T
SELECT to_date('2010-05-01','yyyy-mm-dd') + level,
       TRUNC(dbms_random.value*1000),
       'Book1', 100+level, level
FROM DUAL
CONNECT BY level < 5;
INSERT INTO T
SELECT to_date('2010-06-01','yyyy-mm-dd') + level,
       TRUNC(dbms_random.value*1000),
       'Book2', 200+level, level
FROM DUAL
CONNECT BY level < 5;
--Requirement: within each order_book group, sort the detail rows by date ascending (the order_no order does not matter),
--put the detail rows first, the order_book subtotal after them, and the grand total last.
SELECT DECODE(GROUPING_ID(order_date, order_no, order_book),
              6, order_book || ' subtotal',
              7, 'Total',
              to_char(order_date,'yyyy-mm-dd')) order_date1,
       order_no,
       DECODE(GROUPING_ID(order_date, order_no, order_book), 6, NULL, order_book) order_book1,
       SUM(order_fee) order_fee,
       SUM(order_num) order_num
FROM T
GROUP BY ROLLUP(order_book, (order_date, order_no))
ORDER BY order_book, order_date;
II. Analytic functions
--The OVER() windowing clause: partitioned by compid and ordered by ttime, it computes running statistics over the records
SELECT COUNT(summoney) OVER (PARTITION BY compid ORDER BY ttime) cnt_sal,
       SUM(summoney)   OVER (PARTITION BY compid ORDER BY ttime) sum_sal,
       MAX(summoney)   OVER (PARTITION BY compid ORDER BY ttime) max_sal,
       MIN(summoney)   OVER (PARTITION BY compid ORDER BY ttime) min_sal,
       AVG(summoney)   OVER (PARTITION BY compid ORDER BY ttime) avg_sal
FROM Xtywbill WHERE compid IN ('2601', '2602');
1. ROWS and RANGE change the window frame (a runnable sketch follows):
--frame spanning from the first to the last row of the partition:
--SUM(summoney) OVER (PARTITION BY compid ORDER BY ttime
--                    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) sum_1
--frame covering the rows whose ttime lies within the 365 days before the current row, summed up to the current row:
--SUM(summoney) OVER (PARTITION BY compid ORDER BY ttime
--                    RANGE 365 /*value_expr*/ PRECEDING) sum_2
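--A runnable sketch of the two frames above, reusing the Xtywbill query from the start of this section;
--the aliases are only illustrative, and the numeric RANGE offset assumes ttime is a DATE (offset in days):
SELECT compid, ttime, summoney,
       SUM(summoney) OVER (PARTITION BY compid ORDER BY ttime
                           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS sum_whole_partition,
       SUM(summoney) OVER (PARTITION BY compid ORDER BY ttime
                           RANGE 365 PRECEDING) AS sum_last_365_days
FROM Xtywbill
WHERE compid IN ('2601', '2602');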
2. KEEP usage
SELECT billid, vpd_compid, ttime, zje,
       DENSE_RANK() OVER (PARTITION BY vpd_compid ORDER BY ttime) dense_rank,
       MIN(zje) KEEP (DENSE_RANK FIRST ORDER BY ttime) OVER (PARTITION BY vpd_compid) min_first,
       MIN(zje) KEEP (DENSE_RANK LAST  ORDER BY ttime) OVER (PARTITION BY vpd_compid) min_last,
       MAX(zje) KEEP (DENSE_RANK LAST  ORDER BY ttime) OVER (PARTITION BY vpd_compid) max_last
FROM XTBILL2011;
--MIN(zje) KEEP (DENSE_RANK FIRST ORDER BY ttime) returns the smallest amount among the earliest-dated records.
--MAX(zje) KEEP (DENSE_RANK LAST ORDER BY ttime) returns the largest amount among the latest-dated records.
--Note: KEEP can only be used together with DENSE_RANK FIRST / DENSE_RANK LAST.

3. Statistical functions:
SELECT billid, vpd_compid, ttime, zje,
       MIN(zje) OVER (PARTITION BY vpd_compid ORDER BY ttime) AS comp_min,
       MAX(zje) OVER (PARTITION BY vpd_compid ORDER BY ttime) AS comp_max,
       AVG(zje) OVER (PARTITION BY vpd_compid ORDER BY ttime) AS comp_avg,
       SUM(zje) OVER (PARTITION BY vpd_compid ORDER BY ttime) AS comp_sum,
       COUNT(*) OVER (PARTITION BY vpd_compid ORDER BY zje)   AS count_by,
       COUNT(*) OVER (PARTITION BY vpd_compid ORDER BY zje
                      RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS count_range  --frame bounds assumed; they were garbled in the source
FROM XTBILL2011;

4. Ranking and navigation functions:
--Difference between RANK() and DENSE_RANK():
RANK()    DENSE_RANK()
1         1
1         1
3         2
--LAG(zje,1,0):  value from the previous record; 1 = offset of one row back, 0 = default returned when there is no previous row
--LEAD(zje,1,0): value from the next record; 1 = offset of one row ahead, 0 = default returned when there is no next row
--FIRST_VALUE(): the first value in the window.
--LAST_VALUE():  the last value in the window; note that with the default frame (ascending order up to the current row) it returns the current row's value.
--Rows with NULL values sort last and are therefore treated as the largest values.
--(a runnable sketch of the ranking and LAG/LEAD functions follows the query below)
SELECT billid, vpd_compid, ttime, zje,
       /*RANK() OVER (PARTITION BY vpd_compid ORDER BY zje) AS rank,
       DENSE_RANK() OVER (PARTITION BY vpd_compid ORDER BY zje) AS dense_rank,
       MIN(zje) KEEP (DENSE_RANK FIRST ORDER BY ttime) OVER (PARTITION BY vpd_compid) worst,
       MAX(zje) KEEP (DENSE_RANK LAST ORDER BY ttime) OVER (PARTITION BY vpd_compid) best,
       LAG(zje,1,0) OVER (ORDER BY ttime) AS prev_zje,
       LEAD(zje,1,0) OVER (ORDER BY ttime) AS next_sal,*/
       FIRST_VALUE(billid) OVER (PARTITION BY vpd_compid ORDER BY zje) AS first_value_asc,
       FIRST_VALUE(billid) OVER (PARTITION BY vpd_compid ORDER BY NVL(zje,0) DESC) AS first_value_desc,
       LAST_VALUE(billid) OVER (PARTITION BY vpd_compid ORDER BY NVL(zje,0)) AS last_value_asc,
       LAST_VALUE(billid) OVER (PARTITION BY vpd_compid ORDER BY zje DESC) AS last_value_desc,
       ROW_NUMBER() OVER (PARTITION BY vpd_compid ORDER BY billid) AS row_number
FROM XTBILL2011
ORDER BY vpd_compid, zje DESC;
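--The ranking and LAG/LEAD columns are commented out in the query above; a minimal runnable sketch
--of just those functions on the same XTBILL2011 columns (the aliases are only illustrative):
SELECT billid, vpd_compid, ttime, zje,
       RANK()       OVER (PARTITION BY vpd_compid ORDER BY zje) AS rnk,
       DENSE_RANK() OVER (PARTITION BY vpd_compid ORDER BY zje) AS dense_rnk,
       ROW_NUMBER() OVER (PARTITION BY vpd_compid ORDER BY zje) AS row_num,
       LAG(zje, 1, 0)  OVER (PARTITION BY vpd_compid ORDER BY ttime) AS prev_zje,
       LEAD(zje, 1, 0) OVER (PARTITION BY vpd_compid ORDER BY ttime) AS next_zje
FROM XTBILL2011
ORDER BY vpd_compid, zje;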

5. RATIO_TO_REPORT() function
--Each record's share (percentage) of the total
SELECT billid, vpd_compid, ttime, zje,
       RATIO_TO_REPORT(zje) OVER ()
FROM XTBILL2011;
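--RATIO_TO_REPORT also accepts a PARTITION BY clause; a minimal sketch computing each bill's share
--within its own company as well as of the whole table (the aliases are only illustrative):
SELECT billid, vpd_compid, ttime, zje,
       RATIO_TO_REPORT(zje) OVER ()                        AS pct_of_total,
       RATIO_TO_REPORT(zje) OVER (PARTITION BY vpd_compid) AS pct_of_company
FROM XTBILL2011;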


III. Tree (hierarchical) query functions
--1. ORDER SIBLINGS BY keeps the hierarchy while sorting sibling nodes by compid at each level (parents first, then their children).
--2. LEVEL combined with LPAD/RPAD produces an indented, readable hierarchy.
--3. CONNECT_BY_ISLEAF tells whether the current node is a leaf: 0 = not a leaf, 1 = leaf (a node with no child nodes along the traversal).
--4. CONNECT_BY_ROOT returns, for each row, the value of the given column at the root node of its tree.
--5. SYS_CONNECT_BY_PATH starts traversing at the START WITH node, which is treated as the root, and joins the column values
--   along the path down to the current node into a string using the separator given in the function call.
SELECT LEVEL, LPAD(' ', (LEVEL-1)*3) || a.compid, CONNECT_BY_ISLEAF,
       CONNECT_BY_ROOT compname root_compname, CONNECT_BY_ISCYCLE "CYCLE",
       SYS_CONNECT_BY_PATH(compid, '/') format_compid,
       SYS_CONNECT_BY_PATH(compname, '->') format_compname,
       a.*
FROM CompanyInfo a
START WITH a.compid = '0000'
CONNECT BY NOCYCLE PRIOR a.compid = a.pcompid
ORDER SIBLINGS BY compid;
SELECT * FROM CompanyInfo WHERE compid = '0000' FOR UPDATE;
--Example combining the WITH clause with a tree query:
--Find the nodes at the same level as compid = '0001'
WITH companyinfo_tmp AS
 (SELECT LEVEL lev, LPAD(' ', (LEVEL-1)*3) || a.compid tree_compid, CONNECT_BY_ISLEAF isleaf,
         a.*
  FROM CompanyInfo a
  START WITH a.compid = '0000'
  CONNECT BY PRIOR a.compid = a.pcompid
 )
SELECT * FROM companyinfo_tmp
WHERE lev = (SELECT lev FROM companyinfo_tmp
             WHERE compid = '0001')
ORDER BY compid;

IV. Other functions:
NVL(a,b);            --if a is NULL, return b; otherwise return a (no replacement is made)
NVL2(a,b,c);         --tests whether a is NULL: if it is, return c; otherwise return b
NULLIF(a,b);         --returns NULL when a and b are equal, otherwise returns a; no implicit type conversion is performed
COALESCE(a,b,...,n); --returns the first non-NULL value from left to right, or NULL if every argument is NULL; it short-circuits,
                     --e.g. if a is NULL and b is not, b is returned and the remaining arguments are not evaluated
--(a runnable sketch of the four NULL-handling functions above follows this list)
DECODE / SIGN;
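--A minimal sketch of the four NULL-handling functions listed above, run against DUAL so it needs
--no application tables; the literals and aliases are purely illustrative:
SELECT NVL(NULL, 'b')             AS nvl_result,      --'b'  : the first argument is NULL
       NVL2('a', 'b', 'c')        AS nvl2_result,     --'b'  : the first argument is NOT NULL
       NULLIF('a', 'a')           AS nullif_result,   --NULL : the two arguments are equal
       COALESCE(NULL, NULL, 'n')  AS coalesce_result  --'n'  : the first non-NULL value
FROM DUAL;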
--Application example of DECODE combined with SIGN:
WITH t AS
 (SELECT LEVEL id
  FROM DUAL
  CONNECT BY LEVEL < 10
 )
SELECT id,
       DECODE(-1, SIGN(id-5), 'low',
                  SIGN(id-8), 'mid',
                  'high') results
FROM t;
--One common application of DECODE is fixed row-to-column pivoting (see the sketch below):
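--A minimal sketch of the row-to-column idea, reusing the T order table created in section 9:
--each distinct order_book value becomes a column of summed amounts (the aliases are only illustrative):
SELECT to_char(order_date, 'MM') AS order_month,
       SUM(DECODE(order_book, 'Book1', order_fee, 0)) AS book1_fee,
       SUM(DECODE(order_book, 'Book2', order_fee, 0)) AS book2_fee
FROM T
GROUP BY to_char(order_date, 'MM');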
--A more comprehensive DECODE application: finding bills whose debit and credit amounts do not balance
CALL vpd_pkg.set_context_compid('-1');
SELECT billid,
       SUM(DECODE(bzjd,1,tmoneyf,0)) AS dfje,
       SUM(DECODE(bzjd,1,0,tmoneyf)) AS jfje
FROM (SELECT a.billid,
             DECODE(a.cxbz,1,1-a.bzjd,a.bzjd) AS bzjd,
             SUM(DECODE(a.cxbz,1,-1*a.tmoneyf,a.tmoneyf)) AS tmoneyf
      FROM XTBILLMX2010 a, XTBILL2010 b
      WHERE b.billid = a.billid AND b.bzwc = 1
      GROUP BY a.billid, DECODE(a.cxbz,1,1-a.bzjd,a.bzjd))
GROUP BY billid
HAVING SUM(DECODE(bzjd,1,tmoneyf,0)) <> SUM(DECODE(bzjd,1,0,tmoneyf));