Oracle PL/SQL Practices

Source: Internet
Author: User
Tags: dedicated server


I have been reading *Oracle PL/SQL Practices* over the past few days. To study the performance of SQL statements, I will first look at the impact of cursor processing on code.

1. Row-by-row Processing



The program declares a cursor c1 and then implicitly opens it with a cursor FOR loop. For each row retrieved from cursor c1, the program queries the customers table, fills the first_name and last_name values into variables, and then inserts a row into the top_sales_customers table.
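The original post shows Code Listing 1-1 as an image that has not survived in this copy. A minimal sketch of the row-by-row pattern it describes might look like the following; the table and column names (sales, customers, top_sales_customers, sale_price) are assumptions for illustration:

```sql
DECLARE
  l_first_name customers.first_name%TYPE;
  l_last_name  customers.last_name%TYPE;
  CURSOR c1 IS
    SELECT customer_id, sale_price
      FROM sales
     WHERE sale_price > 100000;
BEGIN
  FOR c1_rec IN c1 LOOP
    -- Lookup query executed once per fetched row (the first 0.1-second statement).
    SELECT first_name, last_name
      INTO l_first_name, l_last_name
      FROM customers
     WHERE customer_id = c1_rec.customer_id;

    -- Single-row INSERT executed once per fetched row (the second 0.1-second statement).
    INSERT INTO top_sales_customers
      (customer_id, first_name, last_name, sale_price)
    VALUES
      (c1_rec.customer_id, l_first_name, l_last_name, c1_rec.sale_price);
  END LOOP;

  COMMIT;
END;
/
```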

Problem 1:

The programming approach in Code Listing 1-1 is very problematic. Even if the SQL statements called in the loop are highly optimized, executing the program will still consume a lot of time. Assume that the query against the customers table takes 0.1 seconds and the INSERT statement takes 0.1 seconds, so each iteration of the loop spends 0.2 seconds executing SQL. If cursor c1 fetches 100,000 rows, the total time is 100,000 multiplied by 0.2 seconds, that is, 20,000 seconds, or roughly 5.5 hours. This program structure is difficult to optimize. For obvious reasons, Tom Kyte calls this processing method "slow-by-slow" processing.

Problem 2:

The code in Code Listing 1-1 also has an inherent problem: the SQL statements called from the PL/SQL loop cause execution to switch repeatedly between the PL/SQL engine and the SQL engine. Each switch between these two environments is called a context switch. Context switches increase running time and add unnecessary CPU overhead, so you should eliminate or reduce them. In general, row-by-row processing should be avoided. A better programming practice is to convert the code in Listing 1-1 into a single SQL statement; Code Listing 1-2 shows the rewrite, which avoids PL/SQL loop processing entirely.
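Code Listing 1-2 itself is also missing from this copy. A sketch of the set-based rewrite, using hypothetical table names (sales, customers, top_sales_customers), might be:

```sql
-- One set-based statement replaces the entire loop; the optimizer can
-- also parallelize it, e.g. via a PARALLEL hint or parallel DML.
INSERT INTO top_sales_customers
  (customer_id, first_name, last_name, sale_price)
SELECT c.customer_id, c.first_name, c.last_name, s.sale_price
  FROM sales s
  JOIN customers c ON c.customer_id = s.customer_id
 WHERE s.sale_price > 100000;

COMMIT;
```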



Besides curing the defects of row-by-row processing, Code Listing 1-2 has further advantages. The rewritten SQL statement can be optimized using parallel execution, and multiple parallel execution processes can greatly reduce the execution time. In addition, the code becomes concise and readable.


2. Nested Row-by-row Processing

In Code Listing 1-3, c1, c2, and c3 are nested cursors. Cursor c1 is the top-level cursor and retrieves data from table t1; c2 is opened with values fetched from cursor c1; c3 in turn is opened with values fetched from cursor c2. An UPDATE statement is executed for each row returned by cursor c3. Even though the UPDATE statement has been optimized to run in only 0.01 seconds, the program's performance is still intolerable because of the deeply nested cursors. Assuming cursors c1, c2, and c3 return 20, 50, and 100 rows respectively, the innermost code runs 20 × 50 × 100 = 100,000 times, and the total execution time of the program exceeds 1,000 seconds. Programs like this usually have to be rewritten completely to be optimized.
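Code Listing 1-3 is not reproduced in this copy. A sketch of the nested-cursor shape described above, with hypothetical join columns (n1, n2) and a hypothetical target table t4, might be:

```sql
DECLARE
  CURSOR c1 IS
    SELECT n1 FROM t1;
  CURSOR c2 (p_n1 NUMBER) IS
    SELECT n2 FROM t2 WHERE n1 = p_n1;
  CURSOR c3 (p_n2 NUMBER) IS
    SELECT id, cnt FROM t3 WHERE n2 = p_n2;
BEGIN
  FOR r1 IN c1 LOOP              -- say 20 rows
    FOR r2 IN c2(r1.n1) LOOP     -- say 50 rows each
      FOR r3 IN c3(r2.n2) LOOP   -- say 100 rows each: 100,000 iterations total
        BEGIN
          UPDATE t4
             SET cnt = cnt + r3.cnt
           WHERE id = r3.id;
          IF SQL%ROWCOUNT = 0 THEN
            RAISE NO_DATA_FOUND;       -- no existing row to update
          END IF;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            INSERT INTO t4 (id, cnt)   -- fall back to inserting a new row
            VALUES (r3.id, r3.cnt);
        END;
      END LOOP;
    END LOOP;
  END LOOP;
  COMMIT;
END;
/
```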


Another problem with the code in Code Listing 1-3 is its update-then-insert logic: it executes an UPDATE statement first and, if the UPDATE raises a NO_DATA_FOUND exception, executes an INSERT statement. This kind of logic can be moved from the PL/SQL engine to the SQL engine by using a MERGE statement. Conceptually, the triple loop in Code Listing 1-3 represents an equi-join between tables t1, t2, and t3. Code Listing 1-4 shows the SQL statement, using the table alias t, rewritten according to that logic. The combined UPDATE-and-INSERT logic is replaced by a MERGE statement; the MERGE syntax provides the ability to update existing rows and insert rows that do not yet exist.
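Code Listing 1-4 is likewise missing here. Assuming hypothetical join columns (n1, n2), value columns (id, cnt), and target table t4, a MERGE of this shape replaces the triple loop and its UPDATE/INSERT branch:

```sql
MERGE INTO t4 tgt
USING (SELECT t3.id, SUM(t3.cnt) AS cnt
         FROM t1
         JOIN t2 ON t2.n1 = t1.n1   -- the equi-joins that the three
         JOIN t3 ON t3.n2 = t2.n2   -- nested cursors performed row by row
        GROUP BY t3.id) t
   ON (tgt.id = t.id)
 WHEN MATCHED THEN
   UPDATE SET tgt.cnt = tgt.cnt + t.cnt    -- update existing rows
 WHEN NOT MATCHED THEN
   INSERT (id, cnt)                        -- insert nonexistent rows
   VALUES (t.id, t.cnt);
```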


Do not write deeply nested cursor code in PL/SQL. Review the logic of such code to see whether it can be replaced by SQL statements.

3. Lookup Queries

Lookup queries are generally used to populate variables or to validate data. Executing a lookup query inside a loop, however, can cause performance problems. In Code Listing 1-5, the highlighted part uses a lookup query to obtain the value of country_name. The program executes that query once for every row fetched from cursor c1, so as the number of rows fetched from c1 grows, the number of lookup-query executions grows with it, making the code inefficient.
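Code Listing 1-5 has not survived in this copy either. The per-row lookup pattern it describes might be sketched as follows; the driving table (emp_details) and its columns are assumptions, while countries and country_name follow the text:

```sql
DECLARE
  l_country_name countries.country_name%TYPE;
  CURSOR c1 IS
    SELECT employee_id, country_id
      FROM emp_details;             -- hypothetical driving table
BEGIN
  FOR r IN c1 LOOP
    -- Lookup query: executed once for every row fetched from c1.
    SELECT country_name
      INTO l_country_name
      FROM countries
     WHERE country_id = r.country_id;

    DBMS_OUTPUT.PUT_LINE(r.employee_id || ': ' || l_country_name);
  END LOOP;
END;
/
```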




The code in Code Listing 1-5 is deliberately simplified: the country_name lookup could in fact be rewritten as a join in the main cursor c1 itself, and converting a lookup query into a join should always be the first choice. In real applications, however, such a rewrite is not always possible. If you cannot rewrite the code to reduce the number of query executions, there is another option: define an associative array to cache the query results, so the array can be reused on subsequent iterations and the number of query executions is effectively reduced. Code Listing 1-6 demonstrates this array-caching technique. Instead of executing a query for every row returned by cursor c1 to obtain country_name, the example uses an associative array named l_country_names to store country_id/country_name key-value pairs. An associative array is similar to an index: any stored value can be accessed through its key. Before performing the lookup query, the EXISTS method checks whether an element matching the country_id key is already in the array. If such an element exists, country_name is fetched from the array and no lookup query is needed; if not, the lookup query is executed and its result is saved in the array as a new element. Note that this technique is best suited to statements with few distinct key values: the smaller the number of unique values in the country_id column, the fewer times the query runs. In the sample schema, the query executes at most 23 times, because the country_id column has only 23 distinct values.
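Code Listing 1-6 is also missing from this copy. The caching technique it demonstrates can be sketched like this; the array name l_country_names and the EXISTS check follow the text, while the driving table (emp_details) and the VARCHAR2(2) key type are assumptions:

```sql
DECLARE
  TYPE t_country_names IS TABLE OF countries.country_name%TYPE
    INDEX BY VARCHAR2(2);                       -- keyed by country_id
  l_country_names t_country_names;
  l_country_name  countries.country_name%TYPE;
  CURSOR c1 IS
    SELECT employee_id, country_id
      FROM emp_details;                         -- hypothetical driving table
BEGIN
  FOR r IN c1 LOOP
    IF l_country_names.EXISTS(r.country_id) THEN
      -- Cache hit: reuse the value without executing the lookup query.
      l_country_name := l_country_names(r.country_id);
    ELSE
      -- Cache miss: run the lookup once, then store the result in the array.
      SELECT country_name
        INTO l_country_name
        FROM countries
       WHERE country_id = r.country_id;
      l_country_names(r.country_id) := l_country_name;
    END IF;

    DBMS_OUTPUT.PUT_LINE(r.employee_id || ': ' || l_country_name);
  END LOOP;
END;
/
```

Because the lookup query runs at most once per distinct country_id, with 23 distinct values it executes at most 23 times no matter how many rows c1 returns.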





Note that the memory for an associative array is allocated in the PGA (Program Global Area) of the dedicated server process on the database server. If thousands of connections each cache intermediate results in such arrays, memory usage will grow significantly. You should understand how much memory each process gains and size the database server to accommodate that growth.

