SQL Optimization | Java Interview Questions

Source: Internet
Author: User
Tags: connection pooling, DBA, joins, mathematical functions, SQL Server, single-table query, rollback, sorts

This is a very comprehensive article on SQL optimization. In day-to-day development we often neglect performance (with small data volumes, the difference in speed is invisible at first), but the more SQL you write, the more you should standardize how you write it.

Original link: http://www.jfox.info/SQL-you-hua.html

By Lee. Last updated: Friday, May 17, 2013

First, stating the database optimization problem

In the early stages of application development, the database holds little data, so when writing SQL statements and complex views you cannot tell good SQL from bad. Once the application goes into production and the data in the database grows, the response speed of the system becomes one of the most important problems to solve. A key aspect of system optimization is optimizing SQL statements: for massive data, the speed difference between a poor SQL statement and a well-written one can reach hundreds of times. Clearly, a system should not merely implement its functions; it should do so with high-quality SQL statements that improve its usability.

In most cases, Oracle uses indexes to traverse tables faster, and the optimizer improves performance primarily based on the defined indexes. However, if the SQL written in the WHERE clause is unreasonable, the optimizer will abandon the index and fall back to a full table scan; such statements are what we generally call poor SQL. When writing SQL, you should understand the principles by which the optimizer discards an index, which helps you write high-performance statements.

  Second, issues to note when writing SQL statements

The following describes in detail the issues to note when writing the WHERE clause of SQL statements. In these WHERE clauses, even though some columns are indexed, poorly written SQL prevents the system from using the index at run time; it falls back to a full table scan, and response time becomes very slow.

1. IS NULL and IS NOT NULL

NULL cannot be used as an index key. For a single-column index, a row whose column value is null does not appear in the index at all; for a composite index, a row is excluded only when all of its indexed columns are null (see point 26 in the second half of this article). This means that if a column contains null values, indexing it will not improve the performance of queries that look for those nulls.

Any statement that uses IS NULL or IS NOT NULL in the WHERE clause prevents the optimizer from using the index.

2. Concatenated columns

For concatenated columns, the optimizer will not use an index, even when the concatenation is compared against a constant. Consider an example: suppose an employee table (employee) stores a worker's given name and surname in two columns (first_name and last_name), and we want to find an employee named Bill Cliton.

Here is a SQL statement that compares the concatenated columns:

SELECT * FROM employee WHERE first_name || ' ' || last_name = 'Bill Cliton';

The statement does find out whether an employee named Bill Cliton exists, but note that the system optimizer cannot use the index created on last_name.

Written as follows, the Oracle optimizer can use the index created on last_name:

WHERE first_name = 'Bill' AND last_name = 'Cliton';
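The effect is easy to observe outside Oracle as well. Below is a minimal sketch using Python's sqlite3 module (the employee table and its contents are invented for illustration): SQLite's EXPLAIN QUERY PLAN reports a full scan for the concatenated predicate and an index search for the split one.

```python
import sqlite3

# Hypothetical employee table with an index on last_name, as in the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (first_name TEXT, last_name TEXT)")
conn.execute("CREATE INDEX idx_last_name ON employee (last_name)")
conn.execute("INSERT INTO employee VALUES ('Bill', 'Cliton'), ('Ada', 'Lovelace')")

def plan(sql):
    # EXPLAIN QUERY PLAN tells us whether SQLite scans the table or uses an index.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Concatenating the columns hides last_name from the optimizer: full table scan.
concat_plan = plan(
    "SELECT * FROM employee WHERE first_name || ' ' || last_name = 'Bill Cliton'")

# Comparing the columns separately lets the index on last_name be used.
split_plan = plan(
    "SELECT * FROM employee WHERE first_name = 'Bill' AND last_name = 'Cliton'")
```

The exact plan text varies by SQLite version, but the first plan reads like "SCAN employee" and the second like "SEARCH employee USING INDEX idx_last_name".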

3. A LIKE statement with wildcard characters (%)

The same example again: suppose the requirement is now to query the employee table for anyone whose surname contains 'cliton'. You could use the following SQL:

SELECT * FROM employee WHERE last_name LIKE '%cliton%';

Because the wildcard (%) appears at the start of the search string, the Oracle optimizer will not use the index on last_name. This is often unavoidable, but keep it in mind: a leading wildcard slows the query down. When the wildcard appears elsewhere in the string, however, the optimizer can still use the index, as in the following query:

SELECT * FROM employee WHERE last_name LIKE 'c%';
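The same contrast shows up in SQLite's query plans. A small sketch (table and data invented; `PRAGMA case_sensitive_like` is needed because SQLite only rewrites a LIKE prefix into an index range under specific collation rules):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# With case-sensitive LIKE, SQLite can rewrite LIKE 'c%' as an index range scan.
conn.execute("PRAGMA case_sensitive_like = ON")
conn.execute("CREATE TABLE employee (first_name TEXT, last_name TEXT)")
conn.execute("CREATE INDEX idx_last_name ON employee (last_name)")
conn.execute("INSERT INTO employee VALUES ('Bill', 'cliton'), ('John', 'smith')")

def plan(sql):
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# A leading wildcard can match anywhere, so no index range exists: full scan.
leading = plan("SELECT * FROM employee WHERE last_name LIKE '%cliton%'")

# A fixed prefix becomes a range (last_name >= 'c' AND last_name < 'd'): index search.
prefix = plan("SELECT * FROM employee WHERE last_name LIKE 'c%'")
```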

4. The ORDER BY statement

The ORDER BY clause determines how Oracle sorts the returned results. It places no special restriction on the sort columns; computed expressions (such as concatenations or additions) may also appear there. But any non-indexed item, or any computed expression, in the ORDER BY clause slows the query down.

Carefully check ORDER BY clauses for non-indexed items or expressions that degrade performance. The fix is to rewrite the ORDER BY so that it can use an index, or to create an index on the column being sorted, and to strictly avoid expressions in the ORDER BY clause.

5. NOT

We often use logical expressions in the WHERE clause, such as greater than, less than, equal to, and not equal to; we can also combine them with AND, OR, and NOT. NOT can be used to negate any logical operator. Here is an example of a NOT clause:

... WHERE NOT (status = 'VALID')

If you use NOT, put the phrase being negated in parentheses and place the NOT operator in front of it. NOT is also hidden inside another logical operator: the not-equal (<>) operator. In other words, NOT is still at work even when the word NOT is not written explicitly in the WHERE clause. Consider:

... WHERE status <> 'INVALID';

A query of this kind can usually be rewritten without NOT. For example, a condition such as salary <> 3000 becomes:

SELECT * FROM employee WHERE salary < 3000 OR salary > 3000;

Although the two queries return the same result, the second is faster: it lets Oracle use the index on the salary column, while the first cannot use an index.


We must write not just SQL that works, but SQL that performs well. What follows is material I have studied, excerpted, and summarized, shared here with you.

(1) Choose the most efficient order of table names (valid only in the rule-based optimizer):
Oracle's parser processes the table names in the FROM clause from right to left, so the table written last (the base table, or driving table) is processed first. When the FROM clause contains several tables, choose the one with the fewest records as the base table. When more than three tables are joined, choose the intersection table (the table referenced by the other tables) as the base table.
(2) The join order in the WHERE clause:
Oracle parses the WHERE clause bottom-up; accordingly, joins between tables must be written before other WHERE conditions, and the conditions that filter out the most records must be written at the end of the WHERE clause.
(3) Avoid '*' in the SELECT clause:
During parsing, Oracle expands '*' into the full list of column names by consulting the data dictionary, which costs extra time.
(4) Reduce the number of round trips to the database:
For every access, Oracle does a great deal of internal work: parsing the SQL statement, estimating index utilization, binding variables, reading data blocks, and so on.
(5) Reset the ARRAYSIZE parameter in SQL*Plus, SQL*Forms, and Pro*C to increase the amount of data retrieved per database access; the recommended value is 200.
(6) Use the DECODE function to reduce processing time:
DECODE can avoid scanning the same records, or joining the same table, repeatedly.
(7) Consolidate simple, unrelated database accesses:
If you have several simple query statements, you can combine them into one query (even if they are unrelated).
(8) Delete duplicate records:
The most efficient way to delete duplicates (because it uses ROWID). Example:
DELETE FROM EMP E WHERE E.ROWID > (SELECT MIN(X.ROWID)
FROM EMP X WHERE X.EMP_NO = E.EMP_NO);
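SQLite also exposes an implicit rowid, so the statement above can be tried almost verbatim. A sketch in Python (table contents invented; SQLite does not allow an alias on the DELETE target, so the table name qualifies the outer reference):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (emp_no INTEGER, name TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(1, "Smith"), (1, "Smith"), (2, "Jones"), (2, "Jones"), (2, "Jones")])

# Keep the row with the smallest rowid in each emp_no group; delete the rest.
conn.execute("""
    DELETE FROM emp
    WHERE rowid > (SELECT MIN(x.rowid) FROM emp x WHERE x.emp_no = emp.emp_no)
""")

remaining = conn.execute("SELECT emp_no FROM emp ORDER BY emp_no").fetchall()
print(remaining)  # [(1,), (2,)]
```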
(9) Replace DELETE with TRUNCATE:
When you delete records from a table, rollback segments normally hold the information needed to undo the change: if you have not committed the transaction, Oracle can restore the data to its state before the DELETE was executed. With TRUNCATE, the rollback segment holds no recoverable information; once the command runs, the data cannot be restored. Very few resources are therefore used, and execution is fast. (Translator's note: TRUNCATE applies only when deleting a whole table, and it is DDL, not DML.)
(10) Use COMMIT as often as possible:
Wherever possible, COMMIT in your programs; this improves performance because of the resources a COMMIT releases:
A. The information on the rollback segment used to recover data.
B. The locks acquired by program statements.
C. The space in the redo log buffer.
D. Oracle's internal overhead of managing the three resources above.
(11) Replace HAVING with WHERE:
Avoid the HAVING clause where possible: HAVING filters the result set only after all records have been retrieved, and that processing may require sorting and totaling. If the records can be restricted by a WHERE clause instead, this overhead is avoided. (Non-Oracle note:) ON, WHERE, and HAVING can all carry conditions, and they are applied in that order. ON filters records before the join result is even built, so it reduces the intermediate data earliest and is in that sense the fastest; WHERE filters before aggregation, so it is faster than HAVING, which filters after aggregation. Joins between two tables use ON, so on a single table the comparison is between WHERE and HAVING. For a single-table aggregate query, if the filter condition does not involve the computed columns, WHERE and HAVING give the same result, but WHERE can use indexing techniques (such as Rushmore) while HAVING cannot, so HAVING is slower. If the condition does involve a computed column, its value is not determined until after aggregation: WHERE (applied before the computation) and HAVING (applied after) then give different results. In a multi-table join, ON takes effect earlier than WHERE: the system first builds an intermediate result from the join conditions in ON, then filters it with WHERE, then aggregates, then filters again with HAVING. For a condition to play its proper role, you must first understand when it is supposed to take effect, and then decide where to put it.
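The ON-versus-WHERE distinction above can be observed directly. A small Python/SQLite sketch (tables and data invented for illustration): for an inner join, a filter behaves the same in ON and in WHERE; for a LEFT JOIN, ON only restricts which rows match, while WHERE filters the joined result and discards unmatched rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (dept_no INTEGER, dept_name TEXT);
    CREATE TABLE emp  (emp_name TEXT, dept_no INTEGER);
    INSERT INTO dept VALUES (10, 'SALES'), (20, 'IT');
    INSERT INTO emp  VALUES ('Smith', 10), ('Jones', 20);
""")

# INNER JOIN: the filter gives the same result in ON and in WHERE.
in_on = conn.execute("""
    SELECT d.dept_name, e.emp_name FROM dept d
    JOIN emp e ON e.dept_no = d.dept_no AND d.dept_no = 10
""").fetchall()
in_where = conn.execute("""
    SELECT d.dept_name, e.emp_name FROM dept d
    JOIN emp e ON e.dept_no = d.dept_no WHERE d.dept_no = 10
""").fetchall()

# LEFT JOIN: in ON the filter only controls matching (IT keeps a NULL partner),
# in WHERE it removes the whole row from the result.
left_on = conn.execute("""
    SELECT d.dept_name, e.emp_name FROM dept d
    LEFT JOIN emp e ON e.dept_no = d.dept_no AND d.dept_no = 10
""").fetchall()
left_where = conn.execute("""
    SELECT d.dept_name, e.emp_name FROM dept d
    LEFT JOIN emp e ON e.dept_no = d.dept_no WHERE d.dept_no = 10
""").fetchall()
```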
(12) Reduce queries against tables:
In SQL statements that contain subqueries, take particular care to reduce the number of queries against the same table. Example:
SELECT TAB_NAME FROM TABLES WHERE (TAB_NAME, DB_VER) = (SELECT
TAB_NAME, DB_VER FROM TAB_COLUMNS WHERE VERSION = 604)
(13) Improve SQL efficiency through built-in functions:
Complex SQL often sacrifices execution efficiency. Being able to solve problems with the functions described above is very valuable in practical work.
(14) Use table aliases:
When joining multiple tables in a SQL statement, use table aliases and prefix every column with its alias. This reduces parse time and prevents the syntax errors caused by ambiguous column names.
(15) Replace IN with EXISTS, and NOT IN with NOT EXISTS:
In many queries over a base table, another table must often be joined to satisfy some condition. In such cases, EXISTS (or NOT EXISTS) will usually improve query efficiency. In a subquery, NOT IN performs an internal sort and merge; in either case, NOT IN is the least efficient choice, because it traverses the entire table in the subquery. To avoid it, rewrite the query as an outer join or as NOT EXISTS.
Example:
(Efficient) SELECT * FROM EMP (base table) WHERE EMPNO > 0 AND EXISTS (SELECT 'X' FROM DEPT WHERE DEPT.DEPTNO = EMP.DEPTNO AND LOC = 'MELB')
(Inefficient) SELECT * FROM EMP (base table) WHERE EMPNO > 0 AND DEPTNO IN (SELECT DEPTNO FROM DEPT WHERE LOC = 'MELB')
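The two forms are interchangeable in result, which is what makes the rewrite safe; the article's claim concerns only Oracle's execution efficiency. A Python/SQLite sketch with invented data confirms the equivalence:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno INTEGER, loc TEXT);
    CREATE TABLE emp  (empno INTEGER, deptno INTEGER);
    INSERT INTO dept VALUES (10, 'MELB'), (20, 'SYD');
    INSERT INTO emp  VALUES (1, 10), (2, 20), (3, 10);
""")

# IN form: subquery produces the set of matching department numbers.
with_in = conn.execute("""
    SELECT empno FROM emp
    WHERE deptno IN (SELECT deptno FROM dept WHERE loc = 'MELB')
""").fetchall()

# EXISTS form: correlated subquery succeeds as soon as one matching row is found.
with_exists = conn.execute("""
    SELECT empno FROM emp e
    WHERE EXISTS (SELECT 'X' FROM dept d
                  WHERE d.deptno = e.deptno AND d.loc = 'MELB')
""").fetchall()
```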
(16) Identify inefficiently executed SQL statements:
Although there are many graphical tools for SQL tuning, writing your own SQL to find the offenders is always the best approach:
SELECT EXECUTIONS, DISK_READS, BUFFER_GETS,
ROUND((BUFFER_GETS - DISK_READS) / BUFFER_GETS, 2) HIT_RATIO,
ROUND(DISK_READS / EXECUTIONS, 2) READS_PER_RUN,
SQL_TEXT
FROM V$SQLAREA
WHERE EXECUTIONS > 0
AND BUFFER_GETS > 0
AND (BUFFER_GETS - DISK_READS) / BUFFER_GETS < 0.8
ORDER BY 4 DESC;

(17) Use indexes to improve efficiency:
An index is a conceptual part of a table used to improve the efficiency of data retrieval; Oracle uses a complex self-balancing B-tree structure. Querying data through an index is generally faster than a full table scan. The Oracle optimizer uses indexes when finding the best path for queries and UPDATE statements, and indexes also improve efficiency when joining multiple tables. Another advantage of an index is that it enforces the uniqueness of a primary key. Almost all columns can be indexed, except those of LONG or LONG RAW data types. Indexes are especially effective on large tables, though they can also improve scans of small tables. The use of indexes has its cost, however: they require space to store and periodic maintenance, and the index itself is modified whenever a record is inserted or an indexed column is updated. This means that every INSERT, DELETE, and UPDATE can pay 4 or 5 additional disk I/Os. Because indexes require extra storage and processing, unnecessary indexes slow down query response time. It is also necessary to rebuild indexes periodically:
ALTER INDEX <INDEXNAME> REBUILD TABLESPACE <TABLESPACENAME>;
(18) Replace DISTINCT with EXISTS:
When submitting a query that contains one-to-many table information, such as a department table and an employee table, avoid DISTINCT in the SELECT clause. Consider replacing it with EXISTS, which makes the query faster, because the RDBMS core returns a result as soon as the subquery's condition is satisfied. Example:
(Inefficient):
SELECT DISTINCT DEPT_NO, DEPT_NAME FROM DEPT D, EMP E
WHERE D.DEPT_NO = E.DEPT_NO
(Efficient):
SELECT DEPT_NO, DEPT_NAME FROM DEPT D WHERE EXISTS (SELECT 'X'
FROM EMP E WHERE E.DEPT_NO = D.DEPT_NO);
(19) Write SQL statements in uppercase, because Oracle always parses SQL statements by converting lowercase letters to uppercase before executing them.
(20) In Java code, use the string connector '+' sparingly when assembling SQL.
(21) Avoid using NOT on indexed columns:
Typically, we want to avoid NOT on indexed columns; it has the same effect as using a function on an indexed column. When Oracle encounters a NOT, it stops using the index and performs a full table scan instead.
(22) Avoid calculations on indexed columns:
In a WHERE clause, if an indexed column is part of an expression, the optimizer performs a full table scan rather than using the index.
Example:
Inefficient:
SELECT ... FROM DEPT WHERE SAL * 12 > 25000;
Efficient:
SELECT ... FROM DEPT WHERE SAL > 25000 / 12;
(23) Replace > with >=:
Efficient:
SELECT * FROM EMP WHERE DEPTNO >= 4
Inefficient:
SELECT * FROM EMP WHERE DEPTNO > 3
The difference is that the former lets the DBMS jump straight to the first record with DEPTNO equal to 4, while the latter first locates the DEPTNO = 3 records and then scans forward to the first record with DEPTNO greater than 3.
(24) Replace OR with UNION (for indexed columns):
In general, replacing OR in the WHERE clause with UNION works well: using OR on indexed columns causes a full table scan. Note that this rule is valid only when all the columns involved are indexed; if any of them is not indexed, query efficiency may actually drop because you avoided OR. In the following example, indexes exist on both LOC_ID and REGION.
Efficient:
SELECT LOC_ID, LOC_DESC, REGION
FROM LOCATION
WHERE LOC_ID = 10
UNION
SELECT LOC_ID, LOC_DESC, REGION
FROM LOCATION
WHERE REGION = 'MELBOURNE'
Inefficient:
SELECT LOC_ID, LOC_DESC, REGION
FROM LOCATION
WHERE LOC_ID = 10 OR REGION = 'MELBOURNE'
If you insist on using OR, put the indexed column that returns the fewest records first.
(25) Replace OR with IN:
This is a simple, easy-to-remember rule, but the actual effect must be tested; under Oracle8i the execution paths appear to be the same.
Inefficient:
SELECT ... FROM LOCATION WHERE LOC_ID = 10 OR LOC_ID = 20 OR LOC_ID = 30
Efficient:
SELECT ... FROM LOCATION WHERE LOC_ID IN (10, 20, 30);
(26) Avoid IS NULL and IS NOT NULL on indexed columns:
Avoid using nullable columns in an index, because Oracle cannot use the index for null lookups. For a single-column index, a record whose column is null does not exist in the index. For a composite index, the record is absent from the index only when every key column is null; if at least one column is non-null, the record exists in the index. For example, if a unique index is built on columns A and B of a table, and the table already contains a record with the values (123, null), Oracle will reject the insertion of another record with the same values (123, null). However, if all the index columns are null, Oracle considers the whole key to be null, and null never equals null, so you can insert 1000 records with identical all-null keys. Because null values do not exist in the index, comparing an indexed column with NULL in a WHERE clause makes Oracle deactivate that index.
Inefficient (index deactivated):
SELECT ... FROM DEPARTMENT WHERE DEPT_CODE IS NOT NULL;
Efficient (index used):
SELECT ... FROM DEPARTMENT WHERE DEPT_CODE >= 0;
(27) Always use the first column of an index:
If an index is built on multiple columns, the optimizer chooses the index only when its first (leading) column is referenced by the WHERE clause. This is a simple but important rule: when only the second column of an index is referenced, the optimizer ignores the index and performs a full table scan.
(28) Replace UNION with UNION ALL (when possible):
When a SQL statement merges two result sets with UNION, the two sets are combined and then sorted to remove duplicates before the final result is output. If UNION ALL is used instead, this sort is unnecessary, and efficiency improves. Note, however, that UNION ALL repeats any record that appears in both result sets, so you must still judge from the business requirements whether UNION ALL is feasible. UNION sorts the result set using SORT_AREA_SIZE memory, so tuning that memory area is also very important.
Inefficient:
SELECT ACCT_NUM, BALANCE_AMT
FROM DEBIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
UNION
SELECT ACCT_NUM, BALANCE_AMT
FROM DEBIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
Efficient:
SELECT ACCT_NUM, BALANCE_AMT
FROM DEBIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
UNION ALL
SELECT ACCT_NUM, BALANCE_AMT
FROM DEBIT_TRANSACTIONS
WHERE TRAN_DATE = '31-DEC-95'
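The behavioral difference (UNION deduplicates and implies a sort, UNION ALL simply concatenates) is easy to demonstrate. A Python/SQLite sketch with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE debit_transactions (acct_num INTEGER, balance_amt REAL)")
conn.executemany("INSERT INTO debit_transactions VALUES (?, ?)",
                 [(1, 100.0), (2, 200.0)])

# UNION must eliminate duplicates, so two identical branches collapse to one copy.
union_rows = conn.execute("""
    SELECT acct_num, balance_amt FROM debit_transactions
    UNION
    SELECT acct_num, balance_amt FROM debit_transactions
""").fetchall()

# UNION ALL skips duplicate elimination and returns both copies of every row.
union_all_rows = conn.execute("""
    SELECT acct_num, balance_amt FROM debit_transactions
    UNION ALL
    SELECT acct_num, balance_amt FROM debit_transactions
""").fetchall()
```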
(29) Replace ORDER BY with WHERE:
The ORDER BY clause uses an index only under two strict conditions:
All columns in the ORDER BY must be in the same index and keep the order in which they appear in the index.
All columns in the ORDER BY must be defined as NOT NULL.
The index used by the WHERE clause and the index used by the ORDER BY clause cannot be combined.
For example, table DEPT contains the following columns:
DEPT_CODE PK NOT NULL
DEPT_DESC NOT NULL
DEPT_TYPE NULL
Inefficient (index not used):
SELECT DEPT_CODE FROM DEPT ORDER BY DEPT_TYPE
Efficient (index used):
SELECT DEPT_CODE FROM DEPT WHERE DEPT_TYPE > 0
(30) Avoid changing the type of indexed columns:
Oracle automatically performs simple type conversions when comparing data of different data types.
Suppose EMPNO is an indexed column of numeric type:
SELECT ... FROM EMP WHERE EMPNO = '123'
After Oracle's implicit conversion, the statement becomes:
SELECT ... FROM EMP WHERE EMPNO = TO_NUMBER('123')
Fortunately, the conversion did not occur on the indexed column, so the index's purpose is preserved.
Now suppose EMP_TYPE is an indexed column of character type:
SELECT ... FROM EMP WHERE EMP_TYPE = 123
Oracle translates this statement into:
SELECT ... FROM EMP WHERE TO_NUMBER(EMP_TYPE) = 123
Because the internal type conversion happens on the indexed column, this index will not be used! To avoid implicit type conversion of your SQL by Oracle, it is best to express type conversions explicitly. Note that when comparing characters with numbers, Oracle gives priority to the numeric type, converting the character side to a number.
(31) WHERE clauses that need care:
The WHERE clauses of some SELECT statements do not use an index. Here are some examples.
In the examples below: (1) '!=' will not use the index; remember, an index can only tell you what exists in a table, not what does not exist. (2) '||' is the string concatenation function; like any function, it deactivates the index. (3) '+' is a mathematical operator; like any mathematical function, it deactivates the index. (4) The same indexed column cannot be compared with itself; that forces a full table scan.
(32) Notes on how much an index actually helps:
A. If a retrieval fetches more than 30% of the records in a table, using an index brings no significant gain in efficiency.
B. In particular situations an index can be slower than a full table scan, but the difference stays within the same order of magnitude; usually, using an index is several times or even thousands of times faster than a full table scan.
(33) Avoid resource-intensive operations:
SQL statements containing DISTINCT, UNION, MINUS, INTERSECT, or ORDER BY start the resource-intensive sort function of the SQL engine. DISTINCT requires one sort operation, and the others require at least two. Typically, SQL statements with UNION, MINUS, or INTERSECT can be rewritten in other ways; but if your database's SORT_AREA_SIZE is well tuned, using UNION, MINUS, or INTERSECT can still be considered. After all, they are very readable.
(34) Optimize GROUP BY:
Improve the efficiency of GROUP BY statements by filtering out unwanted records before the grouping. The following two queries return the same result, but the second is significantly faster.
Inefficient:
SELECT JOB, AVG(SAL)
FROM EMP
GROUP BY JOB
HAVING JOB = 'PRESIDENT'
OR JOB = 'MANAGER'
Efficient:
SELECT JOB, AVG(SAL)
FROM EMP
WHERE JOB = 'PRESIDENT'
OR JOB = 'MANAGER'
GROUP BY JOB
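Because the filter here does not touch the aggregated column, the two forms are guaranteed to agree, which is what makes the rewrite safe. A runnable Python/SQLite version of the same example (data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (job TEXT, sal REAL);
    INSERT INTO emp VALUES
        ('PRESIDENT', 5000), ('MANAGER', 3000), ('MANAGER', 3500), ('CLERK', 1000);
""")

# HAVING filters groups after aggregation over the whole table.
having_rows = conn.execute("""
    SELECT job, AVG(sal) FROM emp
    GROUP BY job
    HAVING job = 'PRESIDENT' OR job = 'MANAGER'
    ORDER BY job
""").fetchall()

# WHERE filters rows first, so only the wanted groups are ever aggregated.
where_rows = conn.execute("""
    SELECT job, AVG(sal) FROM emp
    WHERE job = 'PRESIDENT' OR job = 'MANAGER'
    GROUP BY job
    ORDER BY job
""").fetchall()
```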

====================================
====================================
If you are in charge of a SQL Server-based project, or you have just started working with SQL Server, you are likely to face database performance problems sooner or later; this article gives you some useful guidance (most of which also applies to other DBMSs).
I am not going to present tips for using SQL Server here, nor can I offer a cure-all. What I do is summarize some experience about how to produce a good design. These lessons come from hard experience over the years, during which I have seen the same design mistakes repeated over and over again.
First, understand the tools you use
Do not underestimate this; it is the most critical point in this article. You have probably noticed that many SQL Server programmers have not mastered all the T-SQL commands and the useful tools SQL Server provides.
"What, waste a month learning SQL commands I will never use???" you may say. Yes, you do not need to do that. But you should spend a weekend browsing all the T-SQL commands. Your task here is simply to understand what exists, so that later, when you design a query, you will remember: "Right, there is a command that does exactly what I need," and you can then look up its exact syntax on MSDN.
Second, do not use cursors
Let me repeat: do not use cursors. If you want to wreck the performance of your entire system, they are your most effective choice. Most beginners use cursors without being aware of their impact on performance. Cursors occupy memory and lock tables in unexpected ways, and they are as slow as snails. Worst of all, they can negate every performance optimization your DBA can make. Did you know that every FETCH amounts to executing a SELECT? That means if your cursor walks 10,000 records, it performs 10,000 SELECTs! If you can do the work with a set-based SELECT, UPDATE, or DELETE instead, it will be far more efficient.
Beginners generally find cursors a familiar, comfortable way to program, but unfortunately this leads to poor performance. Clearly, the whole point of SQL is to express what you want to achieve, not how to achieve it.
I once rewrote a cursor-based stored procedure in set-based T-SQL. The table had only 100,000 records; the original stored procedure took 40 minutes to finish, while the new one took only 10 seconds. Here you can see plainly what an incompetent programmer was doing!
Sometimes we can write a small program that fetches the data, processes it, and updates the database, which can be more efficient. Remember: T-SQL is poor at row-by-row looping.
Let me remind you once more: there is no benefit in using cursors. Apart from DBA maintenance work, I have never seen cursors used to do any job effectively.
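The cursor-versus-set-based contrast can be sketched concretely. The following Python/SQLite example is illustrative only (table names and the 10% adjustment are invented): both approaches end in the same state, but the cursor style issues one UPDATE per row, while the set-based style issues a single statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders_cursor (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE orders_set    (id INTEGER PRIMARY KEY, amount REAL);
""")
rows = [(i, i * 10.0) for i in range(1, 6)]
conn.executemany("INSERT INTO orders_cursor VALUES (?, ?)", rows)
conn.executemany("INSERT INTO orders_set VALUES (?, ?)", rows)

# Cursor style: fetch every row, then issue one UPDATE per row (N statements).
for row_id, amount in conn.execute("SELECT id, amount FROM orders_cursor").fetchall():
    conn.execute("UPDATE orders_cursor SET amount = ? WHERE id = ?",
                 (amount * 1.1, row_id))

# Set-based style: one UPDATE does the same work in a single pass.
conn.execute("UPDATE orders_set SET amount = amount * 1.1")

cursor_result = conn.execute("SELECT amount FROM orders_cursor ORDER BY id").fetchall()
set_result = conn.execute("SELECT amount FROM orders_set ORDER BY id").fetchall()
```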
Third, normalize your data tables
Why not normalize the database? There are roughly two excuses: performance considerations and plain laziness. As for the second, you will pay the price sooner or later. As for performance, you do not need to optimize things that are not slow at all. I often see programmers "denormalize" a database on the grounds that "the original design is too slow," yet the result is often that they make the system slower still. A DBMS is designed to handle normalized databases, so remember: design the database according to the requirements of normalization.
Fourth, do not use SELECT *
This is not easy to do; I know it all too well, because I often do it myself. However, if you name the columns you need in the SELECT, you get the following benefits:
1. Reduced memory consumption and network bandwidth.
2. A more secure design.
3. The query optimizer gets the chance to read all the required columns straight from an index.
Fifth, understand what you will do with your data
Creating robust indexes for your database is a great merit, but doing it well is an art. Whenever you add an index to a table, SELECT becomes faster, but INSERT and DELETE become noticeably slower, because creating and maintaining the index requires a lot of extra work. Clearly, the key question is: what do you want to do with this table? This is not easy to judge, especially for DELETE and UPDATE, because those statements often include a SELECT component in the WHERE clause.
Sixth, do not create an index on the "gender" column
First, understand how an index speeds up access to a table. You can think of an index as a way of partitioning a table according to certain criteria. If you create an index on a column like "gender," you merely split the table into two partitions: male and female. What use is such a split on a table with 1,000,000 records? Remember: maintaining an index is relatively time-consuming. When you design indexes, prefer columns whose values are highly distinct; in a composite index, order the columns by selectivity, for example name + province + gender.
Seventh, use transactions
Use transactions, especially when queries are time-consuming. If something goes wrong in the system, doing so will save your life. Programmers with some experience know this: you constantly run into unforeseen situations that can cause a stored procedure to crash halfway through.
Eighth, beware of deadlocks
Access your tables in a fixed order. If you lock table A first and then table B, lock them in that order in every stored procedure. If some stored procedure (inadvertently) locks table B first and then table A, this can cause a deadlock. Deadlocks are hard to find if the locking order has not been carefully designed in advance.
Ninth, do not open large result sets
A frequently asked question is: how can I quickly add 100,000 records to a ComboBox? That is wrong; you cannot and you should not. Your users will curse you if they have to browse 100,000 records to find the one they need. What you need here is a better UI: show your users no more than 100 or 200 records at a time.
Tenth, do not use server-side cursors
Compared with server-side cursors, client-side cursors can reduce server and network overhead and also reduce lock time.
Eleventh, use parameterized queries
Sometimes I see questions like this on the CSDN technical forum: "My query SELECT * FROM a WHERE a.id = '...' throws an exception when the value contains a single quote. What should I do?" The common answer is: escape the quote by doubling it. That is wrong. It does not cure the problem, because other characters will cause the same trouble; worse, it can lead to serious bugs (SQL injection), and it also prevents the SQL Server statement cache from doing its job. With parameterized queries, none of these problems exists.
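A quick illustration of the point, sketched in Python with sqlite3 (table and value invented; the same placeholder idea applies to SQL Server drivers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id TEXT)")
conn.execute("INSERT INTO a (id) VALUES (?)", ("O'Brien",))

# Naive string concatenation breaks as soon as the value contains a quote
# (and the same hole is what SQL injection exploits).
try:
    conn.execute("SELECT id FROM a WHERE id = '" + "O'Brien" + "'")
    concat_failed = False
except sqlite3.OperationalError:
    concat_failed = True

# A placeholder lets the driver handle quoting and lets the server reuse the plan.
rows = conn.execute("SELECT id FROM a WHERE id = ?", ("O'Brien",)).fetchall()
```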
12. Use a database with a realistic amount of data during development
Programmers' test databases generally hold little data during development, but the end user's data volume is often very large. Our usual practice is wrong, and the reason is simple: hard disks are not that expensive nowadays, so why let performance problems go unnoticed until it is too late to recover?
13. Do not use INSERT to import large batches of data
Please do not do this unless necessary. Use DTS or BCP instead, and you get flexibility and speed in one stroke.
14. Watch out for timeouts
When querying a database, the default timeout is generally small, such as 15 or 30 seconds, while some queries run longer than that, especially as the amount of data in the database keeps growing.
15. Do not ignore simultaneous modification of the same record
Sometimes two users modify the same record at the same time; the later writer overwrites the earlier writer's changes, and some updates are lost. Handling this situation is not hard: create a timestamp field, check it before writing, and if it still matches, merge the modifications; if there is a conflict, prompt the user.
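This timestamp check is usually called optimistic concurrency control. A minimal sketch in Python/SQLite, using an integer version column as the "timestamp" (table, column, and data are invented): an UPDATE guarded by the expected version touches zero rows when someone else has written in the meantime.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO doc VALUES (1, 'draft', 1)")

def save(body, expected_version):
    # The update succeeds only if nobody bumped the version since we read it.
    cur = conn.execute(
        "UPDATE doc SET body = ?, version = version + 1 "
        "WHERE id = 1 AND version = ?", (body, expected_version))
    return cur.rowcount == 1  # 0 rows touched means a conflicting write happened

# Both users read version 1; the first save wins, the second is rejected.
first = save("user A's edit", 1)
second = save("user B's edit", 1)
```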
16. Do not run SELECT MAX(ID) on the master table when inserting records into a detail table

This is a common error: when two users insert data at the same time, it produces wrong results. You can use SCOPE_IDENTITY, IDENT_CURRENT, or @@IDENTITY instead. If possible, do not use @@IDENTITY, because it returns the wrong value when triggers are involved.
17. Avoid making columns nullable

If possible, avoid making columns nullable. The system allocates an extra byte per row for each nullable column, which adds overhead to queries. Nullable columns also complicate your code, because every time you access them you must first check for NULL.

I'm not saying NULLs are the source of all trouble, although some people think so. If your business rules allow "empty data", making a column nullable can sometimes work well; but using nullable columns in a situation like the following is asking for trouble:
CustomerName1
CustomerAddress1
CustomerEmail1
CustomerName2
CustomerAddress2
CustomerEmail2
CustomerName3
CustomerAddress3
CustomerEmail3
If this is the case, you need to normalize your table.
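A sketch of what normalizing the repeated CustomerName*N* columns looks like, with `sqlite3` as a stand-in; the `accounts`/`customers` names are illustrative:

```python
import sqlite3

# Instead of CustomerName1..3 / CustomerAddress1..3 (mostly NULL),
# store one customers row per contact, keyed to the parent row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("""CREATE TABLE customers (
    account_id INTEGER REFERENCES accounts(id),
    name TEXT NOT NULL, address TEXT NOT NULL, email TEXT NOT NULL)""")

conn.execute("INSERT INTO accounts VALUES (1, 'acme')")
conn.executemany(
    "INSERT INTO customers VALUES (1, ?, ?, ?)",
    [("Ann", "1 Main St", "ann@example.com"),
     ("Bob", "2 Oak Ave", "bob@example.com")])

# Any number of contacts per account, and no nullable repeated columns.
n = conn.execute("SELECT COUNT(*) FROM customers WHERE account_id = 1").fetchone()[0]
print(n)  # 2
```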
18. Try not to use the text data type

Do not use text unless you are handling very large data. It is awkward to query, slow, and when misused it wastes a lot of space. In general, varchar handles your data better.
19. Try not to use temporary tables

Try not to use temporary tables unless you must. A subquery can usually take the place of a temporary table. Temporary tables carry real costs, and if you are programming with COM+ they cause extra trouble, because COM+ uses database connection pooling while temporary tables live for the whole connection. SQL Server provides alternatives, such as the table data type.
20. Learn to analyze queries

SQL Server Query Analyzer is your good companion: through it you can understand how queries and indexes affect performance.
21. Use referential integrity

Defining primary keys, uniqueness constraints, and foreign keys can save a lot of time.

Within the constraints we are given, we can tune the SQL quality of the application:

1. Avoid full table scans: a full table scan causes a large amount of I/O

2. Build and use good indexes as much as possible: more indexes is not always better. Once a table has more than about 4 indexes, Oracle performance may stop improving; in an OLTP system, more than 5 indexes per table will degrade performance, and within a single SQL statement Oracle cannot use more than 5 indexes. Also, when we use GROUP BY or ORDER BY, Oracle sorts the data automatically. The SORT_AREA_SIZE parameter in init.ora determines the size of the in-memory sort area; when a sort cannot be completed in the sort area, Oracle sorts on disk — in the temporary tablespace we mentioned. Too much disk sorting drives the value of "free buffer waits" higher, and that wait event is caused by more than just sorting. To developers I would offer this advice:

1) Subqueries in SELECT, UPDATE, and DELETE statements should normally touch fewer than 20% of a table's rows. If a statement reads more than 20% of the total rows, it will not gain performance from using an index.

2) Indexes can become fragmented: when a record is deleted from a table, it is also removed from the table's indexes. The space freed in the table can be reused, but the space freed in the index cannot. Indexed tables that see frequent deletes should be rebuilt periodically, so index fragmentation does not hurt performance. Where conditions permit, you can also periodically TRUNCATE the table: TRUNCATE deletes all records in the table and removes index fragmentation as well.

3) When using a composite index, reference its fields in the order they appear in the index.

4) Using an outer join ((+)) is more efficient than using NOT IN.
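A sketch of this rewrite in ANSI form, with `sqlite3` as a stand-in: the (+) outer-join anti-join becomes LEFT JOIN ... IS NULL. Besides often being faster, it also avoids the NOT IN pitfall where a NULL in the subquery empties the result. The `emp`/`dept` tables are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, dept_id INTEGER)")
conn.execute("CREATE TABLE dept (id INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?)", [(10,), (20,)])
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, 10), (2, None)])

# NOT IN against a subquery containing NULL: surprisingly returns nothing.
not_in = conn.execute(
    "SELECT id FROM dept WHERE id NOT IN (SELECT dept_id FROM emp)").fetchall()

# Outer-join anti-join: correctly finds the department with no employees.
anti_join = conn.execute(
    "SELECT dept.id FROM dept LEFT JOIN emp ON emp.dept_id = dept.id "
    "WHERE emp.id IS NULL").fetchall()
print(not_in, anti_join)  # [] [(20,)]
```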

Reduce contention in Oracle:

Let's start with a few Oracle parameters related to contention:

1) FREELISTS and FREELIST GROUPS: they handle Oracle's space management for tables and indexes;

2) PCTFREE and PCTUSED: these parameters determine the behavior of FREELISTS and FREELIST GROUPS; their only purpose is to control how blocks enter and leave the freelists.

Setting PCTFREE and PCTUSED correctly is important, because they govern how blocks are removed from and returned to the freelists.

  Settings for other parameters

1) SGA (System Global Area): the SGA is a region of memory allocated to an Oracle instance that holds the control information for the database.

It mainly includes the database buffer cache,

the redo log buffer,

the shared pool,

the data dictionary cache, and other information.

2) DB_BLOCK_BUFFERS (data buffer cache): all data being accessed passes through this area of memory. The larger this parameter, the more likely Oracle is to find the requested data already in memory — that is, the faster queries run.

3) SHARED_POOL_SIZE (SQL shared pool): this parameter sizes the library cache and the data dictionary cache.

4) LOG_BUFFER (redo log buffer)

5) SORT_AREA_SIZE (sort area)

6) PROCESSES (number of simultaneously connected processes)

7) DB_BLOCK_SIZE (database block size): Oracle's default block size is 2KB, which is too small. If we have 8KB of data, a database with 2KB blocks needs 4 reads to fetch it, while one with 8KB blocks needs only 1, greatly reducing I/O operations. DB_BLOCK_SIZE cannot be changed after the database is installed; the only option is to rebuild the database, so choose it when installing the database manually.

8) OPEN_LINKS (number of simultaneously open database links)

9) DML_LOCKS

10) OPEN_CURSORS (number of open cursors)

11) DBWR_IO_SLAVES (number of background write processes)

 

6. In and Exists

Sometimes a column must be compared against a series of values. The simplest way is to use a subquery in the WHERE clause; two subquery formats can be used there.

The first format uses the IN operator:

... WHERE column IN (SELECT * FROM ...);

The second format uses the EXISTS operator:

... WHERE EXISTS (SELECT 'X' FROM ...);
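The two formats above, side by side on illustrative data, with `sqlite3` as a stand-in engine. On this data they return the same rows; EXISTS is often preferred when the subquery table is large, because the engine can stop at the first matching row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, dept_id INTEGER)")
conn.execute("CREATE TABLE dept (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO dept VALUES (?, ?)", [(10, "sales"), (20, "hr")])
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, 10), (2, 10)])

# First format: IN with an uncorrelated subquery.
with_in = conn.execute(
    "SELECT name FROM dept WHERE id IN (SELECT dept_id FROM emp)").fetchall()

# Second format: EXISTS with a correlated subquery.
with_exists = conn.execute(
    "SELECT name FROM dept WHERE EXISTS "
    "  (SELECT 'X' FROM emp WHERE emp.dept_id = dept.id)").fetchall()
print(with_in, with_exists)  # [('sales',)] [('sales',)]
```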
