How to optimize SQL queries using the execution plan — topic collection from alibabacloud.com
[Preface] MySQL can log the SQL statements executed by users, recording them to files or tables. MySQL lets you define a threshold execution time: statements that run for that long or longer are treated as slow queries, and the relevant information is recorded to the file or table. [Background description]
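A minimal sketch of the slow-query-log settings described above (variable names as in MySQL; the file path is a placeholder):

```ini
# my.cnf sketch -- enable the slow query log
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log    # placeholder path
long_query_time     = 2           # seconds; statements running this long or longer are logged
log_output          = FILE,TABLE  # record to the file and to the mysql.slow_log table
```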
Note that the SQL pool is multi-threaded, so access to the SQL statements in this shared resource must be protected by a locking mechanism; a mutex lock is used here. When a business-logic-layer thread throws an SQL statement into the SQL pool, the pool must first be locked.
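The mutex-protected SQL pool described above can be sketched as follows (a hypothetical design in Python; class and method names are illustrative, not from the original):

```python
import threading

class SqlPool:
    """A shared pool of SQL statements protected by a mutex.

    Producer threads (the business-logic layer) push statements in;
    a consumer thread pops them for execution.
    """
    def __init__(self):
        self._lock = threading.Lock()  # the mutex guarding the shared list
        self._statements = []

    def push(self, sql):
        with self._lock:               # lock before touching the shared resource
            self._statements.append(sql)

    def pop(self):
        with self._lock:
            return self._statements.pop(0) if self._statements else None

pool = SqlPool()
threads = [threading.Thread(target=pool.push, args=(f"INSERT /* {i} */",))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(pool.pop() is not None)  # True -- four statements were enqueued safely
```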
I tried to do this using the following SQL (for the sake of comparison, the columns of both tables are output):
SELECT
stu.name,
stu.class,
s.name,
s.score
FROM
student AS stu LEFT JOIN score AS s ON stu.name = s.name AND stu.class = 'A'
At first glance there seems to be no problem: it is a left join, and the class restriction has been added to the ON condition. But because the filter on the preserved table (student) sits in the ON clause rather than in WHERE, student rows that fail it are still returned as outer rows with NULL score columns instead of being filtered out.
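The difference can be demonstrated on a tiny data set (a sketch using Python's sqlite3 with hypothetical rows; the original excerpt targets MySQL, but the join semantics are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (name TEXT, class TEXT);
CREATE TABLE score   (name TEXT, score INTEGER);
INSERT INTO student VALUES ('Tom', 'A'), ('Jim', 'B');
INSERT INTO score   VALUES ('Tom', 90), ('Jim', 80);
""")

# Filter in ON: 'Jim' (class B) is still returned, with NULL score columns,
# because a LEFT JOIN always keeps every row of the preserved table.
on_rows = conn.execute("""
    SELECT stu.name, stu.class, s.name, s.score
    FROM student AS stu
    LEFT JOIN score AS s ON stu.name = s.name AND stu.class = 'A'
""").fetchall()

# Filter in WHERE: rows failing the predicate are removed after the join.
where_rows = conn.execute("""
    SELECT stu.name, stu.class, s.name, s.score
    FROM student AS stu
    LEFT JOIN score AS s ON stu.name = s.name
    WHERE stu.class = 'A'
""").fetchall()

print(on_rows)     # [('Tom', 'A', 'Tom', 90), ('Jim', 'B', None, None)]
print(where_rows)  # [('Tom', 'A', 'Tom', 90)]
```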
Today a recursive SQL query was needed. A recursive query is implemented with a CTE statement: WITH xx AS ( ... ). Suppose the table category holds data like the following, and we want to find the full hierarchy of the "machine gun" subcategory (querying all levels of nodes through child nodes). The following is a query statement: WITH
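A runnable sketch of such a recursive CTE (using Python's sqlite3 and hypothetical category rows; the column names id/parent_id/name are assumptions, not from the original):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (id INTEGER, parent_id INTEGER, name TEXT);
INSERT INTO category VALUES
  (1, NULL, 'weapon'),
  (2, 1,    'gun'),
  (3, 2,    'machine gun'),
  (4, 3,    'light machine gun');
""")

# Walk upward from 'machine gun' through parent_id, level by level.
rows = conn.execute("""
    WITH RECURSIVE ancestors AS (
        SELECT id, parent_id, name FROM category WHERE name = 'machine gun'
        UNION ALL
        SELECT c.id, c.parent_id, c.name
        FROM category AS c
        JOIN ancestors AS a ON c.id = a.parent_id
    )
    SELECT name FROM ancestors
""").fetchall()
print(sorted(r[0] for r in rows))  # ['gun', 'machine gun', 'weapon']
```

Swapping the join condition to `c.parent_id = a.id` walks downward instead, collecting every descendant of a node.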
First, Introduction
EF is built on top of the open ADO.NET framework. DbContext has some common raw-SQL methods:
DbSet.SqlQuery // queries and returns entities
DbContext.Database.SqlQuery
Second, Usage
1. DbSet.SqlQuery usage:
var list = db.Admins.SqlQuery("SELECT * FROM admin");
foreach (var item in list)
{
    Response.Write(item.username);
    Response.Write(" ");
}
2. DbContext.Database.SqlQuery usage: var list = db.
only rows for which the predicate evaluates to TRUE are kept.
(1-J3) Add outer rows: if an OUTER JOIN is specified (as opposed to a CROSS JOIN or INNER JOIN), rows from the preserved table for which no match was found are added to VT1-J2 as outer rows, generating VT1-J3.
(2) WHERE: filters the rows in VT1 based on the predicates in the WHERE clause; only rows for which the predicates evaluate to TRUE go into VT2.
(3) GROUP BY: groups the rows in VT2 according to the column list specified in the GROUP BY clause, generating VT3.
(4) HAVING: filters the groups in VT3 based on the predicates in the HAVING clause; only groups for which the predicates evaluate to TRUE go into VT4.
(5) SELECT:
(5-1) Evaluate expressions: computes the expressions in the SELECT list, generating VT5-1.
(5-2) DISTINCT: removes duplicate rows from VT5-1, generating VT5-2.
(5-3) TOP: based on the logical ordering defined by the ORDER BY clause, selects the specified number or percentage of rows from VT5-2, generating VT5-3.
(6) ORDER BY: sorts the rows in VT5-3 according to the column list specified in the ORDER BY clause, generating the cursor VC6.
(7) LIMIT: selects the specified rows, starting at the specified position, from the virtual table obtained in the previous step. Note the cost of OFFSET: when the offset is very large, efficiency is very low. One remedy is subquery optimization: in a subquery, first obtain the boundary value from the index, then select from there.
The above is the logical processing order of a SELECT statement.
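The OFFSET subquery optimization mentioned above can be sketched like this (Python/sqlite3 with a hypothetical table; in MySQL the gain comes from the subquery doing an index-only scan instead of materializing every skipped row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(1, 1001)])

# Naive form: the engine must step over the first 600 rows every time.
slow = conn.execute(
    "SELECT id, val FROM t ORDER BY id LIMIT 3 OFFSET 600").fetchall()

# Subquery form: first locate the boundary id via the index,
# then range-scan from that id.
fast = conn.execute("""
    SELECT id, val FROM t
    WHERE id >= (SELECT id FROM t ORDER BY id LIMIT 1 OFFSET 600)
    ORDER BY id LIMIT 3
""").fetchall()

print(slow == fast)  # True -- same result, cheaper access path
print(fast)          # [(601, 'v601'), (602, 'v602'), (603, 'v603')]
```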
-- database
AND st.text NOT LIKE '%time%' AND st.text NOT LIKE '%@queryStr%'  -- exclude the query string itself
AND qs.execution_count > 100       -- number of executions
AND qs.total_worker_time > 100     -- total CPU time
AND qs.total_physical_reads > 100  -- number of physical reads
AND qs.total_logical_writes > 100  -- number of logical writes
AND qs.total_logical_reads > 100   -- number of logical reads
)
SELECT *, 'most executed' AS type FROM (SELECT TOP 5 * FROM QS ORDER BY execution_count DESC) a -- m
Tags: SQL Server, database, queries
Recently I have been studying blogging, which is really today's popular "self-media". It is interesting that netizens who once wrote their own blogs and built their own small sites now go to platforms and work for others for free, yet enjoy it; who knows how this will develop.
Blog information:
Topics: learning topics related to reading notes.
Web site: uses a .NET domain name as the URL.
Use GROUP BY in SQL to group the rows of a SELECT result. Before using GROUP BY, you need to know some important rules. The GROUP BY clause can contain any number of columns; that is, groups can be nested, giving finer-grained control over how the data is grouped. If multiple columns are specified in the GROUP BY clause, the data is summarized at the last specified group.
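Nested grouping on two columns can be shown concretely (a sketch in Python/sqlite3; the score table and its rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE score (class TEXT, name TEXT, score INTEGER);
INSERT INTO score VALUES
  ('A', 'Tom', 90), ('A', 'Tim', 80), ('A', 'Bob', 70), ('B', 'Ann', 60);
""")

# Two grouping columns: each distinct (class, name-initial) pair is one group,
# and the aggregates are computed per innermost (last specified) group.
rows = conn.execute("""
    SELECT class, substr(name, 1, 1) AS initial,
           COUNT(*) AS n, AVG(score) AS avg_score
    FROM score
    GROUP BY class, initial
    ORDER BY class, initial
""").fetchall()
print(rows)  # [('A', 'B', 1, 70.0), ('A', 'T', 2, 85.0), ('B', 'A', 1, 60.0)]
```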
Without third-party tools, view SQL execution plans:
SQL> connect sys as sysdba
-- create the table used by the execution plan (PLAN_TABLE)
SQL> @?\rdbms\admin\utlxplan
Then grant the Autotrace permission to each user by
1. IN when the query condition is a List:
<select id="getmultimomentscommentscounts" resultType="int">
  select moment_comment_count from tbl_moment_commentcount where mid in
  <foreach item="item" index="index" collection="list" open="(" separator="," close=")">
    #{item}
  </foreach>
</select>
1.1 If the parameter type is List, then the collection attribute must be specified as "list".
<select id="findbyidsmap" resultMap="Baseresultmap">
  select
  include refi
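The same idea behind MyBatis's foreach — expanding a list parameter into an IN (...) clause with one bound placeholder per element — can be sketched outside MyBatis; here in Python/sqlite3 with the table name taken from the excerpt and hypothetical rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tbl_moment_commentcount (mid INTEGER, moment_comment_count INTEGER)")
conn.executemany("INSERT INTO tbl_moment_commentcount VALUES (?, ?)",
                 [(1, 10), (2, 20), (3, 30)])

mids = [1, 3]
# Build "(?, ?)" -- one bound placeholder per list element, like <foreach>.
placeholders = "(" + ", ".join("?" for _ in mids) + ")"
rows = conn.execute(
    "SELECT moment_comment_count FROM tbl_moment_commentcount WHERE mid IN "
    + placeholders,
    mids,
).fetchall()
print(sorted(r[0] for r in rows))  # [10, 30]
```

Binding each element separately (rather than string-concatenating values) is what keeps the expansion safe from SQL injection.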
forms of selection: LIMIT n, m means selecting m records starting from the nth record, and many developers like to use this statement to solve paging. For small data sets the LIMIT clause is fine, but when the amount of data is very large, LIMIT n, m is very inefficient, because the mechanism of LIMIT is to scan from the beginning every time: to start from row 600,000 and read 3 rows, the scan must first step past 600,000 rows before reading, and that scan is a very inefficient process. Therefore, for large data sets it is very necessary to establish a caching mechanism or avoid large offsets altogether.
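One common alternative to growing offsets is keyset ("seek") pagination: remember the last id of the previous page and continue from it, so every page is an index range scan. A sketch in Python/sqlite3 with a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(1, 101)])

def next_page(last_id, page_size=3):
    # The index on id turns this into a range scan starting after last_id,
    # with cost independent of how many pages came before.
    return conn.execute(
        "SELECT id, val FROM t WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = next_page(0)
page2 = next_page(page1[-1][0])  # resume from the last id of page 1
print(page1)  # [(1, 'v1'), (2, 'v2'), (3, 'v3')]
print(page2)  # [(4, 'v4'), (5, 'v5'), (6, 'v6')]
```

The trade-off: keyset pagination cannot jump directly to an arbitrary page number, which is why caching or a hybrid scheme is sometimes layered on top.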
read, and you can use TKPROF to generate a more readable file. Note: tkprof is a command-line tool shipped with Oracle, not a SQL*Plus command. At another command line, enter the D:\oracle\product\10.2.0\admin\orcl\udump directory:
D:\oracle\product\10.2.0\admin\orcl\udump> tkprof ORCL_ORA_5732_MYSESSION.TRC Orcl_ora_5732_mysession.txt
TKPROF: Release 10.2.0.1.0 - Production on Fri Sep 14 16:59:12 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Open the Orcl_ora_5732_mysession.txt file to see the formatted trace.
We open the execution plan to see how efficient an SQL statement is, whether an index is used, and so on. But the execution plan does not tell us the execution time; with a small piece of code, you can measure it yourself.
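The split between "what the plan tells you" and "what you must time yourself" can be sketched as follows (Python/sqlite3 stand-in; the excerpt concerns SQL Server, where SET STATISTICS TIME ON plays the timing role, but the idea is the same):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(1, 10001)])

# The plan answers "is an index used?" -- here, the primary-key search.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT val FROM t WHERE id = 5000").fetchall()
print(plan)  # the detail column mentions a SEARCH via the primary key, not a full scan

# The plan does not answer "how long?" -- measure that separately.
start = time.perf_counter()
conn.execute("SELECT val FROM t WHERE id = 5000").fetchone()
print(f"elapsed: {time.perf_counter() - start:.6f}s")
```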