2014-08-23 Baoxinjian
I. Summary
A script circulated online queries the efficiency of a single SQL statement and exports the result as an HTML report, similar in function to DBMS_PROFILER. It looks up the sql_id by session; you only need to run the script and it exports the HTML report. The SQL script: http://files.cnblogs.com/eastsea/sqlcheck.zip
II. Case: using the script
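The zip above contains the actual script; as a rough illustration of the "query sql_id by session" step it refers to, a lookup like the following (plain Oracle dictionary views, not the script's own code) shows what SQL a session is running:

-- Find the current and previous sql_id of a session (replace :sid with the real SID)
SELECT s.sid, s.serial#, s.sql_id, s.prev_sql_id
  FROM v$session s
 WHERE s.sid = :sid;

-- Then pull the statement text and basic statistics for that sql_id
SELECT sql_id, sql_text, executions, elapsed_time
  FROM v$sql
 WHERE sql_id = :sql_id;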
This article documents a tuning process: during data processing, a SQL statement ran very slowly, and some large Hive tables even hit OOM errors; step by step, through parameter settings and SQL optimization, the problem was tuned away. First, the SQL
SQL Server Performance Tuning 3: Index Maintenance
Preface
The previous article introduced how to improve database query performance by creating indexes, but that is only the beginning. Without proper maintenance afterwards, the indexes you created earlier can even turn into a drag on the system and an accomplice in degrading database performance.
Finding fragmentation
Removing fragmentation may be
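A minimal sketch of how fragmentation is usually found and removed in SQL Server; the 10% threshold and the IX_SomeIndex / dbo.SomeTable names are illustrative assumptions, not values from this article:

-- List indexes in the current database with noticeable fragmentation
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
  FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
  JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
 WHERE ips.avg_fragmentation_in_percent > 10;

-- Light fragmentation: reorganize in place; heavy fragmentation: rebuild the index
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;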
Share your experience of learning the SQL language from this article, or your impressions of the trial-reading chapters, and we will pick 3 readers to each receive a copy of "SQL Optimization Core Ideas". Join in quickly!
Event deadline: May 10, 2018
Reply "concern" in the backstage of the "async Community" to get 2,000 online video courses for free; recommend friends to follow as prompted to get the book link and receive an asynchronous book for free. Come and join.
Experiments have shown that different indexes can be created on the master and slave databases without interfering with each other (this depends on the replication configuration). This makes it possible to build more targeted indexes based on the different usage patterns of the master and the slave. I also saw on a foreign blog that, using the dynamic management views of SQL Server 2005, indexes can be created automatically based on the usage patterns of the tables.
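The "automatically created based on usage" idea presumably refers to SQL Server's missing-index DMVs; a minimal sketch of reading them (my own illustration, not the blog post's code):

-- Missing-index suggestions the optimizer has recorded since the last restart
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
  FROM sys.dm_db_missing_index_details     AS d
  JOIN sys.dm_db_missing_index_groups      AS g ON g.index_handle = d.index_handle
  JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
 ORDER BY s.user_seeks * s.avg_user_impact DESC;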
whenever possible. This will be a topic at the end of this series, where the process (simplification - exploration - implementation) is described in detail.
Summary: this article walked through how a query is compiled, and the specifics of generating the plan cache, reusing the cache, recompiling, and so on. There are a number of points to keep in mind when optimizing our T-SQL statements, and the more hits we get on the execution plan cache, the better.
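To see how well the plan cache is being reused, a generic check like the following can be run (a sketch, not this series' own script):

-- Cached plans and how many times each has been reused (usecounts)
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
  FROM sys.dm_exec_cached_plans AS cp
 CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
 ORDER BY cp.usecounts DESC;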
Original: SQL Tuning Diary -- the principle and troubleshooting of parallel waits
Overview
While handling a project today, the customer reported that the database responds particularly slowly during a certain time period, and we need to provide some optimization suggestions.
Phenomenon
Because it is slow only during a specific time period, it is relatively easy to troubleshoot
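A common first step when investigating parallelism-related waits is to look at the accumulated wait statistics; this is a generic sketch, the article's own diagnostic queries are not shown in this excerpt:

-- Parallelism-related wait types accumulated since the server started
-- (CXCONSUMER exists only on newer SQL Server versions)
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
  FROM sys.dm_os_wait_stats
 WHERE wait_type IN ('CXPACKET', 'CXCONSUMER')
 ORDER BY wait_time_ms DESC;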
MySQL database SQL statement tuning
Index design principles:
Index columns are generally the columns that appear in a WHERE clause or in a join condition.
Try not to index columns with low cardinality, such as a gender column.
Use short indexes whenever possible: for character columns, specify the smallest prefix length that works (short keys are better; integers are best).
CREATE INDEX CityName ON city (city(10));
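As a hedged illustration of choosing a prefix length for such a short index (following the article's city example and assuming a city table with a city column):

-- Compare the selectivity of the full column with a 10-character prefix (MySQL)
SELECT COUNT(DISTINCT city) / COUNT(*)           AS full_selectivity,
       COUNT(DISTINCT LEFT(city, 10)) / COUNT(*) AS prefix10_selectivity
  FROM city;

-- If the prefix keeps most of the selectivity, the short prefix index is usually enough
CREATE INDEX CityName ON city (city(10));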
Original: T-SQL Performance Tuning -- information collection
IO information (accumulated since server startup)
-- Database IO analysis
WITH IOForDatabase AS (SELECT DB_NAME(vfs.database_id) AS DatabaseName, CASE WHEN smf.type = 1 THEN 'LOG_FILE' ELSE 'DATA_FILE' END AS DatabaseFile_Type, SUM(vfs.num_of_bytes_written) AS IO_Write, SUM(vfs.num_of_bytes_read) AS IO_Read, SUM(vfs.num_of
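The excerpt cuts off mid-query; a cleaned-up reconstruction of what it appears to be doing follows. The joins and the final SELECT are my guess at the usual sys.dm_io_virtual_file_stats pattern, not the original article's exact text:

-- Cumulative IO per database and file type since the last server start (reconstructed)
WITH IOForDatabase AS (
    SELECT DB_NAME(vfs.database_id) AS DatabaseName,
           CASE WHEN smf.type = 1 THEN 'LOG_FILE' ELSE 'DATA_FILE' END AS DatabaseFile_Type,
           SUM(vfs.num_of_bytes_written) AS IO_Write,
           SUM(vfs.num_of_bytes_read)    AS IO_Read
      FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
      JOIN sys.master_files AS smf
        ON smf.database_id = vfs.database_id AND smf.file_id = vfs.file_id
     GROUP BY DB_NAME(vfs.database_id),
              CASE WHEN smf.type = 1 THEN 'LOG_FILE' ELSE 'DATA_FILE' END
)
SELECT DatabaseName, DatabaseFile_Type, IO_Write, IO_Read,
       IO_Write + IO_Read AS Total_IO
  FROM IOForDatabase
 ORDER BY Total_IO DESC;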
A recent project involves queries against tables with a very large data volume. The tables hold roughly 700 million to 2 billion rows in total, the primary key has a globally unique index, and the partitioning strategy is hash partitioning plus range partitioning. Most of the time the query conditions hit more than a million records, and a single call returns only the first XX records. The following experience was summed up during the tuning process: (1) Minimize the range limits, even if the
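As a hedged illustration of keeping the range limits tight so the optimizer can prune partitions (the orders table, its columns, and the partition keys here are invented for the example):

-- A narrow, explicit range on the partition keys lets the optimizer touch few partitions
SELECT order_id, amount
  FROM orders                              -- assumed hash + range partitioned table
 WHERE customer_id = 10086                 -- hypothetical hash-partition key
   AND order_date >= DATE '2014-08-01'
   AND order_date <  DATE '2014-09-01'     -- tight range on the range-partition key
 ORDER BY order_date
 FETCH FIRST 100 ROWS ONLY;                -- standard SQL; use LIMIT / ROWNUM / TOP as appropriate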
DB2 tuning SQL Execution Analysis
I had always had a misconception: that it makes no difference whether or not you create indexes on the fields of a table. As a result, when the data volume reached 1 million rows in a real application, retrieval became noticeably slow and CPU usage during query execution was very high, which affected other jobs and caused a chain reaction. After discussing with some friends,
Objective
In the previous article we analyzed how the query optimizer works, including the detailed steps the optimizer goes through, the analysis of filter conditions, the optimization of index items, and other information.
In this article we analyze several key indicator values that can be observed while a statement runs. Analyzing these indicator values tells us how the statement behaves and how it can be optimized. Through this article we can learn the
One of the practical skills of SQL tuning that can greatly improve performance.
When doing SQL optimization we often run into the need to sort a large data set and then retrieve only the first part of the sorted result. In this situation, when we write the SQL statement according to t
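A minimal sketch of the pattern being described, i.e. sorting a large set and fetching only the first rows; the table and column names are invented and the syntax is standard SQL rather than the article's own example:

-- Top-N pattern: with an index on create_time, the database can often stop after
-- reading the first N index entries instead of sorting the whole table.
SELECT id, create_time
  FROM big_table
 ORDER BY create_time DESC
 FETCH FIRST 10 ROWS ONLY;   -- LIMIT 10 in MySQL; in older Oracle, wrap the ordered query and filter ROWNUM <= 10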
large print segments
18. COUNT(*) is slightly faster than COUNT(1); of course, if the count can be satisfied from an index, counting the indexed column is still the fastest.
19. Replace HAVING clauses with WHERE clauses wherever possible; HAVING filters the result set only after all records have been retrieved.
19. Avoid using LIKE '*'; avoid using IS NULL or IS NOT NULL.
20. Note:
A. Programmers should pay attention to the amount of data in each table.
B. During the coding process and the unit-test process, as far
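For the HAVING/WHERE point above, a small hedged example of moving a non-aggregate filter out of HAVING (the employees table and its columns are invented):

-- Slower: every department is grouped first, then the groups are filtered
SELECT dept_id, COUNT(*) AS cnt
  FROM employees
 GROUP BY dept_id
HAVING dept_id = 10;

-- Faster: rows are filtered before grouping; keep HAVING for aggregate conditions only
SELECT dept_id, COUNT(*) AS cnt
  FROM employees
 WHERE dept_id = 10
 GROUP BY dept_id;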
More efficient paging (1) -- WHERE ... IN: 5.093s, 5.328s, 5.14s, 5.406s, 5.297s
Efficient paging -- ROW_NUMBER() OVER(): 5.437s, 5.39s, 5.156s, 5.016s, 5.344s, 5.269s, 5.253s
As you can see, the "more efficient paging (1) -- WHERE ..." method is the fastest way to page when the number of rows involved in the query is small.
"Query rows 50000 to 50010 of the data"
First
Second
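The two paging patterns compared above might look roughly like the following; these are my own SQL Server sketches with a hypothetical dbo.Orders table and OrderID key, not the article's actual statements.

-- "WHERE ... IN" style paging: select the keys of the wanted page first, then fetch the rows
SELECT o.*
  FROM dbo.Orders o
 WHERE o.OrderID IN (SELECT TOP 10 OrderID
                       FROM dbo.Orders
                      WHERE OrderID NOT IN (SELECT TOP 50000 OrderID
                                              FROM dbo.Orders
                                             ORDER BY OrderID)
                      ORDER BY OrderID)
 ORDER BY o.OrderID;

-- ROW_NUMBER() OVER style paging: number the rows, keep only the wanted window
SELECT t.*
  FROM (SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS rn, *
          FROM dbo.Orders) AS t
 WHERE t.rn BETWEEN 50001 AND 50010
 ORDER BY t.rn;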
I. Analysis Phase
In general, there are many areas of concern in the system analysis phase: system functionality, availability, reliability, and security requirements tend to attract most of our attention. But we must note that performance is also a very important non-functional requirement, and we must determine, based on the characteristics of the system, its real-time requirements, response-time requirements, hardware configuration, and so on. It is best to have quantifiable indicators of va
name     rows      reserved    data        index_size   unused
TB_WCB   9439661   317208 KB   167168 KB   149872 KB    168 KB
*/
We found that the index size before compression was 329 MB, and after compression it is 149 MB, a compression ratio of about 45%. The effect is again very obvious.
Summary: by compressing tables and indexes we can reduce the disk space a table occupies. That is only part of the benefit; more importantly, reading the same amount of data now requires reading fewer data pages,
SELECT m.type,
       SUM(m.shared_memory_committed_kb) AS SharedMemoryCommittedKB,
       SUM(m.shared_memory_reserved_kb)  AS SharedMemoryReservedKB,
       SUM(m.multi_pages_kb)             AS MultiPagesKB,
       SUM(m.single_pages_kb)            AS SinglePagesKB,
       SUM(m.multi_pages_kb) + SUM(m.single_pages_kb) AS TotalPagesKB
  FROM sys.dm_os_memory_clerks AS m
 GROUP BY m.type
 ORDER BY TotalPagesKB DESC
Sorting by memory usage in this way finds the memory clerks that consume the most memory; analyze why they use it and resolve the issue.
Case
A customer's customer-service system had become slow to use,
compression is 167 MB; the size of the table after compression is only about 40% of the original, so the effect is obvious, and since most of the table's fields are just IDs, there are not that many repeated values. However, the size of the index has hardly changed, so we continue by compressing the index:
5. Compress the index
ALTER INDEX idx_tb_wcb_id ON TB_WCB REBUILD WITH (DATA_COMPRESSION = ROW)
6. Comparison after index compression
sp_spaceused 'ms_visit_qst_opt'
/*
name
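Before rebuilding, the expected saving can also be estimated; a hedged sketch using the article's TB_WCB table name (which may not match the real schema):

-- Estimate row-compression savings for the table before actually rebuilding it
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'TB_WCB',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'ROW';

-- After ALTER INDEX ... REBUILD WITH (DATA_COMPRESSION = ROW), verify the sizes again
EXEC sp_spaceused 'TB_WCB';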
, optimize according to EXPLAIN; b. When there is an ORDER BY a.col condition, all joins must be LEFT JOINs, an index must be created on every join field, and the WHERE condition may only contain conditions on table a; the data associated with the other tables is first merged into a large table a, and then the full set of a is filtered. If you cannot use LEFT JOIN throughout, you need to use STRAIGHT_JOIN and other techniques flexibly. Taking the time sequence as an example: 1) data warehousing accordi
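A hedged sketch of the join pattern described above (drive from table a, LEFT JOIN the rest, index every join column, filter only on a); the a and b tables and their columns are invented:

-- ORDER BY a.col with LEFT JOINs: MySQL can sort/scan table a first and probe the
-- joined tables by index, instead of materializing everything and then sorting.
SELECT a.id, a.col, b.name
  FROM a
  LEFT JOIN b ON b.a_id = a.id        -- b.a_id should be indexed
 WHERE a.status = 1                   -- filter only on the driving table a
 ORDER BY a.col
 LIMIT 100;

-- STRAIGHT_JOIN forces the written join order when the optimizer picks a bad one
SELECT STRAIGHT_JOIN a.id, b.name
  FROM a
  JOIN b ON b.a_id = a.id
 ORDER BY a.col
 LIMIT 100;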