With 10 threads running concurrently, duplicate inserts of the same data kept occurring, even though the insert was written as:

INSERT INTO TableName (t1, t2, t3)
SELECT @t1, @t2, @t3
WHERE NOT EXISTS (SELECT ID FROM TableName
    WHERE t1 = @t1 AND t2 = @t2 AND t3 = @t3)

This was not effective under high concurrency. (The statement lives inside a stored procedure; checking on the preceding line whether a record already exists before writing was tried too, and was also ineffective.)
So this situation has to be solved at the foundation of the database itself, namely with a constraint; otherwise the database's atomic operations are not fine-grained enough for what I need.
Few people add constraints from the command line, and hunting down the SQL statement online every time is exhausting, so I'm writing it down here. What's needed is a composite unique constraint across the fields:

ALTER TABLE TableName
    ADD CONSTRAINT NewUniqueName UNIQUE (t1, t2, t3)
This guarantees that the combination of the three fields is never duplicated.
Making adjustments to a production system's database is really nerve-wracking...
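With the constraint in place, a concurrent duplicate insert fails with a duplicate-key error instead of silently writing a second row. A minimal sketch of handling that inside the stored procedure (the TRY/CATCH pattern is my addition, not from the original post; TRY/CATCH is available from SQL Server 2005 on, and the table and fields are the hypothetical ones above):

```sql
BEGIN TRY
    INSERT INTO TableName (t1, t2, t3) VALUES (@t1, @t2, @t3);
END TRY
BEGIN CATCH
    -- 2601/2627 are the duplicate-key errors raised by unique
    -- indexes/constraints; treat those as "row already exists"
    -- and re-raise anything else to the caller
    IF ERROR_NUMBER() NOT IN (2601, 2627)
        RAISERROR ('insert failed', 16, 1);
END CATCH
```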
For duplicate reads I have no good workaround yet. The requirement: read a batch of rows from the database and, at the same time, set a flag on those rows to 1 so that other processes do not read the same rows again. But under multithreading, even using the newest SQL Server 2005 feature, UPDATE ... OUTPUT ... INTO a table variable:

UPDATE TableName SET
    OnCheck = 1, LastLockTime = GETDATE(), LastCheckTime = GETDATE()
OUTPUT deleted.ID
INTO @newtb
WHERE ID IN
    (SELECT ID FROM TableName WHERE OnCheck = 0)

repeated reads still occur. Isn't there a better way to do this?
If you have a better way, please post it.
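One pattern often suggested for this kind of work-queue read (my assumption, not something from the original post) is to add locking hints so that concurrent readers skip rows another reader has already claimed; the TOP (10) batch size is purely illustrative:

```sql
-- UPDLOCK keeps the selected rows locked until the transaction ends;
-- READPAST makes other readers skip locked rows instead of re-reading them
UPDATE TableName SET
    OnCheck = 1, LastLockTime = GETDATE(), LastCheckTime = GETDATE()
OUTPUT deleted.ID INTO @newtb
WHERE ID IN
    (SELECT TOP (10) ID FROM TableName WITH (UPDLOCK, READPAST)
     WHERE OnCheck = 0);
```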
1. SQL Server 2005's performance tools include SQL Server Profiler and the Database Engine Tuning Advisor. Excellent tools; you must become skilled with them.
2. Turn on "Display Estimated Execution Plan" when running a query, and analyze what happens at each step.
3. The basic practice: when CPU usage is high, run SQL Server Profiler and save the trace data to a file, then open the Database Engine Tuning Advisor and load that file for analysis; SQL Server will produce index tuning recommendations. Adopt the index optimization part of its suggestions.
4. But the above often doesn't cover your needs. In a recent round of optimizing high CPU usage it produced none of the recommendations I needed, especially for statements inside stored procedures that span multiple tables. At that point you need intermediate steps to locate the statements with high CPU usage.
5. Run SQL Server Profiler again, this time saving the results to a new table in some database (give it a name; the system creates the table itself, e.g. Test). Let it run for a while, then query:

SELECT TOP 100 * FROM Test
WHERE TextData IS NOT NULL ORDER BY Duration DESC

This picks out the longest-running statements; the ORDER BY column can be swapped for CPU or Reads to pick out the statements that consume the most CPU or read the most data.
After locating the offending statements, analyze them in detail. For some, the problem is obvious from the execution plan. The common causes are a missing index or an unreasonably built one; these show up as a Table Scan or Index Scan. Whenever you see a Scan, a full-table or full-index scan is being performed, which inevitably means excessive reads. What we want to see is a Seek or Key Lookup.
6. Reading an execution plan takes care. Beginners pay too much attention to the cost percentages displayed, which can in fact be misleading. In an actual optimization pass I found an Index Scan whose displayed cost was only 25% while a Key Lookup cost 50%, yet the Key Lookup needed no optimization: its seek predicate was id = xxx, a lookup on the primary key. Careful analysis showed the Key Lookup's CPU cost was 0.00015 and its I/O cost 0.0013, while the Index Scan's CPU cost was 1.4xxxx and its I/O cost far larger than the Key Lookup's. So optimization priority should go to the Index Scan.
7. How to optimize one part of a complex SQL statement. SQL Server intelligently reorganizes the WHERE clause, trying to match an index. Select the plan step that needs optimizing, open its Properties, and copy the "Predicate" section: this is the decomposed piece of the WHERE clause. Then, in a query window, run SELECT * FROM table WHERE followed by the predicate you just copied. That is the part needing optimization, and at this point most people should be able to build an index by hand, because this WHERE clause is far simpler than the original one. (In my project the original SELECT's WHERE clause had 10 condition combinations over 6 fields; the extracted part to optimize had 4 conditions over 3 fields.) Once the new index was created, CPU usage dropped immediately, and since the new index involves a field that is rarely updated, frequent reads and writes do not hurt update efficiency.
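As a concrete illustration of that workflow (table, columns, and predicate here are all hypothetical, not from the original project):

```sql
-- Suppose the "Predicate" copied from the plan step's Properties was:
--   [Orders].[Status] = 0 AND [Orders].[Region] = @r
-- Re-run just that simplified part on its own to study it:
SELECT * FROM Orders WHERE Status = 0 AND Region = @r;
-- then build an index on exactly those columns by hand:
CREATE INDEX IX_Orders_Status_Region ON Orders (Status, Region);
```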
8. The above is the optimization approach. Finally, some issues to watch for during optimization or system design.
a. Try to avoid fuzzy queries of the form SELECT * FROM xxx WHERE abc LIKE '%xxx': with the % at the front, the index cannot be used, which inevitably causes a full scan. Find an alternative, or use a preceding condition to cut down the number of rows before the LIKE is applied.
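A minimal illustration of why the leading % matters (the Articles table and its Title index are hypothetical):

```sql
-- assuming an index exists on Articles(Title):
SELECT * FROM Articles WHERE Title LIKE '%sql';  -- leading %: index unusable, full scan
SELECT * FROM Articles WHERE Title LIKE 'sql%';  -- no leading %: an index seek is possible
```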
b. Try to avoid taking random records from large tables with SELECT TOP n * FROM xxx WHERE ... ORDER BY NEWID(). The NEWID() approach reads the full data set and then sorts it, consuming a lot of CPU and read operations. Consider the RAND() function instead (I'm still working on this): it works well for whole-table sampling, e.g. id >= (SELECT MAX(id) FROM table) * RAND(), but taking random records from a subset of the data needs more thought.
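A sketch of that RAND() alternative (my phrasing of the idea above; it assumes integer IDs with few gaps, and the row distribution will be uneven where gaps exist):

```sql
-- RAND() is evaluated once per query, so this picks one threshold
-- and seeks to the first row at or above it
SELECT TOP 1 * FROM TableName
WHERE ID >= (SELECT MAX(ID) FROM TableName) * RAND()
ORDER BY ID;
```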
c. In the SQL Server Profiler trace you will see Audit Logout events consuming a lot of CPU and reads/writes. Some sources say this event simply reports the totals for all SQL statements executed over the life of a connection, so it is nothing to worry about. That does indeed seem to be the case: the CPU and I/O consumption of many Audit Logout rows basically matched the statements I had already optimized. This is why, in point 5, the query carries the TextData IS NOT NULL condition: it filters the Audit Logout rows out.
d. An OR across two different fields causes a full-table scan, for example WHERE M = 1 OR N = 1. A combined index on (M, N) still produces a Scan; the workaround is to index M and N separately. Testing on a table of 120,000 rows, with the index built the wrong way the I/O cost was 10.xxx; with the separate indexes it dropped to 0.003 in every case, a huge contrast. Extra indexes can cause performance problems for inserts, but most bottlenecks are in the read side, the SELECTs.
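A sketch of point d (the table T and its fields are hypothetical):

```sql
-- separate single-column indexes let the optimizer satisfy the OR
-- with two index seeks instead of one full-table scan
CREATE INDEX IX_T_M ON T (M);
CREATE INDEX IX_T_N ON T (N);
SELECT * FROM T WHERE M = 1 OR N = 1;
```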
e. Index Seek versus Index Scan: we want the former. The usual cause of the latter is searching on a field that is not the leading column of a composite index; for example, with an index built on the two fields (a, b), searching on b alone causes an Index Scan. Creating a separate index on b turns it into an Index Seek.
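A sketch of point e (the table T and fields a, b are hypothetical):

```sql
CREATE INDEX IX_T_a_b ON T (a, b);
SELECT * FROM T WHERE b = 5;   -- b is not the leading column: Index Scan

CREATE INDEX IX_T_b ON T (b);
SELECT * FROM T WHERE b = 5;   -- now an Index Seek is possible
```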
f. Indexing is not recommended for small tables, especially those with only a few hundred rows; indexes only become effective once the data reaches the tens of thousands of rows.
Database optimization is a very deep subject. It should already be considered at database design time, especially the last two points, a and b: avoid them as early as possible, at the design stage.
Using composite-field unique constraints to avoid duplicate-value writes in a high-concurrency database + SQL Server SQL statement optimization summary (repost)