Query the number of uncommitted transactions

Source: Internet
Author: User

To see how many open (uncommitted) transactions the current connection holds, query:

SELECT @@TRANCOUNT
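@@TRANCOUNT reports the nesting depth of open transactions on the current connection: each BEGIN TRAN increments it, COMMIT decrements it, and a plain ROLLBACK resets it to 0. A minimal sketch you can run in any query window:

```sql
-- @@TRANCOUNT is 0 on an idle connection
SELECT @@TRANCOUNT        -- 0

BEGIN TRAN                -- depth 1
BEGIN TRAN                -- depth 2 (nested)
SELECT @@TRANCOUNT        -- 2

COMMIT TRAN               -- an inner commit only decrements the counter
SELECT @@TRANCOUNT        -- 1

ROLLBACK TRAN             -- rolls back ALL the work and resets the counter
SELECT @@TRANCOUNT        -- 0
```

This asymmetry (inner COMMIT only decrements, ROLLBACK undoes everything) is exactly what causes the nested-procedure problem discussed later in this article.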

I was previously misled by the MSSQL System Administration Guide into assuming that a blocking lock always shows WAIT in the Status column. In fact, a process holding an uncommitted transaction does not stay in the WAIT state for long; even when nothing shows WAIT, sp_lock still reveals useful information. Locate the spid of the uncommitted transaction and execute

DBCC INPUTBUFFER (spid)

which shows the last SQL statement that process submitted. Careless use of nested transactions is a common source of uncommitted transactions. You should also check sp_who2, master..sysprocesses, and so on; only by weighing all of this evidence together can you tell where the problem lies.
I have run into this problem several times: the system gets slower and slower, sp_lock shows no rows with Status = WAIT, yet several spids remain active. Running sp_who and sp_who2, querying sysprocesses, and using DBCC INPUTBUFFER will usually identify the statements holding the locks. Find the problematic client machine (its hostname is in sysprocesses), then ask its user what they did recently; suppose the answer is operation xxx. You can then set up a test database, perform the same operations, and trace them to find the crux of the problem.
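The hunt described above can often be shortened with DBCC OPENTRAN, which reports the oldest active transaction in a database directly. A sketch of the diagnostic sequence (the spid value 53 is only an illustration):

```sql
USE northwind
DBCC OPENTRAN                  -- shows the spid and start time of the oldest active transaction

-- Suppose DBCC OPENTRAN (or sp_who2) points at spid 53:
SELECT spid, status, hostname, program_name, blocked
FROM master..sysprocesses
WHERE spid = 53                -- which machine and program own that connection

DBCC INPUTBUFFER (53)          -- the last statement that connection submitted
```

From there the hostname and program_name columns tell you which client to question, as in the story above.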

A frustrating case:

There are two stored procedures, A and B. B can be called on its own and contains a transaction; A also contains a transaction and calls B, like this:

Procedure A
Begin Tran ta
Exec B
Commit Tran ta
End

Procedure B
Begin Tran tb
Delete stb
If (@@rowcount = 0) Begin
    Rollback Tran tb
    Return
End
Commit Tran tb
End

Calling B on its own works fine. But when A calls B and the Rollback Tran tb statement runs, an error occurs: "Cannot roll back tb. No transaction or savepoint of that name was found."

The problem is rolling back inside a nested transaction: ROLLBACK TRAN can only name the outermost transaction or a savepoint, so rolling back the inner name tb fails. One workaround is to branch on @@trancount:

If (@@trancount = 1) Begin  -- 1 means tb is the outermost transaction; greater than 1 means we are nested
    Rollback Tran tb
End Else Begin
    Commit Tran tb          -- this commit only decrements @@trancount; if the outer transaction rolls back, this work is rolled back with it
End
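An alternative that avoids branching on @@trancount is a savepoint: SAVE TRAN creates a marker that ROLLBACK can legally target by name even inside a nested transaction, undoing only the inner procedure's work. A sketch of procedure B rewritten this way (the table name stb comes from the example above; the savepoint name tb_save is my own):

```sql
CREATE PROCEDURE B AS
BEGIN
    BEGIN TRAN tb
    SAVE TRAN tb_save          -- savepoint: a valid rollback target even when nested

    DELETE stb
    IF (@@ROWCOUNT = 0)
    BEGIN
        ROLLBACK TRAN tb_save  -- undoes only the work since the savepoint
        COMMIT TRAN tb         -- still needed: rolling back to a savepoint
                               -- does not decrement @@TRANCOUNT
        RETURN
    END

    COMMIT TRAN tb
END
```

With this shape, calling B alone and calling B from inside A's transaction both behave correctly, because the savepoint rollback never tries to terminate the outer transaction.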

1.1 Row-level locks

A row-level lock targets individual rows. For example, if process A executes the update

update student set name = 'xx' where id = 13

inside a transaction, the row-level lock locks the record with id = 13 in the student table and prevents other processes from operating on it; the lock is released only when the transaction completes. Take SQL Server as an example: open two query analyzers at the same time. In the first, write:

use northwind
select * from suppliers
begin transaction
update suppliers set CompanyName = 'xx' where SupplierID = 3
waitfor delay '00:00:20'
commit transaction

In the second, write:

select * from suppliers

Run the first query analyzer, then run the second. The first runs for about 20 seconds, and so does the second: the select in the second analyzer is waiting. If you run the second analyzer without running the first, the result appears almost immediately. Now change the second analyzer's code to

select * from suppliers where SupplierID <> 3

and repeat: the result appears immediately, with no wait. Change it to

select * from suppliers where SupplierID = 3

and repeat: you have to wait about 20 seconds before the result appears. This tells us clearly that a row-level lock locks the modified rows so that other processes cannot operate on them until the transaction completes, while rows that were not modified remain available to other processes without restriction.

1.2 Page-level locks

(1) First, understand the concept of a page.
When you create a table in SQL Server with large fields, you are often warned that the maximum row size allowed in a table is 8060 bytes. For example, creating this table in SQL Server 2000:

create table Test (Fld1 char(5000), Fld2 char(5000))

fails with the message: "Creation of table 'Test' failed because the row size would be 10021 (including internal overhead), which exceeds the maximum allowable table row size of 8060." Why is the row size limited to 8060? A page in SQL Server 2000 is 8 KB, made up of a 96-byte page header, 36 bytes of other information, and an 8060-byte data area where rows are stored. How many rows fit on one page depends on the row size: if the row size is 2000 bytes, for instance, four rows fit on a page. Note that the row size does not include text and image fields. The row size of the Customers table in the Northwind database is 298 bytes, so 27 rows fit on one page. To check the row-size calculation, run:

use northwind
alter table customers add xx char(8000)

The result carries a warning:

Warning: The table 'customers' has been created, but its maximum row size (8578) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.

Since the new row size is 8578, the original Customers row size must be 578; but adding up the declared sizes of its fields gives only 268. Where does the difference come from? Two places: the database stores each nchar or nvarchar character in two bytes, and the Customers columns are nchar and nvarchar, so the actual data size is 268 * 2 = 536. And the remaining 42 bytes? That is per-row overhead.
From the relationship between rows and pages we can see that the smaller the row, the more rows fit on a page. When querying, moving from one page to the next is much slower than reading within a single page, so we want to reduce cross-page reads. Compare these two statements:

create table x1 (a char(5000), b char(5000))
create table x2 (a varchar(5000), b char(5000))

The result of running them:

Server: Msg 1701, Level 16, State 2, Line 2
Creation of table 'x1' failed because the row size would be 10021 (including internal overhead), which exceeds the maximum allowable table row size of 8060.
Warning: The table 'x2' has been created, but its maximum row size (10023) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.

x1 fails to create while x2 succeeds with only a warning. Why? This is the difference between char and varchar. char is fixed-length, so x1's row size really is always over the limit; varchar is variable-length, so although x2's maximum row size is 10023, its actual row size depends on the value stored in column a. It follows that for fixed-length rows the number of rows per page is fixed at table-creation time, while for variable-length rows it depends on the data actually in the table. Of course, SQL Server considers other factors when choosing a page for a record, so this is not the whole story. Now let's look at the Customers table in the Northwind database.
The primary key of Customers is CustomerID, the primary key is a clustered index, and the order of the primary key represents the actual storage order of the rows. For example, insert a record into Customers:

insert into Customers (CustomerID, CompanyName) values ('CVCVC', 'FF')

then view the data with select * from Customers: the newly inserted record is automatically placed after the row whose CustomerID is 'CONSH', just as if you had run select * from Customers order by CustomerID. That is what a clustered index does: the physical storage order of the records matches the index order.

1.2 Page-level locks (2)

Back to the Customers table. Open three query analyzers. In the first, write:

begin transaction
update Customers with (PagLock) set Address = Address where CustomerID = 'ALFKI'
waitfor delay '00:00:30'
commit transaction

In the second:

select * from Customers where CustomerID = 'GREAL'

In the third:

select * from Customers where CustomerID = 'GROSR'

Run the first query analyzer, then the second, then the third. The first and second analyzers wait about 30 seconds, while the third shows its result immediately. I updated 'ALFKI' under a page lock, so the lock covers the page of data running from 'ALFKI' to 'GREAL'; this also tells us that the 34 rows between 'ALFKI' and 'GREAL' sit on the same page. Replace 'ALFKI' in the first analyzer with 'DRACD' and the results are the same. Change it to 'HANAR' and the outcome flips: the first and third analyzers wait while the second does not, because 'HANAR' and 'GROSR' are on the same page and 'GREAL' is on another. The concept of a page lock is simple enough; what is complex is the page itself, which is managed internally by SQL Server and invisible to the user.
Pages are abstract: for variable-length data types, page allocation changes as the data changes. See the database documentation for details.

1.3 Table locks

In the first query analyzer, write:

begin transaction tran1
update Customers with (TabLock) set City = City where CustomerID = 'ALFKI'
waitfor delay '00:00:20'
commit transaction tran1

In the second:

select * from Customers where CustomerID = 'WOLZA'

Run the first analyzer, then the second; both wait. Note that CustomerID = 'WOLZA' is the last record of the table, so the whole table really is locked.

1.4 Blocking

The earlier example of an uncommitted transaction making other transactions wait is blocking, and you can observe it with sp_lock. Open three query analyzers. In the first, write:

begin transaction tran1
update products set ProductName = ProductName + 'a' where ProductID = 1
waitfor delay '00:00:30'
commit transaction tran1

In the second:

select * from products

In the third:

sp_lock

Run the first, second, and third analyzers in sequence, then check the third for rows with Status = 'WAIT'. For example, I find this row:

53  6  117575457  1  KEY  (010086470766)  S  WAIT

Here ObjId = 117575457, so run:

use northwind
select object_name(117575457)

and you can see that the table involved is Products.

1.5 Deadlock

Open two query analyzers at the same time. In the first, write:

begin transaction tran2
update products set ProductName = ProductName + 'a' where ProductID = 2
waitfor delay '00:00:10'
update products set ProductName = ProductName + 'a' where ProductID = 1
commit transaction tran2

In the second:

begin transaction tran1
update products set ProductName = ProductName + 'a' where ProductID = 1
waitfor delay '00:00:10'
update products set ProductName = ProductName + 'a' where ProductID = 2
commit transaction tran1

Run the first, then the second, and wait. After ten-odd seconds one of them reports an error:

Server: Msg 1205, Level 13, State 50, Line 1

Press F1 in the query analyzer to open the help, select the Index tab, enter 1205, and read carefully how the documentation describes error 1205. Why the deadlock? Walk through the execution (abbreviating ProductID as id). First, analyzer 1 updates the record with id = 2 and locks it, so no other process can touch that record. Then analyzer 2 updates the record with id = 1 and locks it likewise. Next, analyzer 1 tries to update the record with id = 1; that record is locked by analyzer 2, so analyzer 1 must wait and is blocked. Symmetrically, analyzer 2 tries to update the record with id = 2, which is locked by analyzer 1, so analyzer 2 is blocked too. Each analyzer is waiting on the other: that is a deadlock, and neither can proceed. (The usual way to avoid it is to have every transaction touch shared rows in the same order.) To resolve the deadlock, SQL Server 2000 kills one of the processes.

1.6 Dirty reads

A dirty read is the ability to read data that another process has not yet committed.
Open two query analyzers. In the first, write:

begin transaction tran1
update products set ProductName = ProductName + 'C' where ProductID = 1
waitfor delay '00:00:15'
commit transaction tran1

In the second:

set transaction isolation level read uncommitted
select * from products where ProductID = 1

Run the first and then the second. The first waits, but the second does not, because the second sets its isolation level to read uncommitted, which allows reading data that other transactions have not committed. Look at the second analyzer's result: the ProductName column already shows the modification. If you change the second analyzer's code to

set transaction isolation level read committed
select * from products where ProductID = 1

and repeat, the second analyzer now waits as well: under read committed, only committed data can be read and uncommitted changes are invisible, which prevents dirty reads. SQL Server 2000 uses read committed by default. A dirty read, then, means reading data that has been modified but not yet committed.

1.7 Non-repeatable reads

A non-repeatable read occurs when two identical queries inside one transaction return different results, because another transaction modified the data in between. Open two query analyzers. In the first, write:

use northwind
set transaction isolation level read committed
begin transaction tran1
select * from region where RegionID = 3
waitfor delay '00:00:10'
select * from region where RegionID = 3
commit transaction tran1

In the second:

use northwind
update region set RegionDescription = 'xx' where RegionID = 3

Run the first and then the second analyzer.
The first analyzer waits 10 seconds; the second returns immediately. The first analyzer's output is:

3  Northern
3  xx

The two reads return different values. Now modify the first analyzer's code:

use northwind
set transaction isolation level repeatable read
begin transaction tran1
select * from region where RegionID = 3
waitfor delay '00:00:10'
select * from region where RegionID = 3
commit transaction tran1

and change the second to:

use northwind
update region set RegionDescription = 'yy' where RegionID = 3

Run the first and then the second: now both wait. The first analyzer's output is:

3  xx
3  xx

Compare the two runs. The first time the isolation level was read committed; the second time it was repeatable read. Repeatable read locks the data it reads, while read committed locks only the data it modifies: repeatable read locks the data touched by insert, update, delete, and select, whereas read committed locks only insert, update, and delete and places no lock on data read by a select.

1.8 Phantom reads

Open two query analyzers. In the first, write:

use northwind
set transaction isolation level repeatable read
begin transaction tran1
select * from region
waitfor delay '00:00:10'
select * from region
commit transaction tran1

In the second:

use northwind
insert into region values (5, 'xx')

Run the first and then the second analyzer. The first waits 10 seconds; the second returns immediately. The first analyzer's output is:

1  Eastern
2  Western
3  Northern
4  Southern

1  Eastern
2  Western
3  Northern
4  Southern
5  xx

Compare the two result sets: the second query returns one row more. That extra row is a phantom.
Now modify the first analyzer's code:

use northwind
set transaction isolation level serializable
begin transaction tran1
select * from region
waitfor delay '00:00:10'
select * from region
commit transaction tran1

and the second analyzer's code:

use northwind
insert into region values (6, 'yy')

Run the first and then the second: both analyzers wait, and the first's two queries return the same rows. Dirty reads, non-repeatable reads, and phantom reads are best understood by working out how the database avoids them at each level; the bare concepts are abstract and obscure, and each usually describes only one aspect. In terms of locking: read uncommitted sets no locks; read committed locks the data touched by update, insert, and delete; repeatable read locks the data touched by update, insert, delete, and select; serializable locks a range. The difference between repeatable read and serializable is this: under repeatable read, other transactions cannot update or delete the locked data but can still insert; under serializable, other transactions can neither update, delete, nor insert. Repeatable read and serializable lock the selected rows as a range, determined mainly by the where clause and realized through row, page, and table locks. For update, insert, and delete the lock range is obvious, and under repeatable read the select lock range is also clear; but serializable additionally blocks inserts by other transactions, so when can another transaction insert and when can it not? How is that range determined? Locks are normally thought of as applying to existing data, yet the rows an insert would add are not in the table at all. How can they be locked?
For example,

set transaction isolation level serializable
select * from region

locks all rows in the table: if there are four rows to begin with, those four rows are locked. What about inserted rows? Here a range comes into play, namely the range of the select, which determines whether an inserted row falls inside the locked range. For example:

use northwind
set transaction isolation level serializable
begin transaction tran1
select * from region where RegionID > 6
waitfor delay '00:00:10'
commit transaction tran1

If another transaction inserts a record with

insert into region values (5, 'yy')

the inserted RegionID is 5, and 5 < 6 does not satisfy the locking condition, so the insert is allowed. If instead another transaction runs

insert into region values (7, 'yy')

then 7 > 6 satisfies the condition and the insert is blocked; which records can be inserted becomes clear only after the lock is released. How the lock works is internal to SQL Server and not visible; in my tests some records outside the condition were also blocked, but records that satisfy the condition are certainly blocked.

1.9 Isolation levels

As mentioned, there are four isolation levels:

read uncommitted
read committed
repeatable read
serializable

read committed is the default. The setting applies only to the current connection: if you set your session to serializable, it affects only you, not other users, and it remains in effect until the session ends. Locks exist mainly to guarantee data consistency. read uncommitted places no locks on the data it reads, so it runs fastest and causes neither blocking nor deadlocks; but because of the consistency problems it is rarely used, unless consistency is ensured by other techniques, such as adding a rowversion column.
Still, for complex operations we choose transactional safety. I have learned some lessons about transactions. For example, when saving data from VB I wrote roughly:

begin transaction
declare @id as int
insert into A (...) values (...)
select @id = max(id) from A
insert into B (AId, ...) values (@id, ...)
commit transaction

The id field of table A is an automatic number. I first insert a record into A, then insert the new record's id into B, so I must guarantee that @id is the id generated by the preceding insert. During testing, however, with many computers saving at the same time, the ids came out inconsistent. Why, when I had wrapped everything in a transaction? To illustrate, open two query analyzers. In the first, write:

use northwind
set transaction isolation level serializable
begin transaction tran1
insert into region values (10, 'a')
waitfor delay '00:00:10'
select max(RegionID) from region
commit transaction tran1

In the second:

insert into region values (11, 'a')

Run the first and then the second analyzer. I expected the first to query 10, but the result is 11. The only remedy is to specify a table lock:

insert into region with (TabLock) values (10, 'a')

Only with the table lock specified, so that no other transaction can operate on the table, is consistency guaranteed. Of course, for an automatic number you can use @@identity to obtain the generated id, for example:

insert into orders (CustomerID) values ('ALFKI')
print @@identity

This trick may not work through other database drivers. To close, a summary table of the isolation levels (source: SQL Server 7.0 System Administration Guide):

Isolation level     Blocking risk   Prevents dirty reads   Prevents non-repeatable reads   Prevents phantom reads
read uncommitted    low             No                     No                              No
read committed      low             Yes                    No                              No
repeatable read     high            Yes                    Yes                             No
serializable        maximum         Yes                    Yes                             Yes
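For identity columns there is a safer variant of the @@identity trick shown above: SCOPE_IDENTITY() (available from SQL Server 2000) returns the last identity value generated in the current scope, so it is not disturbed by other connections or by inserts performed inside triggers on the same table. A sketch using the Northwind Orders example from the text (the Order Details column list is Northwind's):

```sql
DECLARE @id int

INSERT INTO orders (CustomerID) VALUES ('ALFKI')
SET @id = SCOPE_IDENTITY()   -- identity generated by THIS statement in THIS scope,
                             -- unlike @@identity, which a trigger's insert can change

INSERT INTO [Order Details] (OrderID, ProductID, UnitPrice, Quantity, Discount)
VALUES (@id, 1, 18.00, 1, 0)
```

Because SCOPE_IDENTITY() reads the value generated on this connection rather than scanning the table, no table lock is needed, unlike the select max(id) approach.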
