Let's start with ODS (Open Data Services). It is mainly responsible for managing connections and SQL Server's thread services, and for sending result sets, status values, and messages to the client. The result set is transmitted using the Tabular Data Stream (TDS) protocol. In addition to the data itself, the result set contains descriptive information such as column names, types, and communication tokens. This is why I think that when a stored procedure returns only a single record, using an output parameter can reduce the number of bytes sent over the network: it omits all of that descriptive metadata.
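As a minimal sketch of this idea, the procedure below (the name `dbo.GetOrderStatus` is hypothetical, chosen for illustration) returns a single value through an OUTPUT parameter instead of a one-row result set, so no TDS column metadata travels with it:

```sql
-- Hypothetical procedure: one value via OUTPUT parameter, no result set.
CREATE PROCEDURE dbo.GetOrderStatus
    @SalesOrderID INT,
    @Status TINYINT OUTPUT
AS
BEGIN
    SELECT @Status = [Status]
    FROM Sales.SalesOrderHeader
    WHERE SalesOrderID = @SalesOrderID;
END;
GO

-- Caller: the value comes back in the parameter, not as a TDS result set.
DECLARE @s TINYINT;
EXEC dbo.GetOrderStatus @SalesOrderID = 43860, @Status = @s OUTPUT;
```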
The network packet size option is a server configuration option; the default value is 4096 bytes. This size determines the size of each output buffer. For each client connection, SQL Server creates two input buffers (one for receiving commands and one for the cancel command; their size is undocumented and cannot be configured) and one output buffer. A result set is sent to the client when the output buffer is full or when the batch ends (all the commands before a GO form one batch).
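The option can be inspected and changed with sp_configure; a quick sketch (the 8192 value is just an example, not a recommendation):

```sql
-- View the current 'network packet size (B)' setting.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'network packet size (B)';   -- default configured value: 4096

-- To change it (example value only):
-- EXEC sp_configure 'network packet size (B)', 8192;
-- RECONFIGURE;
```

Note that clients can also request a different packet size at connection time, so the server setting is only a default.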
Therefore, when your query returns a large number of records, the output buffer must make many round trips to the client: each time the buffer fills, it can be reused only after its contents have been sent. On a slow network these round trips take a long time. During this waiting period, the SELECT command holds shared (S) locks on the selected records. If the locks stay at the row level, the impact may be relatively small; however, SQL Server may escalate the row locks to page-level shared locks when too many records are selected. In the worst case, the SELECT has no usable index, the table is scanned, and the entire table is held under a shared lock. In any case, intent shared (IS) locks exist on the higher-level objects. Now, if another session wants to modify any row in the table, it must first obtain an exclusive (X) lock. Because S and IS are incompatible with X, the modification is blocked, and it can proceed only after all the results have been sent to the client.
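The blocking can be reproduced with two connections. This is a simulation: under the default READ COMMITTED level, S locks are normally released as rows are read, so HOLDLOCK is used here only to approximate the locks held while a large result set is still being transferred:

```sql
-- Session 1: hold S locks for the duration of the transaction
-- (stands in for a slow transfer that keeps locks alive).
BEGIN TRAN;
SELECT * FROM Sales.SalesOrderHeader WITH (HOLDLOCK);
-- ... transaction left open ...

-- Session 2 (a separate connection): this UPDATE needs an X lock on the
-- row, which conflicts with the S locks above, so it blocks until
-- session 1 commits or rolls back.
UPDATE Sales.SalesOrderHeader
SET ShipDate = GETDATE()
WHERE SalesOrderID = 43860;
```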
Someone may ask: is SQL Server smart enough to release the shared lock on a page once it has finished scanning that page? I'm not sure about this either. But even if it is that smart, when the page-level shared lock is released during the scan, an intent shared lock still remains; the intent lock mainly lets SQL Server quickly determine whether a lock request conflicts. To observe the locks, run the following script:
USE AdventureWorks;
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
SELECT * FROM Sales.SalesOrderHeader WITH (ROWLOCK) WHERE SalesOrderID = 43860;
SELECT * FROM sys.dm_tran_locks WHERE request_session_id = @@SPID;
ROLLBACK TRAN;
DBCC TRACEON (3604);
GO
-- Inspect the page referenced by the PAGE lock above; the page number here
-- (1:8653) is from one particular run, and the key value shown for KEY locks
-- is the result of hashing the index key.
DBCC PAGE (AdventureWorks, 1, 8653, 3);
GO
From the results above, we can see that in addition to the KEY lock and PAGE lock there is also an OBJECT-type lock; from resource_associated_entity_id we can tell it corresponds to the SalesOrderHeader table. The DATABASE-type lock is there because you opened a connection: each connection holds a database-level shared lock, whose purpose is to prevent the database from being dropped or restored while connections exist.
Some may wonder how a subsequent update operation can ever obtain its exclusive lock if the table already holds an IS lock. Internally, SQL Server queues these lock requests to prevent the so-called "lock starvation" phenomenon: within a certain time window, whether a lock request conflicts is determined according to its position in the queue.
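The queue can be observed directly: in sys.dm_tran_locks, granted locks show request_status = 'GRANT', while requests waiting behind them show 'WAIT' or 'CONVERT'. A quick look, assuming the AdventureWorks database from the earlier script:

```sql
-- Granted locks vs. requests waiting in the lock queue.
SELECT resource_type,
       request_mode,
       request_status,     -- GRANT = held; WAIT / CONVERT = queued
       request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('AdventureWorks')
ORDER BY request_status, request_session_id;
```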
Therefore, even if SQL Server is smart enough to release page locks as it goes, the object's IS lock still exists, so modifications may still be blocked.
If your batch contains many SQL statements, and the result set of the first statement is small and does not fill the output buffer, it is sent to the client only after the whole batch finishes executing or the buffer fills. So should we send each statement as its own batch, so that each result set gets back to the client as soon as possible? I think this is like going to the supermarket: if I want to buy a dress and a TV, it makes no sense to carry the clothes home first and then come back for the TV; the total time and effort would exceed a single trip. So the right value for the network packet size option is hard to estimate. The reliable approach is to return as few records as possible from your queries, so that they affect other operations less.
Therefore, even if your query is fast enough that shared locks are not held on the table for long, a large result set still takes time to transfer, and that transfer will still affect concurrent operations.