The basic theory of locking is the same across all major databases, but the concrete implementations differ. SQL Server emphasizes system-managed locking: when a user issues a SQL request, the system analyzes the request and automatically applies the appropriate locks to the database, balancing the locking requirements against system performance, and it often optimizes lock handling dynamically while the operation runs. For most users, this automatic lock-management mechanism meets the basic requirements; but if you have special requirements for data security, database integrity, or consistency, you need to understand SQL Server's locking mechanism and master the methods for locking the database.
Database Locks
Locks are a very important concept in databases; they are mainly used to guarantee database integrity and consistency in a multi-user environment. Data inconsistency can arise when multiple users manipulate data in the same database at the same time: if there were no locks and multiple users accessed a database simultaneously, problems could occur whenever their transactions used the same data at the same time. These problems include lost updates, dirty reads, non-repeatable reads, and phantom reads:
1. A lost update occurs when two or more transactions select the same row and then update it based on the value originally selected. Each transaction is unaware of the others, so the last update overwrites the updates made by the other transactions, and data is lost. For example, two editors make electronic copies of the same document. Each editor changes their copy independently and then saves the changed copy, overwriting the original document. The editor who saves last overwrites the changes made by the first editor. This problem can be avoided if the second editor cannot make changes until the first editor has finished.
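The interleaving described above can be sketched with a minimal, hypothetical simulation (no real database involved): two "transactions" read the same row before either writes, and each writes back a value computed from its stale read.

```python
# Hypothetical in-memory "row"; the values are illustrative only.
row = {"balance": 100}

# Both transactions read before either writes (the problematic schedule).
t1_read = row["balance"]        # transaction T1 reads 100
t2_read = row["balance"]        # transaction T2 also reads 100

row["balance"] = t1_read + 50   # T1 writes 150
row["balance"] = t2_read - 30   # T2 writes 70, silently discarding T1's +50

print(row["balance"])           # 70 -- T1's update is lost
```

With an exclusive lock on the row, T2 would block until T1 committed, then re-read 150 and write 120, so neither update would be lost.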
2. A dirty read occurs when one transaction accesses data that another transaction has modified but not yet committed. Because the data is not yet committed, the value the second transaction reads is "dirty," and operations based on it may be incorrect. For example, one editor is changing an electronic document. During the change, another editor copies the document (which contains all the changes made so far) and distributes it to the intended audience. The first editor then decides that the current changes are wrong, deletes the edits, and saves the document. The distributed copy contains edits that no longer exist and that should be treated as if they had never existed. This problem can be avoided if no one can read the changed document until the first editor has saved the final version.
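The scenario above can likewise be sketched as a minimal, hypothetical simulation: T2 reads a value that T1 has changed but not yet committed, and T1 then rolls back.

```python
# Hypothetical committed state of a row and T1's uncommitted working copy.
committed = {"price": 10}          # last committed value
working = dict(committed)          # T1's private, uncommitted copy

working["price"] = 99              # T1 updates the row (not yet committed)
dirty_value = working["price"]     # T2 reads the uncommitted value: a dirty read

working = dict(committed)          # T1 rolls back; the 99 never officially existed

print(dirty_value)                 # 99, a value the database never committed
print(committed["price"])          # 10, the only value that ever really existed
```

Under SQL Server's default READ COMMITTED isolation level, T2's read would be blocked by T1's exclusive lock until T1 committed or rolled back, so the dirty value could never be observed.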