Supported Locking Mechanisms
Allpages locking
"Allpages locking" is a new name for the type of locking Adaptive Server Enterprise has always supported. This scheme has the following characteristics: every page accessed, whether a data page or an index page, is locked at the page level;
when a page of any type is changed in any way, an exclusive lock is taken on it, and that lock is held until the transaction ends;
pages currently being read are protected by shared locks, which are released as soon as the next required page has been acquired (under ANSI level-3 isolation, however, shared locks are also held until the transaction ends). A page-level timestamp is used to determine whether a page has changed, and the details of each change are recorded in the transaction log so they can be rolled forward or rolled back during system recovery.
This locking scheme often delivers the highest performance, especially when its behavior is taken into account during application design. For some applications, however, certain activity patterns make whole-page locking a significant drag on system performance. This is especially true of applications designed for generic environments such as file systems, or for other database vendors' products that support finer-grained locking.
There is also a class of problems that is harder to work around and usually requires Sybase-specific solutions. For commercial application software vendors this is a challenge, because they must maintain a single code base across all the database platforms they support, which is a considerable amount of work. The basic problems in this area are as follows:
contention on the last leaf page of a nonclustered index whose keys are inserted in ascending order; possible deadlocks between inserts into and queries against a nonclustered index; possible deadlocks between updates made through a clustered-index value and queries that access the table through a nonclustered index; contention on the last row of a table with no index (although partitioning can be used to address this); potential contention on tables with few rows (although fill factors [fillfactor] and the maximum number of rows per page [max_rows_per_page] can be used to address this); and the fact that a page lock always covers every row on the page, so if a table is small enough to reside on a single page, accessing a single row effectively locks the entire table.
Data-only locking
Data-only locking was introduced to solve the main problems described in the previous section (the remaining issues are addressed by other features). It supports two different modes of operation: datarow locking and datapage locking. In both modes, the locks taken differ from the previous scheme as follows. Data-only locking has these characteristics:
Transactional locks are no longer taken on index pages. Instead, a mechanism called a latch is used. Latches are a synchronization primitive similar to spinlocks; they are not tied to transactions and are held only for a very short time, typically the time it takes a task to physically change a few bytes on a 2K page in the shared memory region. As soon as the change is complete, the task releases the latch. A latch may momentarily block other tasks, but because latches never cause a server task to be context-switched out, can never be involved in a deadlock, and are held only briefly, they cannot generate significant contention.
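The contrast between a latch and a transactional lock can be sketched in a minimal Python analogy. This is not ASE's implementation: the latch is modeled as a plain mutex held only for the duration of the byte change, whereas a transactional lock would be held until commit.

```python
# Minimal analogy: a "latch" guards a physical page change for only the
# duration of the byte update, never across a wait or to end of transaction.
import threading

class Page:
    def __init__(self, size=2048):          # 2K page, as in the text
        self.latch = threading.Lock()       # short-duration, non-transactional
        self.data = bytearray(size)

    def write_bytes(self, offset, payload):
        # Acquire the latch, change a few bytes, release immediately.
        with self.latch:
            self.data[offset:offset + len(payload)] = payload
        # The latch is already free here: it is never held while waiting
        # on anything else, so it cannot participate in a deadlock.

page = Page()
page.write_bytes(100, b"abc")
```

The key design point the sketch illustrates is scope: the latch protects a physical modification, not a logical transaction, so its hold time is bounded by a few byte writes.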
A single data row is locked by its RID (Row ID, a combination of the logical page number and the row number within that page). RIDs are fixed: a forwarding mechanism allows a row to be moved without its RID changing, so when a row grows larger than the space available on its page, nonclustered indexes do not need to be updated as a result.
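How fixed RIDs and forwarding addresses keep nonclustered indexes stable can be illustrated with a small Python sketch. The Table class and its methods are invented for illustration and are not ASE internals.

```python
# Hypothetical sketch of fixed RIDs with row forwarding. A RID is
# (logical page number, row number). When a row outgrows its page, the
# row body moves and a forwarding address stays at the old slot, so
# index entries holding the original RID remain valid.

class Table:
    def __init__(self):
        self.slots = {}       # rid -> ("row", data) or ("fwd", new_rid)
        self.io_count = 0     # counts simulated page accesses (I/Os)

    def insert(self, rid, data):
        self.slots[rid] = ("row", data)

    def grow(self, rid, new_rid, data):
        # Row no longer fits: move the body, leave a forwarding address.
        self.slots[new_rid] = ("row", data)
        self.slots[rid] = ("fwd", new_rid)

    def fetch(self, rid):
        self.io_count += 1
        kind, value = self.slots[rid]
        if kind == "fwd":          # one extra I/O to follow the pointer
            self.io_count += 1
            kind, value = self.slots[value]
        return value

t = Table()
t.insert((123, 1), "short row")
t.grow((123, 1), (900, 1), "a much longer row body")
# Index entries still use RID (123, 1); a fetch follows the forward pointer.
```

Note the trade-off the sketch makes visible: the index never changes, but reading a forwarded row costs one additional access.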
Inserts at the end of a table can now proceed without contention; this capability has been added.
Range locking, next-key locking, and infinity locks are supported, to lock logical ranges of key values.
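The idea behind next-key and infinity locks can be illustrated with a short sketch. The names `range_locks` and `INFINITY` are invented for illustration: a serializable scan of a key range locks every key in the range plus the first key above it, or an "infinity" marker when the range runs past the last key, so no phantom row can be inserted into the scanned range.

```python
# Illustrative sketch of next-key range locking (not ASE internals).
INFINITY = object()   # stands in for the open upper end of the index

def range_locks(sorted_keys, lo, hi):
    """Return the keys a serializable scan of [lo, hi] would lock."""
    locked = [k for k in sorted_keys if lo <= k <= hi]
    above = [k for k in sorted_keys if k > hi]
    locked.append(above[0] if above else INFINITY)  # next-key or infinity lock
    return locked

keys = [10, 20, 30, 40]
print(range_locks(keys, 15, 30))   # locks 20 and 30, plus next key 40
```

Locking the next key above the range is what prevents a concurrent insert of, say, key 35 from creating a phantom in the scanned range.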
Page splits are handled as top actions that are committed immediately in system transactions, so the pages involved in a split are locked for a much shorter time.
To support these changes, a series of modifications to the stored table structures was required. The main effects of these modifications are as follows:
Clustered indexes are now stored as "placement indexes," the approach used by IBM's DB2 product, with which many people are familiar. Such an index is structured like a nonclustered index and requires a similar amount of space. With this structure, data is stored in key order across the data pages when it is initially loaded; when rows are inserted later, they are placed as close as possible to the correct logical page, but without page splits, and key order within a data page is not maintained as new rows are added. Because of this structure, each clustered-index traversal costs one additional I/O operation.
A row-offset table has been added to index pages as well as data pages. Together with the new index-row storage format, this can reduce the number of index entries stored on each index page.
RIDs are fixed. When a row moves, a forwarding address pointing to the row's new location is left in its original position. Because of this, the move does not require any change to nonclustered indexes, but accessing such a row requires one extra I/O operation to follow the forwarding address.
In general, indexes are smaller and shallower, because duplicate keys are compressed within each leaf-level page. For example, if the value "GREEN" appears in the rows whose RIDs are 123-1, 234-2, and 345-3, the leaf page stores "GREEN", 123-1, 234-2, 345-3 rather than storing "GREEN" three times: each value is stored only once per index page.
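The duplicate-key compression described above can be sketched in a few lines of Python. `compress_leaf` is a hypothetical helper, not an ASE routine:

```python
# Sketch of per-leaf-page duplicate-key compression: each key value is
# stored once, followed by all of its RIDs, instead of being repeated
# for every row.
def compress_leaf(entries):
    """entries: (key, rid) pairs in key order -> {key: [rids]}."""
    page = {}
    for key, rid in entries:
        page.setdefault(key, []).append(rid)
    return page

leaf = compress_leaf([("GREEN", "123-1"), ("GREEN", "234-2"), ("GREEN", "345-3")])
# "GREEN" is stored once, with the RID list ["123-1", "234-2", "345-3"]
```

With the key stored once per page, the space saved grows with the number of duplicates, which is why the resulting index tree can be both smaller and shallower.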
Suffix compression is applied in the non-leaf nodes of a nonclustered index tree: for example, if a page split falls between the key values "GREEN" and "HAMILTON", only "G" and "H" are stored on the non-leaf index page.
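A minimal sketch of computing such separator keys (the function name is illustrative; ASE's actual routine is not shown in the text):

```python
# Sketch of suffix compression: keep only the shortest prefixes that
# still distinguish two adjacent keys at a split point.
def separator_keys(left, right):
    """Shortest prefixes of two adjacent keys that still separate them."""
    for i in range(min(len(left), len(right))):
        if left[i] != right[i]:
            return left[:i + 1], right[:i + 1]
    return left, right   # one key is a prefix of the other

print(separator_keys("GREEN", "HAMILTON"))  # ('G', 'H')
```

Because non-leaf entries only need to route searches to the correct child page, the truncated separators work just as well as the full keys while taking far less space.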
Datapage and datarow locking
Data-only locking is supported in two modes: datapage locking and datarow locking. The two are similar in how they work and in the features they provide; they differ only in the granularity at which they block access to data. Choosing datarow locking over datapage locking has two effects, one positive and one negative: the finer-grained locks can reduce contention and conflicts, but when a large amount of data changes, far more locks may have to be acquired and managed.
Specifying the lock scheme to use
Unless the configuration parameter below is set, allpages locking is implicitly applied to all tables.
sp_configure "lock scheme", 0, {allpages | datapages | datarows}
When a database dumped from a server running an earlier version is reloaded, all of its tables are defined as allpages-locked tables. When creating a new table, the following syntax can be used to override the default:
create table ... lock {allpages | datapages | datarows}
To change the lock scheme of an existing table, the following syntax can be used:
alter table ... lock {allpages | datapages | datarows}
Changing the locking scheme of an existing table causes the following three actions:
First, if a table is changed from allpages locking to data-only locking, or from data-only locking to allpages locking, the table must be copied (a select into-style operation), because the storage format differs between the two schemes. If this is a partitioned table, the necessary degree of parallelism and number of worker threads must be configured before this can run.
Second, any clustered index on the table must be re-created. Because the data order is guaranteed after the copy, this re-creation can be done "with sorted_data" when converting from allpages locking to data-only locking. When converting from data-only locking back to allpages locking, however, a parallel index creation is required. (Note: if this is a partitioned table, the degree of parallelism and number of worker threads must be configured to allow this change; otherwise the migration will fail.)
Finally, nonclustered indexes are rebuilt. If the server has been configured for parallel processing, it will be used for this step.
Because of the amount of work these activities can involve, changing a table from allpages locking to data-only locking, or from data-only locking to allpages locking, may be time-consuming. To mitigate this, the following options are available:
If possible, configure parallel processing. This is needed at least for the hashed method of building nonclustered indexes; where possible, however, using partitioned tables and partitioned scans will improve things further.
The select into and clustered-index creation steps are checkpointed. If sufficient hardware resources are available, allowing the checkpoint task more than the system default of 10 outstanding asynchronous I/O requests, via dbcc tuning('maxwritedes', number), can therefore produce beneficial results.
As a way of reducing checkpoint cost, use cache pools configured for large I/Os with a high wash marker, and allow the housekeeper task to be as active as possible. The housekeeper increases the writes of dirty pages that a checkpoint would otherwise have to flush from the cache, and can therefore greatly reduce the time spent in checkpoints.
If configured, disk space can be pre-allocated in parallel. Setting the sp_configure parameter "number of pre-allocated extents" to 16 can therefore also significantly improve performance.
Note: switching between the two data-only locking schemes (datapages and datarows) requires none of this work; such a change takes only a short time to execute.