Because a primary key uniquely identifies a row, it ensures that updates and deletions are applied unambiguously to the intended records. Other fields can also help us avoid conflicts during these operations, but that is not the topic here. In addition, primary keys often form referential integrity constraints with foreign keys to prevent data inconsistency. The primary key therefore plays a very important role in database design.
Common strategies for choosing a database primary key are:
Auto-increment fields
Manually incremented fields
uniqueidentifier
The "COMB (combined)" type
I. Auto-increment fields
Many database designers prefer auto-increment fields because they are simple to use. With an auto-increment field, we can add data without caring about the primary key value; after the record is inserted, the database system automatically assigns a value and guarantees that it never duplicates. If you use a SQL Server database, you can also read the @@IDENTITY global variable after the insert to obtain the primary key value the system assigned.
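As a minimal sketch (the Order table and OrderDate column here are illustrative, and an already-open SqlConnection is assumed):

using System;
using System.Data.SqlClient;

// Minimal sketch: insert a row, then read back the identity value the
// database assigned via @@IDENTITY.
static int InsertOrder(SqlConnection conn)
{
    SqlCommand cmd = new SqlCommand(
        "INSERT INTO [Order] (OrderDate) VALUES (GETDATE()); SELECT @@IDENTITY",
        conn);
    // @@IDENTITY comes back as numeric, hence the conversion
    return Convert.ToInt32(cmd.ExecuteScalar());
}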
While auto-increment fields save us a lot of tedious work, they carry a potential problem: in data-buffering scenarios, it is difficult to fill in the primary key and foreign key values in advance. Suppose there are two tables:
Order (OrderID, OrderDate)
OrderDetail (OrderID, LineNum, ProductID, Price)
OrderID in the Order table is an auto-increment field. Now we need to enter an order, which means inserting one record into the Order table and several records into the OrderDetail table. Because OrderID is auto-incremented, we cannot know its value before the record is formally inserted into the database; only after the update can we see what value the database assigned to it. This causes the following contradiction:
To fill OrderDetail's OrderID field with the correct value, you must first update the Order table to obtain the OrderID the system assigned, then populate the OrderDetail table with that OrderID, and finally update the OrderDetail table. However, to guarantee data consistency, Order and OrderDetail must be updated together under transaction protection, so that both tables succeed or fail at the same time. These two requirements are clearly contradictory. (The statement here is in error; see the supplement below. Lu Zhenyu, 2005-6-15)
"Supplementary 2005-6-15"---------------------------------------------
听棠.NET points out that when the master record is committed within a transaction, the generated value can be obtained through @@IDENTITY and passed to the detail rows as their foreign key; if the transaction hits an error and rolls back, the master record is rolled back and cancelled as well.
Lu Zhenyu adds: using an auto-increment field increases network round trips. Although @@IDENTITY lets you obtain the primary key value, it adds a data round trip to the update process (taking a C/S architecture as an example; a code sketch of this flow follows the comparison below):
1. The client sends a begin-transaction command.
2. The client submits the master table update.
3. The server returns @@IDENTITY.
4. The client updates the detail table buffer with the returned primary key.
5. The client submits the detail table update to the server.
6. The client commits the transaction.
The extra round trip here lengthens the transaction time and reduces concurrency performance.
If you do not use an auto-increment field, the flow becomes:
1. The client sends a begin-transaction command.
2. The client submits the master table update.
3. The client submits the detail table update.
4. The client commits the transaction.
So I am not in favor of using auto-increment fields as the link between primary and foreign keys.
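For concreteness, here is a rough ADO.NET sketch of the six-step flow above (error handling simplified; table and column names follow the Order/OrderDetail example):

using System;
using System.Data.SqlClient;

// Sketch of the auto-increment flow: the extra round trip is the
// SELECT @@IDENTITY between the master and detail updates.
static void InsertOrderWithDetail(SqlConnection conn)
{
    SqlTransaction tx = conn.BeginTransaction();            // step 1: begin transaction
    try
    {
        SqlCommand insertOrder = new SqlCommand(
            "INSERT INTO [Order] (OrderDate) VALUES (GETDATE()); SELECT @@IDENTITY",
            conn, tx);
        // steps 2-3: submit the master row, server returns @@IDENTITY
        int orderId = Convert.ToInt32(insertOrder.ExecuteScalar());

        // steps 4-5: fill the detail rows with the returned key and submit them
        SqlCommand insertDetail = new SqlCommand(
            "INSERT INTO OrderDetail (OrderID, LineNum, ProductID, Price) " +
            "VALUES (@id, 1, @product, @price)", conn, tx);
        insertDetail.Parameters.AddWithValue("@id", orderId);
        insertDetail.Parameters.AddWithValue("@product", 1);
        insertDetail.Parameters.AddWithValue("@price", 9.99m);
        insertDetail.ExecuteNonQuery();

        tx.Commit();                                        // step 6: commit
    }
    catch
    {
        tx.Rollback(); // rolling back also cancels the master record
        throw;
    }
}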
------------------------------------------------
In addition, when we need to replicate data between multiple databases (SQL Server's publish/subscribe replication mechanism allows data to be replicated between databases), auto-increment fields can cause primary key conflicts when the data is merged. Imagine an Order table being replicated from one database into another: should its OrderID column keep growing automatically in both places?
ADO.NET allows us to mark a column in a DataSet as auto-increment, but remember that such a value is only a placeholder: when the database is updated, the value the database generates replaces the one ADO.NET assigned. To prevent users from mistaking placeholders for real keys, we recommend setting both the auto-increment seed and step in ADO.NET to -1, so that placeholder values are negative. Furthermore, in ADO.NET we can create a DataRelation between the two tables; when the tables are in a cascading relationship, updating one table automatically propagates the new key value to the corresponding rows of the other, which greatly reduces the problems auto-increment fields cause when updating two related tables.
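A minimal sketch of this placeholder arrangement (names follow the Order/OrderDetail example and are illustrative):

using System;
using System.Data;

// Sketch: a DataSet with placeholder auto-increment keys and a cascading
// relation. A seed/step of -1 keeps placeholder keys negative so they can
// never be mistaken for real database-assigned values.
static DataSet BuildOrderDataSet()
{
    DataSet ds = new DataSet();

    DataTable order = ds.Tables.Add("Order");
    DataColumn orderId = order.Columns.Add("OrderID", typeof(int));
    orderId.AutoIncrement = true;
    orderId.AutoIncrementSeed = -1;
    orderId.AutoIncrementStep = -1;
    order.Columns.Add("OrderDate", typeof(DateTime));

    DataTable detail = ds.Tables.Add("OrderDetail");
    detail.Columns.Add("OrderID", typeof(int));
    detail.Columns.Add("LineNum", typeof(int));

    // The relation cascades key changes: when the database replaces a
    // placeholder OrderID after the update, the child rows follow.
    ds.Relations.Add("Order_Detail",
        order.Columns["OrderID"], detail.Columns["OrderID"]);

    return ds;
}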
II. Manually incremented fields
Since auto-increment fields can be such a hassle, we might consider manually incremented fields instead, meaning that we maintain the primary key values ourselves, usually in a separate table that stores the current key values. Continuing the example above, we create a new table named IntKey containing two fields, KeyName and KeyValue. Just like a Hashtable: given a KeyName, you can find the current KeyValue, then increment it by hand. In SQL Server, a stored procedure can automate the process of fetching a key value. The code is as follows:
CREATE PROCEDURE [GetKey]
    @KeyName char(10),
    @KeyValue int OUTPUT
AS
    UPDATE IntKey SET @KeyValue = KeyValue = KeyValue + 1 WHERE KeyName = @KeyName
GO
In this way, calling the stored procedure always yields the latest key value with no risk of duplication. If the OrderID field is manually incremented, our program can first call the stored procedure to obtain an OrderID, use it to populate both the Order table and the OrderDetail table, and then update the two tables together under transaction protection.
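In C#, the call might look like the following sketch (assuming an open SqlConnection):

using System;
using System.Data;
using System.Data.SqlClient;

// Sketch: fetch the next key value from the GetKey stored procedure
// defined above, via its OUTPUT parameter.
static int GetKey(SqlConnection conn, string keyName)
{
    SqlCommand cmd = new SqlCommand("GetKey", conn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@KeyName", SqlDbType.Char, 10).Value = keyName;
    SqlParameter keyValue = cmd.Parameters.Add("@KeyValue", SqlDbType.Int);
    keyValue.Direction = ParameterDirection.Output;
    cmd.ExecuteNonQuery();
    return (int)keyValue.Value;
}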
When manually incremented fields are used as primary keys, data can be replicated between databases without key conflicts during merging, as long as different databases are assigned different key value ranges. However, manual key fields also increase network round trips: an extra database access is needed to obtain the current key value, adding load on the network and the database, which is a big disadvantage in slow or disconnected network environments. At the same time, maintaining keys by hand must also account for concurrency conflicts and other factors, which increases the system's complexity.
III. Using uniqueidentifier
SQL Server provides the uniqueidentifier data type and a generator function, NEWID(), which produces a unique uniqueidentifier value. A uniqueidentifier occupies 16 bytes in the database, and its probability of duplication is so small that it can be treated as zero. We often see things like the following in the registry:
{45F0EB02-0727-4F2E-AAB5-E8AEDEE0CEC5}
That is in fact a uniqueidentifier; Windows uses them to identify COM components and interfaces and so prevent duplication. In .NET, the uniqueidentifier is known as a GUID (Globally Unique Identifier). In C#, you can generate one with the following command:
Guid u = System.Guid.NewGuid();
For the Order and OrderDetail scenario above, choosing uniqueidentifier as the primary key avoids the extra network round trips mentioned earlier: the program generates the GUIDs directly and populates the primary and foreign keys with them, with no need to worry about duplicates.
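A minimal sketch, assuming a DataSet whose Order and OrderDetail tables use Guid-typed OrderID columns (illustrative names):

using System;
using System.Data;

// Sketch: the key is known before any database round trip, so master
// and detail rows can be filled together and submitted in one go.
static void FillOrder(DataSet ds)
{
    Guid orderId = Guid.NewGuid();

    DataRow order = ds.Tables["Order"].NewRow();
    order["OrderID"] = orderId;
    order["OrderDate"] = DateTime.Now;
    ds.Tables["Order"].Rows.Add(order);

    DataRow detail = ds.Tables["OrderDetail"].NewRow();
    detail["OrderID"] = orderId; // foreign key filled in advance
    detail["LineNum"] = 1;
    ds.Tables["OrderDetail"].Rows.Add(detail);
}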
The uniqueidentifier field also has serious flaws. First, it is 16 bytes long, four times the length of an integer, and consumes a lot of storage space. More seriously, uniqueidentifier values are generated with no regularity, so building an index on them is time-consuming (most databases index the primary key). Experiments have shown that inserting the same volume of data with uniqueidentifier keys is significantly slower than with integer keys, so for efficiency reasons it is best to avoid using uniqueidentifier as the primary key value.
IV. Using the "COMB (combined)" type
Since each of the three strategies above has its shortcomings, is there a good solution after all? Yes. Using the COMB type (there is no COMB type in the database; it was devised by Jimmy Nilsson in his article "The Cost of GUIDs as Primary Keys"), a good balance can be struck among the three.
The basic idea of the COMB data type is this: since uniqueidentifier data is inefficient because it lacks regularity, which hurts system performance, can we keep the first 10 bytes of the uniqueidentifier and use the remaining 6 bytes to encode the time (DateTime) at which the GUID was generated? Combining time information with the uniqueidentifier improves indexing efficiency while preserving its uniqueness. Some may worry that shrinking the uniqueidentifier to 10 bytes will cause duplicates. In practice this is not a concern: the 6 bytes of time are precise to 1/300 of a second, so two COMB values can be identical only if two GUIDs generated within the same 1/300-second window happen to share their first 10 bytes, which is practically impossible. In SQL Server, this idea can be implemented with the following SQL:
DECLARE @aGuid uniqueidentifier
SET @aGuid = CAST(CAST(NEWID() AS BINARY(10))
    + CAST(GETDATE() AS BINARY(6)) AS uniqueidentifier)
Testing shows that COMB keys remain slower than int keys for SELECT, INSERT, UPDATE, DELETE and other operations, but are clearly faster than plain uniqueidentifier keys. For the test data, see my essay of July 21, 2004.
Besides implementing COMB values with stored procedures, we can also generate them in C#, so that all primary key generation happens on the client. The C# code is as follows:
//================================================================
/// <summary>
/// Returns a GUID for database operations; the embedded time code
/// improves retrieval efficiency.
/// </summary>
/// <returns>A COMB (GUID/time mixed) value.</returns>
public static Guid NewComb()
{
    byte[] guidArray = System.Guid.NewGuid().ToByteArray();

    DateTime baseDate = new DateTime(1900, 1, 1);
    DateTime now = DateTime.Now;

    // Get the days and milliseconds which will be used to build the byte string
    TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
    TimeSpan msecs = new TimeSpan(now.Ticks - new DateTime(now.Year, now.Month, now.Day).Ticks);

    // Convert to byte arrays.
    // Note that SQL Server datetime is accurate to 1/300th of a second, so we divide by 3.333333
    byte[] daysArray = BitConverter.GetBytes(days.Days);
    byte[] msecsArray = BitConverter.GetBytes((long)(msecs.TotalMilliseconds / 3.333333));

    // Reverse the bytes to match SQL Server's ordering
    Array.Reverse(daysArray);
    Array.Reverse(msecsArray);

    // Copy the bytes into the GUID: the last 2 day bytes and last 4
    // time bytes overwrite the final 6 bytes of the GUID
    Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
    Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

    return new System.Guid(guidArray);
}

//================================================================
/// <summary>
/// Recovers the time information from a COMB GUID returned by SQL Server.
/// </summary>
/// <param name="guid">A COMB value containing time information.</param>
/// <returns>The embedded generation time.</returns>
public static DateTime GetDateFromComb(System.Guid guid)
{
    DateTime baseDate = new DateTime(1900, 1, 1);
    byte[] daysArray = new byte[4];
    byte[] msecsArray = new byte[4];
    byte[] guidArray = guid.ToByteArray();

    // Copy the date parts of the GUID to the respective byte arrays
    Array.Copy(guidArray, guidArray.Length - 6, daysArray, 2, 2);
    Array.Copy(guidArray, guidArray.Length - 4, msecsArray, 0, 4);

    // Reverse the arrays to put them into the appropriate order
    Array.Reverse(daysArray);
    Array.Reverse(msecsArray);

    // Convert the bytes to ints
    int days = BitConverter.ToInt32(daysArray, 0);
    int msecs = BitConverter.ToInt32(msecsArray, 0);

    DateTime date = baseDate.AddDays(days);
    date = date.AddMilliseconds(msecs * 3.333333);
    return date;
}
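A quick usage sketch of the two methods above (note that the recovered time is only as precise as SQL Server's 1/300-second resolution):

// Generate a COMB key, then recover its embedded timestamp.
Guid comb = NewComb();
Console.WriteLine("COMB value:   {0}", comb);
Console.WriteLine("Generated at: {0}", GetDateFromComb(comb));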
Conclusion
Database primary keys occupy an important place in the database; the key selection strategy determines whether a system is efficient and easy to use. This article has compared the advantages and disadvantages of four primary key selection strategies and provided corresponding code, which I hope will be helpful to everyone.