I. Database Replication
SQL Server 2008 database replication synchronizes data between multiple servers through a publish/subscribe mechanism. We use this mechanism for synchronous database backup, where "synchronous" means the backup server stays synchronized with the master server in real time. Normally only the master database server is used; the backup server takes over only when the master server fails. As a database backup solution, it is better than file-level backup.
SQL Server replication is divided into the following types:
1. Snapshot publication:
The publisher sends a snapshot of the published data to the subscriber at scheduled intervals. At each interval, all data in the corresponding tables of the subscription database is deleted and then re-inserted from the publication.
Snapshot replication is most appropriate when:
1) The data changes infrequently.
2) It is acceptable for the copies to be out of date with respect to the publisher for a period of time.
3) Only a small volume of data is replicated.
4) A large volume of changes occurs over a short period of time (so publishing a fresh snapshot is cheaper than tracking every individual change).
Snapshot replication is the most suitable option when the data volume is large but changes are infrequent. For example, if a sales organization maintains a product price list and the prices are fully updated once or twice a year at fixed times, replicating a complete snapshot of the data after each change is the recommended approach. For some kinds of data, more frequent snapshots may also be appropriate: if a relatively small table is updated on the publisher during the day and some lag is acceptable, the changes can be delivered overnight as a snapshot.
The ongoing overhead of snapshot replication on the publisher is lower than that of transactional replication because incremental changes do not need to be tracked. However, generating and applying a snapshot of a large dataset requires significant resources. When evaluating snapshot replication, consider the size of the entire dataset and how often the data changes.
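As a rough illustration, the following T-SQL sketches how a snapshot publication might be created with the standard replication stored procedures. The database, publication, and table names are placeholders, and the real setup (Distributor configuration, Snapshot Agent job, security) involves more steps than shown here.

    -- Enable the database for publishing (run on the publisher; assumes a Distributor is already configured)
    EXEC sp_replicationdboption
        @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

    -- Create a snapshot publication: a full snapshot is regenerated and applied on a schedule
    EXEC sp_addpublication
        @publication = N'PriceListSnapshot',
        @repl_freq   = N'snapshot',
        @status      = N'active';

    -- Add a table (article) to the publication
    EXEC sp_addarticle
        @publication   = N'PriceListSnapshot',
        @article       = N'ProductPrice',
        @source_object = N'ProductPrice';

    -- Create a push subscription that delivers the snapshot to the backup server
    EXEC sp_addsubscription
        @publication       = N'PriceListSnapshot',
        @subscriber        = N'BACKUPSRV',
        @destination_db    = N'SalesDB',
        @subscription_type = N'push';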
2. Transactional publication:
After the subscriber receives an initial snapshot of the published data, the publisher streams subsequent transactions to the subscriber.
Transactional replication usually starts with a snapshot of the published database objects and data. Once the initial snapshot has been applied, data changes and schema changes made on the publisher are delivered to the subscriber as they occur (almost in real time). The changes are applied at the subscriber in the same order, and within the same transaction boundaries, as they occurred at the publisher, so transactional consistency is guaranteed within the publication.
Transactional replication is applicable in the following situations:
1) Incremental changes should be propagated to subscribers as they occur.
2) The application requires a short lag between a change happening on the publisher and that change arriving at the subscriber.
3) The application needs access to intermediate data states. For example, if a row changes five times, transactional replication lets the application respond to each change (for example, by firing a trigger), not just to the row's final value.
4) The publisher has a high volume of insert, update, and delete activity.
5) The publisher or subscriber is not a SQL Server database (for example, Oracle).
By default, subscribers to a transactional publication should be treated as read-only, because changes are not propagated back to the publisher. However, transactional replication does provide options that allow updates at the subscriber.
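As a hedged sketch, a transactional publication is created in much the same way as a snapshot publication; the main difference is that changes are streamed continuously after the initial snapshot. Names below are placeholders, and the Log Reader and Distribution Agent setup is omitted.

    -- Transactional publication: after the initial snapshot, the Log Reader Agent
    -- harvests committed transactions from the log and the Distribution Agent
    -- forwards them to subscribers in commit order.
    EXEC sp_addpublication
        @publication = N'OrdersTran',
        @repl_freq   = N'continuous',   -- deliver changes as they occur
        @status      = N'active';

    EXEC sp_addarticle
        @publication   = N'OrdersTran',
        @article       = N'Orders',
        @source_object = N'Orders';

    EXEC sp_addsubscription
        @publication       = N'OrdersTran',
        @subscriber        = N'BACKUPSRV',
        @destination_db    = N'SalesDB',
        @subscription_type = N'push';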
3. Transactional publication with updatable subscriptions:
After the SQL Server subscriber receives the initial snapshot of the published data, the publisher streams transactions to the subscriber, and transactions made at the subscriber are applied back to the publisher.
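A minimal sketch of the difference on the subscription side, assuming the publication was created with updatable-subscription support enabled; the @update_mode value shown is the documented "immediate updating" option, and all names are placeholders.

    -- Subscription that is allowed to update data and push the changes back
    -- to the publisher ('sync tran' = immediate updating via two-phase commit).
    EXEC sp_addsubscription
        @publication       = N'OrdersTran',
        @subscriber        = N'BACKUPSRV',
        @destination_db    = N'SalesDB',
        @subscription_type = N'push',
        @update_mode       = N'sync tran';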
4. Merge publication:
After the subscriber receives the initial snapshot of the published data, both the publisher and the subscriber can update the published data independently, and the changes are merged periodically. Microsoft SQL Server Compact Edition can only subscribe to merge publications.
Like transactional replication, merge replication usually starts with a snapshot of the published database objects and data, and it uses triggers to track subsequent data and schema changes made on the publisher and on the subscribers. When a subscriber connects to the network, it synchronizes with the publisher and exchanges all rows that have changed on either side since the last synchronization.
Merge replication is typically used in server-to-client environments. It applies in the following situations:
1) Multiple subscribers may update the same data at different times and propagate their changes to the publisher and to other subscribers.
2) Subscribers need to receive data, change it while offline, and later synchronize the changes with the publisher and other subscribers.
3) Each subscriber needs a different partition of the data.
4) Conflicts may occur and, when they do, must be detected and resolved.
5) The application needs only the final result of a change rather than access to intermediate data states. For example, if a row changes five times on a subscriber before that subscriber synchronizes with the publisher, the row changes only once on the publisher, to the value of the fifth change.
Merge replication allows different sites to work autonomously and to merge their updates into a single, uniform result later. Because updates are made at multiple nodes, the same data may be updated by the publisher and by more than one subscriber, so conflicts can arise when the updates are merged. Merge replication provides several ways of handling such conflicts.
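For comparison, a merge publication uses its own set of stored procedures. The sketch below is illustrative only (placeholder names) and omits the Snapshot Agent and Merge Agent scheduling.

    -- Merge publication: publisher and subscribers can all update the data;
    -- changes are tracked with triggers and merged when the subscriber synchronizes.
    EXEC sp_addmergepublication
        @publication = N'FieldSalesMerge',
        @description = N'Merge publication for offline sales clients';

    EXEC sp_addmergearticle
        @publication   = N'FieldSalesMerge',
        @article       = N'Customer',
        @source_object = N'Customer';

    EXEC sp_addmergesubscription
        @publication       = N'FieldSalesMerge',
        @subscriber        = N'LAPTOP01',
        @subscriber_db     = N'SalesDB',
        @subscription_type = N'push';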
Disadvantages of replication: every published table must have a primary key, and the table structure should not change afterwards, so replication only suits stable schemas; even then, if there are many tables, configuration becomes troublesome.
References on the replication setup procedure:
http://www.cnblogs.com/dudu/archive/2010/08/26/1808540.html
http://www.cnblogs.com/killkill/archive/2009/07/17/1525733.html
http://dufei.blog.51cto.com/382644/84645
http://www.cnblogs.com/wangdong/archive/2008/10/24/1318740.html
II. Database Mirroring
Its advantage is that the system can automatically detect a failure of the principal server and automatically fail over to the standby.
Its disadvantage is that configuration is complex and the data in the mirror database is not directly visible: in SQL Server Management Studio the mirror database stays in the mirroring/restoring state, so no operations can be performed against it, not even the simplest query. To check whether the data in the mirror is correct, you can fail the mirror over so that it becomes the principal, or (on editions that support database snapshots) query a database snapshot created on the mirror.
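As a hedged example (on an edition that supports database snapshots, with placeholder database and file names), a read-only snapshot of the mirror can be created and queried to spot-check the data without breaking the mirroring session:

    -- Run on the mirror server: create a static, read-only snapshot of the mirrored database
    CREATE DATABASE SalesDB_Check
    ON ( NAME = SalesDB_Data,                       -- logical data file name of SalesDB
         FILENAME = 'D:\Snapshots\SalesDB_Check.ss' )
    AS SNAPSHOT OF SalesDB;

    -- Query the snapshot (the mirror database itself still cannot be queried)
    SELECT COUNT(*) FROM SalesDB_Check.dbo.Orders;

    -- Drop the snapshot when finished
    DROP DATABASE SalesDB_Check;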
Compared with log shipping, database mirroring is clearly a step up. In its simplest form it works on much the same principle as log shipping, but the production server sends transactions to the mirror far more frequently, which means the mirror is updated much faster.
With database mirroring alone, failover still has to be performed manually. However, you can add a third SQL Server instance, called the witness. The witness can be an ordinary SQL Server instance, but it keeps an eye on the two mirroring partners; when the principal fails, the witness allows the mirror to take over, which effectively gives you automatic failover.
During failover, any client transactions that are in progress have to be restarted, and because of the delay involved in the process, the mirror cannot guarantee zero data loss.
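The following T-SQL sketches the core mirroring commands, assuming the database has already been restored on the mirror WITH NORECOVERY; the server names, port, and endpoint name are placeholders.

    -- On each partner (and on the witness), create a mirroring endpoint
    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = ALL);

    -- On the mirror server: point at the principal
    ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.example.com:5022';

    -- On the principal server: point at the mirror, then add the witness for automatic failover
    ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.example.com:5022';
    ALTER DATABASE SalesDB SET WITNESS = 'TCP://witness.example.com:5022';

    -- Manual failover (run on the principal) when no witness is configured
    ALTER DATABASE SalesDB SET PARTNER FAILOVER;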
III. Database Log Shipping
Log shipping, the most basic form of high availability, is essentially an extension of SQL Server's backup and restore functionality.
It allows solution providers to create multiple copies of a database. Log shipping sends transaction log backups of the primary database to one or more secondary SQL Server instances, where the logs are restored (replayed) to keep each database copy up to date.
Some solution providers use log shipping as a way around the disadvantages of database mirroring. Mirroring is a good technique, but it allows only one copy of the database, and it runs in near real time, so modifications are written to the mirror very quickly. That becomes a problem if the customer database is corrupted or records are accidentally deleted, because the damage reaches the mirror just as quickly.
Log shipping has two main advantages. First, the solution provider can introduce a delay so that logs are not replayed immediately. This is important because, if something goes wrong on the primary (or mirrored) database, the log restores can be stopped before the problem spreads to the copy.
The second major advantage of log shipping is that it supports multiple copies of the database. Some organizations use log shipping to maintain a database copy in a backup data center, which protects against losing data if the primary data center fails.
Although log shipping complements database mirroring well, it is an independent technology that can be used on its own, without mirroring.
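Under the hood, log shipping is a cycle of log backup, copy, and restore. A minimal manual sketch is shown below (placeholder paths and names); in practice the built-in log shipping jobs automate these steps and add the configurable restore delay mentioned above.

    -- On the primary: back up the transaction log
    BACKUP LOG SalesDB
        TO DISK = '\\backupshare\SalesDB\SalesDB_1830.trn';

    -- (copy the .trn file to the secondary server, typically on a schedule)

    -- On the secondary: restore the log, keeping the copy readable between restores.
    -- WITH STANDBY leaves the database read-only; WITH NORECOVERY would keep it
    -- unreadable but ready for the next restore. Delaying this step provides the
    -- protection window described above.
    RESTORE LOG SalesDB
        FROM DISK = 'D:\LogShipping\SalesDB_1830.trn'
        WITH STANDBY = 'D:\LogShipping\SalesDB_undo.tuf';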
http://www.searchdatabase.com.cn/showcontent_11708.htm
IV. Failover Clustering
Clustering is the most advanced form of availability that Microsoft offers, and it requires you to set up a Windows cluster.
A cluster involves no data transfer or mirroring. Instead, two or more computers are connected to shared external storage, usually a storage area network (SAN). The database files are kept on the shared storage, and identically configured SQL Server instances run on the cluster nodes.
Of all the nodes in the cluster, only one is active for a given instance at any time. If that node fails, another node starts the corresponding SQL Server instance and attaches the data files on the shared storage. The entire failover usually takes only a few seconds, and for any given SQL Server instance, Windows clustering ensures that clients always see the active node.
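Because failover is handled by Windows clustering rather than by SQL Server itself, there is little to configure in T-SQL. As a small illustrative check, these server properties report whether an instance is clustered and which physical node it is currently running on:

    -- IsClustered returns 1 if the instance is installed on a failover cluster;
    -- ComputerNamePhysicalNetBIOS shows the node currently hosting the instance.
    SELECT SERVERPROPERTY('IsClustered')                  AS IsClustered,
           SERVERPROPERTY('ComputerNamePhysicalNetBIOS')  AS CurrentNode;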
Clustering is complex, but it is the most effective way to achieve high availability. Unlike the two approaches above, a cluster depends on a single set of database files: if those files are damaged, failover does not help, because the instance that takes over uses the same damaged files. With mirroring and log shipping the files themselves are copied, so damage to one copy is less of a worry. In practice, file corruption is rare in SQL Server, so I think clustering is still a good choice.
Disadvantages: one important issue is that failover clustering is expensive to implement, because Microsoft supports it only on Windows Server certified hardware. Another issue is that shared storage is required.