Introduction to DB2 log shipping basics

This article describes the concepts behind log shipping and how to configure it with IBM® DB2® Universal Database™. As database systems become ever more critical to the success of the enterprise, the demand for around-the-clock (24x7) availability has become greater than ever. A common way to provide 99.99% ("four nines") availability is to implement a "hot" standby database server. Using a standby server is not a new concept; database administrators (DBAs) have been using this approach for many years. Typically, a standby server requires the DBA or an operator to manually create a backup of the database and its logs on the primary system, and then periodically restore that backup to the standby server. If the primary server fails, the downtime is limited to the amount of time needed to process the log files generated since the last backup that was restored on the standby server.

Failover to the standby server is generally not automatic. The staff involved must determine whether the time required to bring the standby server online is less than the time required to fix the original fault on the primary system.

What is log shipping?

Log shipping is a method of automatically backing up transaction logs from the primary DB2 server and automatically making those backups available to the standby server. Once a log file is available on the standby server, it can be used to keep the standby synchronized with the primary.

Why bother with log shipping?

What are the benefits of log shipping? Why should you take the time to set it up? Log shipping provides the following advantages:

A redundant failover system can be implemented without costly software or hardware. The primary and standby servers do not have to be identical in terms of hardware or software. The standby server can be used for other purposes rather than sitting idle most of the time. For example, while the standby database is inaccessible because it is processing log files, another, independent database can be running on the standby server.

Once configured, it is relatively inexpensive to run and easy to maintain.

It is a very reliable way to maintain a redundant copy of the database.

Because the standby database remains in rollforward mode, it provides a hot standby: the database is already started and its data cache is already primed.

It can be configured so that very little data, if any, is lost when a failure occurs.

The cost to implement and maintain the configuration is relatively low.

It supports both local and disaster-recovery (remote) configurations.

Are there limitations?

Log shipping has some limitations. It does not provide the full functionality of solutions such as HACMP or Veritas Cluster, but it also does not require additional hardware or software. It comes down to a trade-off between cost, availability, and complexity. Log shipping is a practical solution for most customers who require a redundant system but can accept some data loss as part of their failover plan.

Log shipping can be fully automated only with additional software. In the event of a failure, the DBA or an operator must still manually transfer the role of the primary server to the standby server; however, failover scripts can be prepared in advance to minimize human intervention. The length of the user outage equals the time needed to replay any remaining log files and roll back any incomplete transactions, plus the time needed for the users' applications to reconnect. How long it takes to bring the standby database online depends on how frequently the standby server processes log files and on the size of those log files.

Once the database has been switched to the standby server, the client applications must be changed so that they point to the new server. Alternatively, the host name and IP address of the primary server can be transferred to the standby.
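
One way to repoint clients, for example, is to re-catalog the node entry on each client so that the existing database alias now resolves to the standby host (a minimal sketch; the node name, host name, and port are hypothetical):

    db2 uncatalog node DBSRV
    db2 catalog tcpip node DBSRV remote standby.example.com server 50000
    db2 terminate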

Operational considerations

When to reinitialize the standby database

When an index is re-created on DB2, a log record is written to the log indicating that the operation took place. When the standby database processes this log record, it does not automatically re-create the index on the standby server. By setting the database manager INDEXREC configuration parameter, you can configure DB2 either to rebuild the index the first time the database is accessed after it comes out of rollforward-pending state (for example, at takeover), or to re-create the index the first time the index itself is accessed. With either approach, end users will notice degraded performance after a failover. One way to prevent this is to periodically repopulate the standby database from a backup image of the primary database, or to refresh it using suspended I/O and split-mirror image technology.
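
A minimal sketch of inspecting and changing this behavior from the DB2 command line (the database name MYDB is hypothetical):

    db2 get dbm cfg | grep -i indexrec
    db2 update dbm cfg using INDEXREC RESTART
    # or control it per database:
    db2 update db cfg for MYDB using INDEXREC ACCESS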

Running the DB2 LOAD utility on the primary database also affects the standby database server. When you invoke the LOAD command, you can either have the load utility produce a copy (backup image) of the loaded table space data, or defer creating that copy until some later time. If you choose to have the load utility create the copy, the standby server must have access to the target device used by the load utility. If you choose to take the copy later, or if the copy image is not available when the load's log records are replayed, the standby server places the loaded table space in restore-pending state. In either case, you should refresh the standby database after the load operation completes, to ensure that all data on the standby server is accessible should a failover be required.
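
For example, a load on the primary could be issued with COPY YES so that the load copy is written to a location the standby server can also reach (a sketch; the file, table, and path names are hypothetical):

    db2 "load from /data/new_orders.del of del insert into orders copy yes to /shared/loadcopy"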

Prerequisites

The following prerequisites must be met before you set up and configure log shipping on DB2:

Both the primary and standby systems must run the same version of DB2. A new DB2 FixPak can be installed on the standby server before it is installed on the primary system; however, the standby must be at the same level or higher. You cannot use this approach to fall back to an earlier FixPak. The two systems must run not only the same level of DB2, but also the same level of the operating system.

The standby system must have at least as much disk space as the primary system. You must also allow for the possibility that, after a failover to the standby server, the primary server could be unavailable for several days.

Any automated database-maintenance processes that run on the primary server must also be configured on the standby server. DB2 allows only one user exit program to be configured per instance, so if an active database already exists on the standby server, it should run in a DB2 instance separate from the one used for the standby database.

The log-archive target on the standby server must be accessible. After a failover, the log files must be kept so that the original primary database can later be brought back online. To initialize the standby, you must restore a full database backup of the primary on the standby system. All log files generated on the primary system after that backup was taken are also required.

What options are available?

There are several ways to implement log shipping with DB2. This article discusses some of the more popular methods.

In all cases, the standby server needs a scheduled job that regularly issues the db2 rollforward db to end of logs command. How frequently this command runs determines how quickly the standby server can be made available in the event of a failover.
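
A minimal sketch of the command such a scheduled job would issue, along with a status check (the database name MYDB is hypothetical):

    db2 rollforward database MYDB to end of logs
    db2 rollforward database MYDB query status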

This frequency can also be used to protect the database against application errors. For example, if the standby server is kept several hours behind the primary server and an application corrupts data in the primary database, the database can be failed over to the standby and the destructive changes effectively "backed out" with little impact on users.
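
In that situation, instead of rolling forward to the end of the logs, the standby could be rolled forward only to a point in time just before the corruption occurred (a sketch; the database name and timestamp are hypothetical):

    db2 rollforward database MYDB to 2004-06-15-14.30.00 using local time and stop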

All log shipping configurations are implemented using a user exit program; this is the only mechanism DB2 provides for managing archived log files. When a log file becomes full, the DB2 logger archives it by invoking the db2uext2 executable, which is responsible for handling the log file.

Are there different types of log shipping?

There are two ways to ship logs. In the pull method, the standby server pulls log files as needed from a central shared location (such as the log-archive target). In the push method, the primary server ensures that the log files arrive on the standby server as it archives them.

DB2 archives log files to the target directory specified in the user exit program, db2uext2. Sample user exit programs are located in the DB2 instance directory under sqllib/samples/c, and include examples for disk, tape, and Tivoli Storage Manager.

Pull method

The pull method involves configuring the user exit program on the primary system to archive log files to a target device that both the primary and standby servers can access. The standby server is not notified when log files are archived, and must check the archive target path itself. The sample user exit programs db2uext2.cdisk or db2uext2.cadsm (to be renamed db2uext2.ctsm in future DB2 versions) can be used for this. The user exit executable must be located in the default DB2 instance path on both the primary and standby systems.

When the rollforward database command is issued on the standby server, the DB2 logger automatically attempts to retrieve the next log file in sequence from the archive target path. The rollforward operation keeps retrieving log files until there are no more to process.

Push method

With the push method, the user exit program is modified to copy (or FTP) each log file either to the active log path on the standby server, or to an overflow log path that the standby server can access. You can do this by modifying the db2uext2.cdisk sample program and specifying the standby server's log path as the target.
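
Conceptually, the extra step performed by the modified user exit amounts to copying each archived log file to the standby, for example (host and path names are hypothetical, and a real user exit must also handle errors and retries):

    scp /db2/archive/S0000123.LOG standby-host:/db2/overflowlogs/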

When the rollforward database command is issued on the standby server, the DB2 logger automatically picks up the next log file in sequence from the standby's log path (or from the overflow log path, if one is specified). The rollforward operation keeps processing log files until there are no more to process.
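
If the pushed log files are placed in an overflow log path rather than in the active log path, that path is named on the rollforward command (a sketch; the database name and path are hypothetical):

    db2 "rollforward database MYDB to end of logs overflow log path (/db2/overflowlogs)"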

How do I set it up?

Whether you use the pull method or the push method, most of the setup process is the same, and is similar to the steps described below:

Configure the database to enable the user exit program and log retention. Once this is done, the database is placed in backup-pending state and a full database backup must be taken. This backup image is the initial starting point for recovery on the standby and should be retained until the next full database backup is taken.
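
A minimal sketch of this step on the primary database (the database name MYDB and the backup path are hypothetical):

    db2 update db cfg for MYDB using USEREXIT ON LOGRETAIN ON
    # the database is now in backup pending state; take a full backup
    db2 backup database MYDB to /db2/backups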

Place the user exit executable in a location within the default search path of the DB2 instance. The sample source code modules for the DB2 user exit programs are located in the sqllib/samples/c directory of the DB2 instance. They are:

db2uext2.cadsm - support for Tivoli Storage Manager, also known as ADSM

db2uext2.cdisk - support for disk

db2uext2.ctape - support for locally attached tape, available only on UNIX systems

db2uext2.cxbsa - support for XBSA Draft 0.8 clients

Each of these sample programs requires only minor modifications (for example, the buffer size, audit log activation and path, and error log activation and path). Each sample program also contains the correct compile command to issue once the modifications are complete.
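
As an illustration, preparing the disk-based sample typically amounts to copying it, editing the user-modifiable constants near the top, and compiling it into the instance's search path (a sketch only; the exact compile command and target directory are documented in the sample's header, and the paths shown are hypothetical):

    cp ~/sqllib/samples/c/db2uext2.cdisk db2uext2.c
    # edit the user-modifiable section of db2uext2.c, then compile, e.g. on AIX:
    xlc -o db2uext2 db2uext2.c
    cp db2uext2 ~/sqllib/adm/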

Some third-party vendors (such as Veritas, Legato, and SAP) provide their own DB2 user exit binaries, any of which can be used for log shipping.

Initialize the database on the standby server. You can restore a full DB2 backup image of the primary database (online or offline), or use a split-mirror image copy. Details on using a split-mirror copy are described later. In either case, the hardware of the standby database does not have to match that of the primary database: the number and size of processors and disks can be completely different. The only restriction is that each table space in the standby database must be at least as large as the corresponding table space in the primary database; this prevents the standby from running out of space while the primary continues to grow. If the physical disk layout is different, you will need to perform a redirected restore to initialize the standby database.
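
A minimal sketch of initializing the standby from a backup image, including the redirected-restore variant used when the disk layout differs (the database name, timestamp, table space ID, and container path are hypothetical):

    db2 restore database MYDB from /db2/backups taken at 20040615143000
    # or, when containers must be placed differently:
    db2 "restore database MYDB from /db2/backups taken at 20040615143000 redirect"
    db2 "set tablespace containers for 2 using (path '/db2/standby/ts2')"
    db2 "restore database MYDB continue"
    # in both cases, leave the database in rollforward pending state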

Configure a scheduled job on the standby system to regularly issue the db2 rollforward to end of logs command. This processes the log records received from the primary server and keeps the standby as up to date as possible.
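
For example, a cron entry on the standby could apply any newly arrived log files every 15 minutes (a sketch; the script path and database name are hypothetical):

    # crontab entry for the standby instance owner
    */15 * * * * /home/db2inst1/bin/apply_logs.sh
    # where apply_logs.sh essentially runs:
    db2 rollforward database MYDB to end of logs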

The standby server is now ready.

What if "four nines" is not good enough?

There are ways to ensure zero data loss in a log shipping configuration, but they require additional configuration and/or hardware. Let's look at some of the more popular methods for implementing a standby server without data loss.

Log shipping with mirrored logs

One way to ensure that no data is lost is to mirror the volume that contains the log files, using the operating system's disk or volume mirroring facilities. With this method, every log record written for the primary database is also written to the standby's copy of the volume. Because each log record is written to both systems, no data is lost. The disadvantage of this method is the performance cost of performing two disk writes for every log write, one of which may be remote.

Log shipping with dual logging

Another way to avoid data loss is to use DB2's dual logging capability. With this feature, DB2 writes the same log records to two locations, one of which can be a remote file system. DB2 attempts to write every log record to both log paths. If an error occurs on one of the paths, it is recorded in the db2diag.log file and processing continues. If writes to one of the paths fail, DB2 does not try that path again until the current active log file is full, and DB2 does not re-synchronize the two log paths once the failed path becomes available again. This method is practical only when the network connection between the primary and standby systems is highly reliable.
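
A minimal sketch of enabling dual logging, assuming a DB2 release that supports the MIRRORLOGPATH database configuration parameter (older releases used the DB2_NEWLOGPATH2 registry variable instead; the database name and mirror path are hypothetical):

    db2 update db cfg for MYDB using MIRRORLOGPATH /remote/db2/mirrorlogs
    # on older releases, approximately:
    db2set DB2_NEWLOGPATH2=YES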

Using intelligent storage systems

A number of intelligent storage systems (such as IBM ESS, EMC, and HDS) are available today that provide disk mirroring to a local or remote storage system. Each of them offers synchronous or asynchronous methods for mirroring a file system. With an intelligent storage system, mirroring the log files between the primary and standby systems becomes much simpler and very reliable.

Conclusion

In short, log shipping is a relatively simple and inexpensive way to provide a redundant failover system. It is easy to set up and maintain, and can be used for both local and remote (disaster recovery) sites. This method of disaster recovery does not place an additional burden on the database administrator, because once it is set up, it runs automatically.
