DB2 diagnostic data archive usage and proper maintenance


This article describes the use and proper maintenance of the DB2 diagnostic data archive. Do you want a better way to manage the diagnostic files on your IBM® DB2® for Linux® or AIX® server? If so, this article will help you.

It provides a script to help you archive and maintain these files; by compressing and deleting old files, you can further simplify their management.

Introduction

With the increasing use of autonomic technology, DB2 database servers can generate large message log files, administration notification log files, and event log files. This is especially evident in large data warehouse environments with many logical and physical partitions. In addition, DB2 often generates a large amount of diagnostic data to support first occurrence data capture (FODC) when a problem occurs.

The increase in logging activity also consumes more file system space, which leads to manageability problems. Simply deleting the log files is not an option, because DB2 support personnel often ask users to provide historical diagnostic data, for example when investigating a recurring issue or after migrating an instance.

This article introduces a new script that performs maintenance tasks on the diagnostic logs and data of DB2 instances. The script is called db2dback.ksh and is available in the zip file in the download section below. It can run in both single-partition and multi-partition environments, and it accommodates different user configurations: physical partitions can share a diagnostic data path or each use an independent one.

Script Overview

The db2dback.ksh shell script archives diagnostic data from the diagnostic data path (DIAGPATH) configured for the DB2 database instance. It can also maintain the archived data in the target archive directory.

The DB2 instance owner should run this script regularly, either manually or through a scheduling tool such as cron.
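As a sketch of such scheduling, a crontab entry like the following could run the archive nightly. The installation path, schedule, option combination, and log file name are all illustrative, not prescribed by the script:

```shell
# Illustrative crontab entry (add as the instance owner via `crontab -e`):
# archive at 01:30 every night into a compressed tar archive and remove
# archives older than 30 days; all values here are examples.
30 1 * * * $HOME/sqllib/bin/db2dback.ksh -a -t -z -r 30 >> $HOME/db2dback.cron.log 2>&1
```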

The script currently handles DB2 instances on the AIX and Linux operating systems. In both environments it can process single-partition instances as well as multi-partition instances created with the Data Partitioning Feature (DPF), including Balanced Warehouse configurations. In a DPF environment, the script supports different instance configurations:

Share a single DIAGPATH among all partitions

Each physical partition uses a separate DIAGPATH

Note: DIAGPATH is a parameter configured by the DB2 database administrator. If this parameter is not set in the instance configuration, the default value $HOME/sqllib/db2dump of the DB2 instance owner is used. For more information about database manager configuration parameters, see the DB2 Information Center.
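The fallback described above can be sketched from the shell. This is a minimal illustration, not part of the script itself; the `db2 get dbm cfg` invocation assumes a configured DB2 environment, and the awk field pattern is an assumption about the output layout:

```shell
# Query DIAGPATH from the database manager configuration; if it is not
# set (or db2 is unavailable), fall back to the documented default in
# the instance owner's home directory.
DIAGPATH=$(db2 get dbm cfg 2>/dev/null | awk -F'= *' '/\(DIAGPATH\)/ {print $2}')
DIAGPATH=${DIAGPATH:-$HOME/sqllib/db2dump}
echo "Diagnostic data path: $DIAGPATH"
```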

Installation script

The DB2 instance owner can follow these steps to install the script:

Obtain the db2dback.zip file from the download section below.

Extract the db2dback.ksh script from the zip file.

Copy db2dback.ksh to the sqllib/bin directory of the DB2 database instance.

The script must have execute permission so that it can be executed remotely in DPF setups.

The following example commands set the correct execute permission:

 
 
  cp db2dback.ksh ~/sqllib/bin
  chmod 755 ~/sqllib/bin/db2dback.ksh

Get script help

You can run the db2dback.ksh script with the -h command line option to display help for the script options:

 
 
  $ db2dback.ksh -h
  04-01-2009 13:13:25: DIAGPATH is set to /home3/agrankin/sqllib/db2dump
  Usage: db2dback.ksh [-ahzvptl] [-o <dir>] [-r <days>]
  Options:
  -h         Print help message
  -a         Archive diagnostic data
  -r <days>  Remove diagnostic archives that are more
             than <days> days old. Can be combined with -a
  -o <dir>   Specify output directory
  -z         Compress diagnostic data tar archive
  -v         Verbose output.
  -p         Run diag data archiving in parallel
             (default is sequential).
  -l         Local execution. This is used in cases
             when db2dump is shared by all partitions.
             It also can be used if archive runs on
             just single physical partition.
  -t         Suboption for -a, archives data to a
             tar archive at destination.

The sections below describe the options in detail.

Specify the target archive directory

If no target directory is specified on the command line, the script uses the DIAGPATH/db2dump_archive directory as the default target, creating it if it does not exist.

You can create DIAGPATH/db2dump_archive as a link pointing to another local or NFS-mounted file system with sufficient space. In DPF setups with multiple physical partitions that do not share the diagnostic path directory, you must create this link on each physical partition.
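Relocating the default archive target to a larger file system could look like the following sketch. The paths are rooted in a scratch directory so the example is self-contained; in practice DIAGPATH comes from the instance configuration and the archive file system would be a real local or NFS mount:

```shell
# Illustrative paths in a scratch directory; substitute your real
# DIAGPATH and a large local or NFS-mounted file system.
BASE=$(mktemp -d)
DIAGPATH=$BASE/sqllib/db2dump
ARCHIVE_FS=$BASE/archive/db2diag

mkdir -p "$DIAGPATH" "$ARCHIVE_FS"
# Point the default archive target at the larger file system.
ln -s "$ARCHIVE_FS" "$DIAGPATH/db2dump_archive"
ls -ld "$DIAGPATH/db2dump_archive"
```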

Archive

Use the -a (archive) command line option to archive diagnostic data from DIAGPATH:

 
 
  db2dback.ksh -a [-o <dir>]

By default, on a DPF system the script tries to run a local copy of itself on each physical partition by using the rah command. If all physical partitions share a single DIAGPATH (for example, in a BCU configuration), this is not recommended; instead, use the -l sub-option to invoke only the local copy of the script.

The script renames db2diag.log and the administration notification log file by appending a timestamp suffix, and then creates new log files for the instance. The script then uses the UNIX mv command to move all files and directories out of DIAGPATH, except for the following:

The newly created db2diag.log and administration notification log files.

The Self-Tuning Memory Manager (STMM) log files in the stmmlog directory. STMM automatically manages the space used by its log files; the total generally does not exceed 50 MB.

Any diagnostic data file or first occurrence data capture (FODC) directory created within the last 15 minutes. This ensures that files are not split across different archives or targets if archiving starts while diagnostic data is still being dumped.

All files moved from DIAGPATH to the new target retain their original directory hierarchy. Files are moved into subdirectories that use the following naming convention:
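The move step can be sketched in plain shell. This is a simplified illustration, not the script's actual code: the hostname-based subdirectory name is an assumption, and the real script additionally excludes stmmlog, fresh FODC directories, and the live log files:

```shell
# Simplified sketch of the archive move; names are illustrative.
DIAGPATH=${DIAGPATH:-$HOME/sqllib/db2dump}
STAMP=$(date +%Y-%m-%d-%H%M%S)
TARGET=$DIAGPATH/db2dump_archive/db2dback.$(hostname).$STAMP

mkdir -p "$TARGET"
# Move top-level entries not modified in the last 15 minutes,
# skipping the archive directory itself.
find "$DIAGPATH" -mindepth 1 -maxdepth 1 -mmin +15 \
    ! -name db2dump_archive -exec mv {} "$TARGET"/ \;
```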

 
 
  db2dback.<...>.YYYY-MM-DD-hhmmss

Use the -t command line option to create a tar archive of all diagnostic data files in the target directory:

 
 
  db2dback.ksh -a -t [-o <dir>]

Files that have been copied into the tar archive are deleted from the source directory. The file exceptions listed above also apply to tar archives. The tar file uses the following naming convention:

 
 
  db2dback.<...>.YYYY-MM-DD-hhmmss.tar
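A hedged sketch of creating and naming such a tar archive follows; the file names, the hostname component, and the scratch directories are illustrative stand-ins, not the script's actual behavior:

```shell
# Illustrative only: pack already-moved diagnostic files into a
# timestamped tar archive and delete the packed source files.
SRC=$(mktemp -d)                          # stand-in for moved diag files
echo "sample diagnostic data" > "$SRC/db2diag.log.001"
STAMP=$(date +%Y-%m-%d-%H%M%S)
TARFILE=$(mktemp -d)/db2dback.$(hostname).$STAMP.tar

tar -cf "$TARFILE" -C "$SRC" .
rm -f "$SRC/db2diag.log.001"              # sources are removed after packing
```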

Use the -z command line sub-option to compress files in the target directory. By default, the script uses gzip; if gzip is not found on the system, it tries the compress utility. This option can be used together with the -t sub-option or on its own:

 
 
  db2dback.ksh -a -z [-o <dir>]
  db2dback.ksh -a -t -z [-o <dir>]

When sending data to a tar archive, the tool compresses the archive at the end. When data is moved without the -t option, each moved file in the target directory is compressed individually; only files larger than a certain number of KB are compressed.
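The compressor fallback described above can be sketched as follows; this mirrors the behavior the article describes rather than quoting the script itself:

```shell
# Pick a compressor: prefer gzip, fall back to the older compress
# utility, otherwise skip compression.
if command -v gzip >/dev/null 2>&1; then
    COMPRESS_CMD=gzip
elif command -v compress >/dev/null 2>&1; then
    COMPRESS_CMD=compress
else
    COMPRESS_CMD=""    # no compressor found; files stay uncompressed
fi
echo "Using compressor: ${COMPRESS_CMD:-none}"
```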

By default, diagnostic data archiving on a DPF system runs sequentially, meaning the tool archives data on one physical partition at a time. Use the -p sub-option to archive on all physical partitions simultaneously; this inserts the |& prefix into the DB2 rah command used by the script. For more information about the rah command, see the DB2 Information Center.
