The IBM 3582 tape library holds 14 tapes, which form a single storage pool. This pool is used to back up file-system data from two nodes, and RMAN backup data is written to it as well; the data goes directly to the tape pool rather than being staged through a disk pool. Today a q libv check showed only 3 scratch volumes left. Two months ago the scratch count stayed between 7 and 8; two weeks ago it was down to 5 or 6, but I did not pay much attention at the time, assuming it was just growth in the amount of data. That no longer seems to be the case.
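For reference, the scratch count comes from the library volume query; on the TSM versions I have used, the LIBVOLUMES table can also be queried through the SELECT interface, though column names can differ by server level, so treat the second command as a sketch to verify on your own server:

tsm: SERVER1> q libv
tsm: SERVER1> select count(*) from libvolumes where status='Scratch'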
tsm: SERVER1> q vol

Volume Name       Storage     Device      Estimated    Pct  Volume
                  Pool Name   Class Name   Capacity   Util  Status
----------------  ----------  ----------  ----------  -----  --------
RP0000            3582POOL    3580         623,900.1   41.5  Full
RP0001            3582POOL    3580         692,041.7   10.6  Full
RP0002            3582POOL    3580         289,503.8    1.2  Full
RP0003            3582POOL    3580         690,432.4   10.8  Full
RP0004            3582POOL    3580         787,172.1    6.3  Full
RP0005            3582POOL    3580         773,027.9   78.4  Filling
RP0006            3582POOL    3580         381,468.0    2.0  Filling
RP0007            3582POOL    3580         723,873.8    0.7  Full
RP0008            3582POOL    3580         381,468.0   23.3  Filling
RP0010            3582POOL    3580         287,254.3   81.7  Full
RP0013            3582POOL    3580         693,343.4   10.2  Full
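A side note on the output above: several Full volumes sit at very low utilization (RP0007 at 0.7%, RP0002 at 1.2%). As a stop-gap, independent of the root cause, such volumes can be consolidated so the tapes return to scratch; this is standard MOVE DATA usage, sketched here with one of the volumes above:

tsm: SERVER1> move data RP0007
tsm: SERVER1> q proc

Lowering the pool's reclamation threshold (the RECLAIM parameter of UPDATE STGPOOL) achieves the same consolidation automatically.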
First check the configuration: the storage pool's Delay Period for Volume Reuse (REUSEDELAY) is 0, and the volumes' Scratch Volume attribute is Yes. This means a volume can return to scratch as soon as all the data on it has expired. For more background, see another post on my blog: "Dealing with scratch volumes in TSM". The dsmserv.opt server options file has EXPINTERVAL 24, i.e. expiration runs every 24 hours. That should be no problem either. Next, delete expired backups on the RMAN side: RMAN> delete noprompt obsolete
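These settings can be confirmed as follows; the dsmserv.opt path shown is the usual AIX install location, so adjust it to your environment, and the two extra RMAN commands simply show which retention policy drives DELETE OBSOLETE:

tsm: SERVER1> q stgpool 3582POOL f=d    (look for "Delay Period for Volume Reuse: 0")
# grep -i expinterval /usr/tivoli/tsm/server/bin/dsmserv.opt
RMAN> show retention policy;
RMAN> report obsolete;
RMAN> delete noprompt obsolete;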
Then run expiration processing manually on the TSM server: tsm: SERVER1> expire inventory
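If you want the session to block until expiration finishes, and then confirm nothing is still running, the following works (WAIT= is a documented option of EXPIRE INVENTORY):

tsm: SERVER1> expire inventory wait=yes
tsm: SERVER1> q proc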
Check the volume usage again; the problem is unchanged. So inspect the contents of a volume whose status is Full: tsm: SERVER1> q content RP0001 f=d
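Q CONTENT on a large volume prints a lot. COUNT= limits the output and NODE= filters by owning node, both standard parameters of the command, so a quicker first look might be (ERPDB here stands in for whichever node turns up):

tsm: SERVER1> q content RP0001 count=20 f=d
tsm: SERVER1> q content RP0001 node=ERPDB f=d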
The output contained data from a node that should not be backed up at all. TSM's schedule list for client nodes confirmed it: the data that schedule backs up matches the data on the volume (the schedule queries are sketched after this paragraph). Then it suddenly dawned on me. When we implemented TSM we had a backup plan for this node, so we defined the corresponding backup policy; in actual use we later found the backup unnecessary, so we stopped the client node's dsmc sched process and the backups stopped. After a later reboot, however, the dsmc sched process started again automatically, so the backups resumed and consumed a large amount of tape space. (The backup directory is the Oracle database's archive-log path, so the data volume is large.) Once identified, the problem is easy to solve.
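The schedule-to-node mapping that gave it away can be listed with the standard schedule queries; the domain and schedule names below are placeholders for whatever q sched shows in your environment:

tsm: SERVER1> q sched
tsm: SERVER1> q assoc <domain_name> <schedule_name>
tsm: SERVER1> q event <domain_name> <schedule_name> begind=-7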
First, query the filespaces: tsm: SERVER1> q filespace

Node Name        Filespace   FSID  Platform    Filespace   Is Files-  Capacity    Pct
                 Name                          Type        pace           (MB)   Util
                                                           Unicode?
---------------  ----------  ----  ----------  ----------  ---------  ---------  -----
ERPDB            /orc9i_db         TDP Oracle  API:ORACLE  No               0.0    0.0
                                   AIX
ERPDB            /install          TDP Oracle  JFS2        No         204,800.0    5.9
                                   AIX
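From here the cleanup is straightforward. A sketch of the remaining steps, with placeholder names, under the assumption that the client scheduler on this AIX host was auto-started from /etc/inittab (check your client; it may use an rc script instead):

(on the server: stop future backups and drop the unwanted data)
tsm: SERVER1> delete assoc <domain_name> <schedule_name> <node_name>
tsm: SERVER1> delete filespace <node_name> <filespace_name>

(on the client: stop the scheduler and keep it from coming back at reboot)
# ps -ef | grep "dsmc sched"
# kill <pid>
# lsitab -a          (look for the dsmc sched entry)
# rmitab <ident>

With REUSEDELAY at 0, the volumes should return to scratch once the deleted data is gone and reclamation or MOVE DATA empties them.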