Adjusting tablespaces on an Oracle RAC database was originally expected to be a matter of a few commands, but it turned out to be far more involved, because some of the commands did not behave the way I expected. Clearly I only knew how to run them without understanding the important details behind them, so everything had to be worked out by trial and error; I am just glad no real disaster came out of it. The situation was this: our Oracle RAC database (which uses raw devices) needed a tablespace adjustment. A 2 TB tablespace had to be shrunk, and the freed space expanded into another tablespace. We had done this operation before, and the commands from that time had been sorted out and kept.
This time, we changed the tablespace name.

Step 1: export the tablespace with Data Pump:

expdp \"sys/sys as sysdba\" DIRECTORY=dpump_dir DUMPFILE=20130203.dmp LOGFILE=20130203_exp.log TABLESPACES=BAKDATA

The export of BAKDATA generated a .dmp file of more than 800 GB.

Step 2: drop the tablespace:

drop tablespace bakdata including contents;

Executing this produced the following error:

ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01555: snapshot too old: rollback segment number 0 with name "SYSTEM" too small

The solution was to execute the following command instead:

drop tablespace BAKDATA including contents and datafiles;

Step 3: remove the logical volume:

rmlv -f users5_200g

This hit the following error:

0516-1008 rmlv: Logical volume users8_600g must be closed. If the logical volume contains a filesystem, the umount command will close the LV device.

At this point, two commands are useful:

lsvg -l datavg    (view the LV states)
lsvg -p datavg    (view the disk status of the VG)

The cause was that the database was still open: even though the tablespace had been dropped, some processes were still using the LV. To get the LV into the closed state we restarted and then shut down the database, after which the rmlv deletion ran without the problem above and the LV was removed successfully.

Step 4: recreate the LV and change its owner to the oracle user and dba group:

mklv -y 'users5_200g' -w 'N' -s 'N' -r 'N' -t 'raw' datavg 2400
chown oracle:dba /dev/rusers5_200g

Step 5: recreate the tablespace on the new raw device:

CREATE bigfile tablespace bakdata datafile '/dev/rusers5_200g' size 1199g;

Finally, import the data back:

impdp \"sys/sys as sysdba\" DIRECTORY=dpump_dir DUMPFILE=20130203.dmp LOGFILE=20130203_imp.log TABLESPACES=BAKDATA

Because this step takes a long time, I went out shopping with LD and left it to run slowly on its own, arranging to receive the results as alerts by text message or email. First came an alert that the filesystem holding the archived logs had reached 100% utilization, and then one that db1.srv of the RAC database was OFFLINE. So I immediately went back to the computer to look into the cause. First of all, there were no write operations going on in the database.
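Step 4 above sizes the new LV as 2400 physical partitions, and Step 5 creates an 1199g datafile on it; whether those two numbers agree depends on the volume group's PP size, which the post does not give. A quick back-of-envelope check, assuming a 512 MB PP size (an assumption; read the real value from lsvg datavg):

```shell
# Sanity-check the LV size implied by `mklv ... datavg 2400`.
# pp_mb is an assumption: take the real "PP SIZE" from `lsvg datavg`.
pp_mb=512          # physical partition size in MB (assumed)
pps=2400           # partitions requested in the mklv command
lv_gb=$(( pp_mb * pps / 1024 ))
echo "LV size: ${lv_gb} GB"    # prints: LV size: 1200 GB
```

With a 512 MB PP the LV comes out at 1200 GB, which would leave just enough headroom over the 1199g datafile.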
So how could so many archive logs be generated? With the help of my colleagues, we found that the impdp import itself generates archive logs (when the database is running in archivelog mode). On top of that, the database's level 1 incremental backup had been started from crontab at the same time, so the system's I/O load was also heavy, with a large amount of reads and writes being generated (normally there should be little activity at that point, since the tablespace adjustment was the only job running, yet the archive logs kept piling up). The first fix was to extend the filesystem holding the archive logs online; once that was done, db1.srv of the RAC database came back ONLINE automatically and database access returned to normal. The impdp import kept running throughout. Here is a side episode: my terminal had sat idle for too long and the session was automatically disconnected by the AIX system, but the impdp process was not terminated. When I checked, its parent process had changed to PID 1.
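The orphaned-process situation can be confirmed with ps and a small awk filter. The block below runs the filter over a canned sample line for illustration (the PID and field layout are made up); on the real host you would pipe the output of ps -ef into the same filter:

```shell
# After the session drop, the import should show PPID 1 (re-parented to init).
# Sample line stands in for `ps -ef` output: UID PID PPID CMD (fields simplified).
sample='oracle 123456 1 impdp'
pid=$(printf '%s\n' "$sample" | awk '$3 == 1 && /impdp/ {print $2}')
echo "orphaned import PID: $pid"    # prints: orphaned import PID: 123456
```

Matching on the whole line with /impdp/ (rather than a fixed field) keeps the filter working even though real ps -ef output has extra columns before the command.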
Init, the big boss, had taken the process over, and the log file 20130203_imp.log was still being updated. We then killed the RMAN level 1 incremental backup. After the impdp import completed successfully, we deleted all of the database's earlier backups and all of the archived logs, and took a fresh level 0 full backup of the database. With those operations done, we shrank the filesystem holding the archived logs back down, smaller than before. Everything was fine in the end. It seems there should be a way to keep an Oracle database with archive logging enabled from generating archive logs while importing data with impdp. I will sort that out later!
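On that closing question: I am not aware of a supported way to suppress redo for a Data Pump import on the releases current at the time of this post, but for reference, Oracle 12c later added an impdp transform that requests NOLOGGING loads. Note it is silently ignored if the database runs in FORCE LOGGING mode, which RAC setups with standbys often do. A sketch, reusing the file names from above:

```shell
# Oracle 12c and later only; has no effect under FORCE LOGGING.
impdp \"sys/sys as sysdba\" DIRECTORY=dpump_dir DUMPFILE=20130203.dmp \
    LOGFILE=20130203_imp.log TABLESPACES=BAKDATA \
    TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
```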