The user recently reported that the OGG Extract process deployed on RAC Node 1 has been down for several days. We need to troubleshoot the problem and restore normal operation.

Problem Analysis
Step 1: Run the info all command in GGSCI to view the status of the current Extract process; the status is ABENDED. In this case, the simplest approach is to check OGG's error log, which is the file named ggserr.log:
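A minimal GGSCI session illustrating this check; the group name EXTA, the timing values, and the OGG installation directory shown in the prompt are illustrative assumptions, not taken from the original case:

GGSCI (rac01) 1> info all

Program     Status      Group       Lag at Chkpt    Time Since Chkpt

MANAGER     RUNNING
EXTRACT     ABENDED     EXTA        00:00:00        96:20:15

The ggserr.log file sits in the OGG installation directory, so the most recent errors can be read with, for example:

[oracle@rac01 ogg]$ tail -50 ggserr.log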
The OGG-01028 error is reported here: OGG cannot find the archive log /u01/app/oracle/archive2/2_32522_828663513.dbf. Because this is a RAC environment using NFS, Node 2's archive directory is mounted at /u01/app/oracle/archive2/ on Node 1, so this path should contain Node 2's archive logs. We check the directory and find that the file indeed does not exist there. Since this is a RAC setup, we also check Node 1's own archive directory and find the log file there. The reason is that the user restarted the servers in recent days; because of the VIP failover, logs that should have been archived on Node 2 were instead archived on Node 1. Therefore, we only need to copy those logs back to Node 2.
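A sketch of the checks described above; the Node 1 local archive directory /u01/app/oracle/archive1/ is an assumed name for illustration:

[oracle@rac01 ~]$ ls -l /u01/app/oracle/archive2/2_32522_828663513.dbf
ls: cannot access /u01/app/oracle/archive2/2_32522_828663513.dbf: No such file or directory

[oracle@rac01 ~]$ ls /u01/app/oracle/archive1/ | grep 2_32522
2_32522_828663513.dbf

The 2_ prefix in the file name (archive log format %t_%s_%r) marks this as a thread 2 log, i.e. one belonging to Node 2's instance, which is why it should live in Node 2's archive destination.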
Solution Process

Step 1: Copy the logs
On Node 1, we copy all the logs that should have been archived on Node 2 back to Node 2, using the scp command for the remote copy, as shown below:

[oracle@rac01 archive]$ scp 2_32* 192.168.30.3:/u01/app/oracle/release
2_32522_828663513.dbf    100%   10MB  10.0MB/s   00:01
2_32523_828663513.dbf    100%   1024   1.0KB/s   00:00
...

After the copy finishes, we start the Extract process again and it starts normally.
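A minimal restart session, again assuming the Extract group is named EXTA:

GGSCI (rac01) 1> start extract EXTA

Sending START request to MANAGER ...
EXTRACT EXTA starting

GGSCI (rac01) 2> info all

Program     Status      Group       Lag at Chkpt    Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTA        00:00:00        00:00:05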
Key knowledge point: In a RAC environment, when a node shuts down unexpectedly, its services fail over to a surviving node (the VIP drifts), so archive logs that would normally be generated on the failed node are generated on another node instead. This is a common case of archive log drift.
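From the database side, one way to spot this drift is to check where each thread's logs were actually archived; a sketch query, with the sequence range chosen purely for illustration:

SQL> SELECT thread#, sequence#, name
  2  FROM v$archived_log
  3  WHERE thread# = 2 AND sequence# >= 32522
  4  ORDER BY sequence#;

The NAME column records the full path of each archived log, so any thread 2 logs that landed in Node 1's archive destination are immediately visible.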