The CRS and database software for RAC had been installed and upgraded to 10.2.0.4 when I hit an error I could not resolve; the only way out seemed to be reinstalling CRS.
(The error is similar to "srvctl add service fails with PROC-5" [ID 466952.1], except that in my case it was raised while creating the database, at the point where the service is added.)
The earlier installation had taken about five hours, so repeating everything would have been too costly. I then found a document written by an ITPub expert online on reinstalling CRS, and it actually worked.
My database is on raw devices: AIX 5310 + PowerHA 5.5 + Oracle 10.2.0.4.
I followed the CRS rebuild portion of the detailed procedure below (steps 1 through 10). Because nothing in my configuration had changed, I did not have to modify any of the items the procedure mentions, such as the VIP address or the voting disk; I only cleared everything it says to clear. CRS was rebuilt successfully, and checking the CRS version confirmed it is 10.2.0.4.
The original document follows.
A newly installed RAC database had to be rebuilt because the storage was being replaced. Since the host operating system was not reinstalled, only the CRS and the database needed to be rebuilt.
Environment: AIX 5306 + HACMP 5.2 + Oracle 10.2.0.1, on raw devices. Information such as node names and the network configuration also changed.
The procedure is as follows:
1. Modify the host configuration on both nodes: because the network changed, the VIPs must change as well, so edit the /etc/hosts file and point the VIP entries at the new IP addresses.
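As an illustration only (the VIP host names dm1-vip and dm2-vip come from the tnsnames.ora example later in this procedure; the IP addresses are placeholders), the VIP entries in /etc/hosts would look something like:

    10.0.1.11   dm1-vip
    10.0.1.12   dm2-vip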
2. On both nodes, modify the /etc/oracle/ocr.loc file: replace the device name after ocrconfig_loc= with the new raw device that stores the OCR (or with the file name, if a cluster file system is used).
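A minimal sketch of what ocr.loc typically contains (the device name /dev/rocr is the one used in step 5 below; local_only=FALSE is the usual value for a clustered OCR):

    ocrconfig_loc=/dev/rocr
    local_only=FALSE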
3. Delete the file /etc/oracle/scls_scr/<node name>/oracle/cssfatal on both nodes.
4. On both nodes, go to the $ORA_CRS_HOME/install directory, open the paramfile.crs file, and update the configuration items that have changed: CRS_OCR_LOCATIONS, CRS_VOTING_DISKS, CRS_NODEVIPS.
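As a rough sketch only (the raw device names, node names, netmask and interface are placeholders; only the variable names come from the procedure), the relevant lines in paramfile.crs would look something like:

    CRS_OCR_LOCATIONS=/dev/rocr
    CRS_VOTING_DISKS=/dev/rvote
    CRS_NODEVIPS='dm1/dm1-vip/255.255.255.0/en0,dm2/dm2-vip/255.255.255.0/en0'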
5. Run dd to wipe the raw device that stores the OCR (for a cluster file system, simply delete the OCR file). Here the raw device name is rocr: dd if=/dev/zero of=/dev/rocr bs=4096 count=10000. If the OCR already exists and only needs to be rebuilt, this step is required. Even when the raw devices are brand new, you may run into inexplicable problems in the later steps, so clear them with dd anyway, and do not make the dd too small: with bs=4096, a count such as 10 is too small and will cause problems later.
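The procedure only shows the dd for the OCR device; if the voting disk is also on a raw device (/dev/rvote here is a placeholder name), it is common practice to wipe it the same way with a generous count before rebuilding CRS:

    dd if=/dev/zero of=/dev/rocr bs=4096 count=10000
    dd if=/dev/zero of=/dev/rvote bs=4096 count=10000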
6. On both nodes, edit the $ORA_CRS_HOME/install/rootconfig file and change the variables at the beginning of the file: CRS_OCR_LOCATIONS, CRS_VOTING_DISKS, CRS_NODEVIPS (the same values as in step 4).
7. If you are working on the hosts through remote telnet or SSH, set the DISPLAY variable: export DISPLAY=x.x.x.x:0.0, where x.x.x.x is the IP address of the terminal you are working from. Run X server software such as Xmanager on that terminal.
8. Run $ORA_CRS_HOME/install/rootconfig as the root user on node 1. Be sure not to run rootinstall.
9. After rootconfig has run to completion on node 1, run $ORA_CRS_HOME/install/rootconfig on node 2. Normally the VIP configuration window pops up at this point. If it does not, check whether the only failure is that vipca could not start.
10. Run crs_stat -t on both nodes. If CRS shows no resources, or the VIP-related resources are started (in the case where the VIPs have already been configured), CRS has been rebuilt successfully.
11. If the VIPs have not been configured, run vipca as the root user and configure them. Note: in the window that pops up, when prompted to choose a network interface, select the public interface. (If the interfaces shown look wrong, run the oifcfg command from a shell to check the network interface configuration, and use it to reconfigure the interfaces if necessary.)
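A sketch of typical oifcfg usage (the interface name en0 and the subnet are placeholders for the actual public network):

    oifcfg getif                                 # list the interfaces currently registered with CRS
    oifcfg delif -global en0                     # remove a wrong entry if necessary
    oifcfg setif -global en0/10.0.1.0:public     # re-register en0 as the public interface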
12. CRS is now configured. Use crs_stat to check that it is running normally; if it is not, check the CRS logs. At this point resources such as the VIP, ONS and GSD should be running. Run ifconfig -a on both nodes and confirm that the VIP is bound to the public NIC (make sure it really is on the public NIC: sometimes the VIP starts but is actually bound to the private NIC).
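For example (treating en0 as the public interface and en1 as the private interconnect, which is an assumption about the environment):

    ifconfig -a
    # the VIP should appear as an additional address on the public interface (e.g. en0),
    # not on the private interconnect interface (e.g. en1)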
13. Clear the original listener configuration and make sure the listener is stopped. Run netca and configure the listener; once the configuration is complete, the listener is added to CRS automatically.
14. Since the original database creation scripts still exist, open them and change the data file names to the new ones (the new raw device names).
15. Run the database creation script (a shell script) on node 1.
16. After a patient wait, the database on node 1 is created.
17. Run the database creation script (a shell script) on node 2. This step finishes quickly.
18. Modify tnsnames.ora on both nodes along the following lines (adjust to the actual environment):
LISTENERS_DMDB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dm1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = dm2-vip)(PORT = 1521))
  )

DMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dm1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = dm2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dmdb)
    )
  )

RAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dm2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dmdb)
      (INSTANCE_NAME = rac2)
    )
  )

RAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dm1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dmdb)
      (INSTANCE_NAME = rac1)
    )
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )
19. Set the initialization parameter remote_listener to 'LISTENERS_DMDB' on both nodes. Set local_listener on node 1 to '(ADDRESS=(PROTOCOL=TCP)(HOST=ip1)(PORT=1521))' and on node 2 to '(ADDRESS=(PROTOCOL=TCP)(HOST=ip2)(PORT=1521))', where ip1 and ip2 are the VIP addresses of node 1 and node 2 respectively (note that they must be IP addresses, not host names). local_listener is set this way to avoid ORA-12545 errors when connection load balancing is used.
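A minimal sketch of setting these parameters with ALTER SYSTEM, once an spfile is in use (step 20 below creates one); the SIDs rac1/rac2 come from the tnsnames example above, and the VIP addresses are placeholders:

    alter system set remote_listener='LISTENERS_DMDB' scope=both sid='*';
    alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.11)(PORT=1521))' scope=both sid='rac1';
    alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.12)(PORT=1521))' scope=both sid='rac2';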
20. Create an spfile on one of the nodes. Here the spfile is placed on the raw device rspfile: create spfile='/dev/rspfile' from pfile='xxxx'
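So that both instances actually start from this shared spfile, each node's $ORACLE_HOME/dbs/init<SID>.ora is usually reduced to a single pointer line (this detail is not spelled out in the original procedure; the path matches the raw device used above):

    spfile='/dev/rspfile'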
21. Shut down the instances on both nodes. Add the database and instances to CRS so that you can monitor them with CRS commands and start and stop the instances with srvctl:
srvctl add database -d dbname -o $ORACLE_HOME -y MANUAL
srvctl add instance -d dbname -n <node name 1> -i <instance name 1>
srvctl add instance -d dbname -n <node name 2> -i <instance name 2>
Here, instance name 1 and instance name 2 must match the ORACLE_SID of node 1 and node 2 respectively.
Note: in version 10.2.0.1 the instance resource depends on the VIP, so if, for example, a NIC on a node goes down or a VIP bug strikes, the instance goes down with it. To avoid this, skip this step and do not add the instances as CRS resources.
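If the database and instances have been registered, day-to-day operation looks something like the following (dbname and the instance name are the placeholders used above):

    srvctl start database -d dbname                       # start all instances
    srvctl stop instance -d dbname -i <instance name 1>   # stop a single instance
    srvctl status database -d dbname                      # show the status of all instances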
All of the work is now complete.