1. Graphical interface: I use MobaXterm Personal Edition. SSH into the database server as the oracle user, then run ./runInstaller and the graphical installer window pops up. Other tools such as VNC can also provide a graphical display; they are not covered here. The installation itself follows below.
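MobaXterm forwards X11 automatically; with plain OpenSSH the equivalent is the -X flag. A minimal sketch (host and user are placeholders) for confirming the X11 channel is up before launching the installer:

```shell
# Connect with X11 forwarding enabled (placeholder host/user):
#   ssh -X oracle@db_server
# Once logged in, check that a display is actually forwarded before
# starting ./runInstaller -- it needs a working DISPLAY to open its window.
if [ -n "$DISPLAY" ]; then
  echo "X11 forwarding is up (DISPLAY=$DISPLAY); ./runInstaller can open its window"
else
  echo "DISPLAY is empty: reconnect with ssh -X (or -Y) before running ./runInstaller" >&2
fi
```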
2. Software download: download the patch set from MOS (My Oracle Support).
| Installation Type | Zip Files |
| --- | --- |
| Oracle Database (includes Oracle Database, Oracle RAC, and Deinstall). Note: you must download both zip files to install Oracle Database Enterprise Edition. | p21419221_121020_\<platform\>_1of10.zip, p21419221_121020_\<platform\>_2of10.zip |
| Oracle Database SE2 (includes Oracle Database SE2, Oracle RAC, and Deinstall). Note: you must download both zip files to install Oracle Database SE2. | p21419221_121020_\<platform\>_3of10.zip, p21419221_121020_\<platform\>_4of10.zip |
| Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart). Note: you must download both zip files to install Oracle Grid Infrastructure. | p21419221_121020_\<platform\>_5of10.zip, p21419221_121020_\<platform\>_6of10.zip |
| Oracle Database Client | p21419221_121020_\<platform\>_7of10.zip |
| Oracle Gateways | p21419221_121020_\<platform\>_8of10.zip |
| Oracle Examples | p21419221_121020_\<platform\>_9of10.zip |
| Oracle GSM | p21419221_121020_\<platform\>_10of10.zip |
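Staging the two Grid Infrastructure zips can be sketched as follows; "Linux-x86-64" stands in for the platform string in the table, and /tmp/grid_stage is a hypothetical staging directory:

```shell
# Extract both Grid Infrastructure archives into ONE directory -- the
# second zip completes the tree created by the first.
STAGE=${STAGE:-/tmp/grid_stage}
mkdir -p "$STAGE"
for z in p21419221_121020_Linux-x86-64_5of10.zip \
         p21419221_121020_Linux-x86-64_6of10.zip; do
  if [ -f "$z" ]; then
    unzip -q -o "$z" -d "$STAGE"
  else
    echo "missing $z in $(pwd)" >&2   # download it from MOS first
  fi
done
```

After extraction the installer (runInstaller) and runcluvfy.sh live under the unzipped grid directory inside $STAGE.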
3. Installation
After the preceding preparation steps test out successfully,
run the cluvfy pre-crsinst check, as below:
./runcluvfy.sh stage -pre crsinst -n node1_hostname,node2_hostname
..... (log output omitted)
Pre-check for cluster services setup was successful.
After runcluvfy passes, proceed with the installation.
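The pre-check can also be scripted so an automated install stops early on failure; recent versions of runcluvfy.sh exit non-zero when checks fail. GRID_SW below is a placeholder for wherever the grid software was unzipped:

```shell
# Run the cluvfy pre-crsinst stage and abort if it reports failures.
GRID_SW=${GRID_SW:-/u01/software/grid}
if [ -x "$GRID_SW/runcluvfy.sh" ]; then
  "$GRID_SW/runcluvfy.sh" stage -pre crsinst -n node1_hostname,node2_hostname -verbose \
    || { echo "cluvfy pre-check failed; fix the reported items first" >&2; exit 1; }
else
  echo "runcluvfy.sh not found under $GRID_SW (adjust GRID_SW)" >&2
fi
```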
Create the OCR-related disk group here.
We create only the OCR_VTE disk group.
The three disks OCR_VTE01/02/03 come from the previous step, where we created them with createdisk.
At this step I hit a bug: if I named the disk group OCR_VOTE, the installation failed.
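For reference, the disk labels from the previous step could be created with oracleasm along these lines; the /dev/mapper device paths are assumptions, and the sketch falls back to printing the commands when oracleasm is not installed:

```shell
# Hedged sketch: label the three OCR disks for ASM. Substitute the real
# shared multipath devices for the placeholder /dev/mapper/* paths.
run() {
  if command -v oracleasm >/dev/null 2>&1; then "$@"; else echo "would run: $*"; fi
}

run oracleasm createdisk OCR_VTE01 /dev/mapper/mpatha
run oracleasm createdisk OCR_VTE02 /dev/mapper/mpathb
run oracleasm createdisk OCR_VTE03 /dev/mapper/mpathc
run oracleasm scandisks      # other cluster nodes pick the labels up with scandisks
run oracleasm listdisks      # should now list OCR_VTE01..03
```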
Case 2.5: The same storage subsystem is shared by different clusters, and the same diskgroup name exists in more than one cluster.
<adr_home>/crs/<node>/crs/trace/ocrdump_<pid>.trc
2015-07-17 16:57:00.532160: ocrraw: AMDU-00211: Inconsistent disks in DiskGroup OCR
Solution:
The issue was investigated under bug 21469989. The cause is that multiple clusters have the same diskgroup name while seeing the same shared disks; the workaround is to change the diskgroup name for the new cluster.
An example would be that both cluster1 and cluster2 see the same physical disks /dev/mapper/disk1-10: disk1-5 are allocated to cluster1 and disk6-10 to cluster2, yet both clusters try to use the same diskgroup name DGSYS.
Ref: Bug 21469989 - CLSRSC-507 ROOT.SH FAILING ON NODE 2 WHEN CHECKING GLOBAL CHECKPOINT
The bug I hit involved the name of the OCR disk group: with the name OCR_VOTE, root.sh kept failing on the second node with AMDU-00211: Inconsistent disks in DiskGroup OCR. On the subsequent reinstall I therefore named the disk group OCR_VTE rather than OCR_VOTE.
After an SR was raised with Oracle, the analysis showed that the above bug had been hit.
The detailed analysis of this bug will be written up in a separate article once this series is largely complete.
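Given this bug, the clash can be checked for up front. A small sketch (the diskgroup names below are illustrative; on a real system they would come from something like asmcmd lsdg on the other cluster) that rejects a candidate name already in use:

```shell
# Case-insensitive check that a candidate diskgroup name is not already
# used by another cluster on the same shared storage (requires bash 4+).
name_in_use() {
  local candidate=$1; shift
  local existing
  for existing in "$@"; do
    if [ "${candidate^^}" = "${existing^^}" ]; then return 0; fi
  done
  return 1
}

# Names as another cluster sharing the storage might report them:
existing_dgs="OCR_VOTE DATA FRA"
name_in_use OCR_VOTE $existing_dgs && echo "OCR_VOTE clashes: pick another name"
name_in_use OCR_VTE  $existing_dgs || echo "OCR_VTE is safe to use"
```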
Make sure root.sh runs successfully on the first node before running it on the second node.
Only the log from node 2 is attached here.
-bash-4.1$ sudo /u01/app/12.1.0/grid/root.sh
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]:
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2017/01/11 03:05:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/01/11 03:05:28 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/01/11 03:05:29 CLSRSC-363: User ignored Prerequisites during installation
OLR initialization - successful
2017/01/11 03:06:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node2'
...... (output omitted)
CRS-4123: Oracle High Availability Services has been started.
2017/01/11 03:14:22 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/01/11 03:14:37 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Verify the grid installation:
./crsctl stat res -t
./crsctl check crs
./crsctl check cluster -all
./cluvfy stage -post crsinst -n node1_name
./cluvfy stage -post crsinst -n node2_name
Post-check for cluster services setup was successful.
olsnodes -n
ocrcheck
crsctl query css votedisk
Once the commands above all report a healthy state, the cluster installation is successful.
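As a closing sanity check, the output of crsctl check cluster -all can be scanned mechanically. A sketch (the sample text mimics real CRS-453x messages) that flags any daemon not reported online:

```shell
# Print every CRS check line that does not say "online"; prints nothing
# when the whole stack is healthy.
flag_offline() {
  printf '%s\n' "$1" | grep 'CRS-' | grep -iv 'is online' || true
}

sample='CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

flag_offline "$sample"    # prints nothing: all three daemons are online
```

On a live cluster you would feed it the captured output, e.g. flag_offline "$(crsctl check cluster -all)".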
======================ended======================
[Original] Zero-Downtime Oracle 12c Upgrade Using GoldenGate, Part 4: Cluster Installation