Oracle 11gR2 RAC installation error: clock not synchronized

System Environment:

Operating System: Red Hat Enterprise Linux 5

Cluster: Oracle GI (Grid Infrastructure)

Oracle: Oracle 11.2.0.1.0

Figure: RAC system architecture

For Oracle 11g RAC, the first step is to build the GI (Grid Infrastructure) stack.


Error:

An error is reported when the root.sh script is executed on node 2 (xun2):

 

[root@xun2 install]# /u01/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set:
ORACLE_OWNER = grid
ORACLE_HOME = /u01/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2014-07-05 02:00:09: Parsing the host name
2014-07-05 02:00:09: Checking for super user privileges
2014-07-05 02:00:09: User has super user privileges
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node xun1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'xun2'
CRS-2676: Start of 'ora.mdnsd' on 'xun2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'xun2'
CRS-2676: Start of 'ora.gipcd' on 'xun2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'xun2'
CRS-2676: Start of 'ora.gpnpd' on 'xun2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xun2'
CRS-2676: Start of 'ora.cssdmonitor' on 'xun2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'xun2'
CRS-2672: Attempting to start 'ora.diskmon' on 'xun2'
CRS-2676: Start of 'ora.diskmon' on 'xun2' succeeded
CRS-2676: Start of 'ora.cssd' on 'xun2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'xun2'
CRS-2674: Start of 'ora.ctssd' on 'xun2' failed
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /u01/11.2.0/grid/bin/crsctl start resource ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE
Start of resource "ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE" failed
Failed to start CTSS
Failed to start Oracle Clusterware stack
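
CTSS acts as the active cluster time source only when it detects no NTP or vendor time-synchronization software, which is exactly what the octssd.log below reports. As a quick cross-check (a hedged example; it assumes the grid owner's environment on the already-running node points at the same GI home), you can ask the node that is already up which mode its CTSS is running in:

[grid@xun1 ~]$ /u01/11.2.0/grid/bin/crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Active mode means CTSS itself tries to adjust a joining node's clock, and it refuses to do so when the offset is as large as the one shown next.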

View the CTSS log (octssd.log) on xun2:

[root@xun2 ctssd]# more octssd.log
Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
01:36:39.677: [CTSS][3046594240] Oracle Database CTSS Release 11.2.0.1.0 Production Copyright 2006, 2007 Oracle. All rights reserved.
01:36:39.677: [CTSS][3046594240] ctss_scls_init: SCLs Context is 0x88205f0
01:36:39.685: [CTSS][3046594240] ctss_css_init: CSS Context is 0x8820698
01:36:39.686: [CTSS][3046594240] ctss_clsc_init: CLSC Context is 0x8820fd8
01:36:39.686: [CTSS][3046594240] ctss_init: CTSS production mode
01:36:39.686: [CTSS][3046594240] ctss_init: CTSS_REBOOT=TRUE. Overriding 'reboot' argument as if 'ctssd reboot' is executed. Turn on startup step sync.
01:36:39.695: [CTSS][3046594240] sclsctss_gvss2: NTP default pid file not found
01:36:39.695: [CTSS][3046594240] sclsctss_gvss8: Return [0] and NTP status [1].
01:36:39.695: [CTSS][3046594240] ctss_check_vendor_sw: Vendor time sync software is not detected. status [1].
01:36:39.695: [CTSS][3046594240] ctsscomm_init: The Socket name is [(ADDRESS=(PROTOCOL=tcp)(HOST=xun2))]
01:36:39.772: [CTSS][3046594240] ctsscomm_init: Successful completion.
01:36:39.772: [CTSS][3046594240] ctsscomm_init: PORT = 31165
01:36:39.772: [CTSS][3020295056] CTSS connection handler started
[CTSS][3009805200] clsctsselect_mm: Master Monitor thread started
[CTSS][2999315344] ctsselect_msm: Slave Monitor thread started
01:36:39.772: [CTSS][2988825488] ctsselect_mmg: The local nodenum is 2
01:36:39.776: [CTSS][2988825488] ctsselect_mmg2_5: Pub data for member [1]. {Version [1] Node [1] Priv node name [xun1] Port num [53367] SW version [186646784] Mode [0x40]}
01:36:39.779: [CTSS][2988825488] ctsselect_mmg4: Successfully registered with [CTSSMASTER]
01:36:39.779: [CTSS][2988825488] ctsselect_mmg6: Receive reconfig event. Inc num [2] New master [2] members count [1]
01:36:39.780: [CTSS][2988825488] ctsselect_mmg8: Host [xun1] Node num [1] is master
01:36:39.781: [CTSS][2988825488] ctsselect_sm2: Node [1] is the CTSS master
01:36:39.782: [CTSS][2988825488] ctssslave_meh1: Master private node name [xun1]
01:36:39.782: [CTSS][2988825488] ctssslave_msh: Connect String is (ADDRESS=(PROTOCOL=tcp)(HOST=xun1)(PORT=53367))
[clsdmt][2978335632] Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=xun2DBG_CTSSD))
01:36:39.783: [clsdmt][2978335632] PID for the Process [24020], connkey 11
01:36:39.783: [CTSS][2988825488] ctssslave_msh: Forming connection with CTSS master node [1]
01:36:39.784: [clsdmt][2978335632] Creating PID [24020] file for home /u01/11.2.0/grid host xun2 bin ctss to /u01/11.2.0/grid/ctss/init/
01:36:39.786: [clsdmt][2978335632] Writing PID [24020] to the file [/u01/11.2.0/grid/ctss/init/xun2.pid]
01:36:39.786: [CTSS][2988825488] ctssslave_msh: Successfully connected to master [1]
01:36:39.827: [CTSS][2988825488] ctssslave_swm: The magnitude [228530967053 usec] of the offset [-228530967053 usec] is larger than [86400000000 usec] sec which is the CTSS limit.
01:36:39.827: [CTSS][2988825488] ctsselect_mmg9_3: Failed in clsctsselect_select_mode [12]: Time offset is too much to be corrected
01:36:40.582: [CTSS][2978335632] ctss_checkcb: clsdm requested check alive. Returns [1, 40000050]
01:36:40.582: [CTSS][2988825488] ctsselect_mmg: CTSS daemon exiting [12].
01:36:40.582: [CTSS][2988825488] CTSS daemon aborting

View the clocks on the two nodes:

[root@xun2 ctssd]# date
Sat Jul 5 02:06:09 CST 2014
[root@xun2 ctssd]# date 0707173614
Mon Jul 7 17:36:00 CST 2014

The clocks on the two nodes are about two and a half days apart, far beyond the 24-hour (86400000000 usec) offset that CTSS is able to correct, which is why Cluster Time Synchronization Service startup fails on xun2.
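
A minimal way past this on a test system like the one above (a sketch, not a definitive procedure; it assumes xun1's time is the one to keep, that no NTP is configured on either node, and that the GI home is /u01/11.2.0/grid): align xun2's clock with xun1, deconfigure the partially configured clusterware on xun2, and rerun root.sh there.

[root@xun2 ~]# date -s "2014-07-07 17:36:14"      # set xun2's clock to match xun1 (or sync both nodes to a common NTP source)
[root@xun2 ~]# /u01/11.2.0/grid/perl/bin/perl /u01/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force   # undo the failed root.sh attempt on node 2
[root@xun2 ~]# /u01/11.2.0/grid/root.sh           # rerun root.sh once the clocks agree

For a longer-term setup, either run ntpd with the -x (slew) option on both nodes so that CTSS stays in observer mode, or remove the NTP configuration entirely and let CTSS remain the active time source.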

