"Oracle Cluster" Oracle DATABASE 11G RAC detailed tutorial how RAC works and related components (iii)


How RAC Works and related components (iii)

Overview: This document grew out of the previous article's Oracle Basic Operations Manual, a summary of the author's vacation study of Oracle fundamentals, written both as a systematic review and as a handy reference. Before getting into the Oracle RAC installation and usage tutorials, this article first lays out the overall ideas and structure of the series. Because readers come with different levels of background knowledge, the series starts from the preparation and planning for Oracle RAC and works up to actually deploying it. The work began under the guidance of Dr. Tang; configuring and installing the database cluster took two to three months of exploration, and many problems were encountered along the way, which this series also documents. This article is original/compiled by the author; when reproducing it, please credit the original source: How RAC Works and Related Components (iii).

Bai Ningsu July 16, 2015

How RAC works and related components

Oracle RAC is an extension of a single-instance configuration to multiple instances: two or more nodes (instances) use one shared database (one database is mounted and opened by multiple instances). Each instance has its own CPUs and physical memory, and its own SGA and background processes. Compared with a traditional single-instance Oracle system, the most visible differences are in the System Global Area (SGA) and the background processes. The biggest difference in the SGA is the addition of the GRD (Global Resource Directory). This memory area records which resources the RAC cluster database holds and, for each data block, its location, version, distribution, and current state; this matters because in a RAC architecture a data block may have a copy in every instance's SGA, and RAC must keep track of all of them. The GRD only stores this information, it does not manage it; management is the job of the background processes GCS and GES. So we have multiple Oracle instances accessing one shared database, each with its own SGA, PGA, and background processes, which should look familiar, because in a RAC configuration each instance still needs all of these background processes. The following sections look at how RAC works and how it operates from several angles.

(i) SCN

The SCN is the mechanism Oracle uses to track the ordering of changes within a database; you can think of it as a high-precision clock. Every redo log entry, undo block, and data block carries an SCN, and Oracle's consistent read, current read, and multiversion block mechanisms all depend on it. In RAC, GCS is responsible for maintaining the SCN globally. The default used to be the Lamport SCN generation algorithm, which works roughly as follows: SCNs ride along in the messages exchanged between nodes; each node compares a received SCN with its local SCN and, if the local one is smaller, raises it to the received value; if there is little traffic between nodes, they still exchange SCNs periodically on their own. This is why even an idle node generates some redo. The other option is the broadcast algorithm: after every commit, the node broadcasts the new SCN to the other nodes. This puts some extra load on the system, but it guarantees that every node sees the committed SCN immediately. Each algorithm has its pros and cons: Lamport has low overhead but introduces a delay between nodes, while broadcast adds load but has no delay. Oracle 10g RAC defaults to the broadcast algorithm, which can be confirmed in the alert.log: "Picked broadcast on commit scheme to generate SCNs".
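As a quick illustration of the per-instance SCN view, a minimal SQL*Plus sketch (assumes SYSDBA access on any instance of a running RAC database; on 10g the commit propagation behaviour was influenced by the max_commit_propagation_delay parameter, which is deprecated in later releases):

    -- Compare the SCN currently seen by each instance of the cluster.
    SELECT inst_id, current_scn FROM gv$database ORDER BY inst_id;

    -- On 10g, show the parameter that governed the commit SCN propagation scheme.
    SHOW PARAMETER max_commit_propagation_delay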

(ii) GES/GCS principles of RAC

The Global Enqueue Service (GES) is primarily responsible for maintaining the consistency of the dictionary cache and the library cache. The dictionary cache is a cache of data dictionary information stored in an instance's SGA for fast access. Because dictionary information is held in memory, a modification to the dictionary (such as a DDL statement) on one node must be propagated immediately to the dictionary cache of every node; GES handles this and eliminates the differences between instances. For the same reason, library cache locks are taken on database objects while the SQL statements that affect those objects are parsed. These locks must be maintained across instances, and the global enqueue service must ensure that there are no deadlocks among multiple instances requesting access to the same object. The LMON, LCK, and LMD processes work together to implement the global enqueue service. GES is an important service for coordinating resources between nodes in a RAC environment other than the data blocks themselves (which are handled by GCS). To keep the instances in the cluster synchronized, two virtual services are implemented: the Global Enqueue Service (GES), which controls access to locks, and the Global Cache Service (GCS), which controls access to data blocks.

GES is an extension of the Distributed Lock Manager (DLM), the mechanism Oracle Parallel Server used to manage locks and blocks. In a clustered environment you need to restrict access to database resources that, in a single-instance database, are protected by latches or locks. For example, objects in the dictionary cache are protected by implicit locks, and objects in the library cache must be protected by a pin while they are referenced. In a RAC cluster these objects become resources protected by global locks. GES is the complete RAC component responsible for communicating global locks between the instances of the cluster; each resource has a master instance that records its current state, and the state of the resource is also recorded on every instance with an interest in it. GCS, the other RAC component, coordinates access to data blocks between the instances. Access to these blocks is recorded in the Global Resource Directory (GRD), a virtual memory structure spread across all instances. Each block has a master instance that maintains its entry in the GRD, which records the current state of the block. GCS is the mechanism Oracle uses to implement Cache Fusion. The blocks and locks managed by GCS and GES are called resources, and access to them must be coordinated across all instances in the cluster. This coordination happens at both the instance level and the database level: resource coordination at the instance level is called local resource coordination; coordination at the database level is called global resource coordination.

The local resource coordination mechanism is similar to that of a single-instance Oracle database and covers block-level access, space management, the dictionary cache, library cache management, row-level locks, and SCN generation. Global resource coordination is specific to RAC and uses additional memory structures in the SGA, extra algorithms, and extra background processes. GCS and GES are designed to be transparent to the application; in other words, you do not need to modify an application because the database now runs on RAC, and the concurrency mechanisms that work on a single-instance database work just as reliably on RAC.
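To get a feel for how much global (as opposed to local) coordination is going on, here is a small query sketch over the "gc" statistics that GCS maintains (assumes SYSDBA on any instance; the statistic names are from 10g/11g and may vary slightly by release):

    SELECT inst_id, name, value
      FROM gv$sysstat
     WHERE name IN ('gc cr blocks received', 'gc current blocks received')
     ORDER BY inst_id, name;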

The background processes that support GCS and GES communicate between instances over the private interconnect, using a heartbeat. This network is also used by Oracle's cluster components and may be used by a cluster file system such as OCFS. GCS and GES run independently of the Oracle cluster components; however, they depend on those components for the state of each instance in the cluster. If that information cannot be obtained for an instance, the instance is shut down. The purpose of this shutdown is to protect the integrity of the database: each instance needs to know about the others in order to coordinate access to the database correctly. GES controls all library cache locks and dictionary cache locks in the database. These are local resources in a single-instance database, but they become global resources in a RAC cluster. Global locks are also used to protect data structures and manage transactions; in general, transactions and table locks behave the same way in a RAC environment as in a single-instance environment.

All layers of Oracle use the same GES functions to acquire, convert, and release resources. The number of global enqueues is calculated automatically when the database starts. GES performs most of its work through the LMD0 and LCK0 background processes. In general, the various foreground processes talk to the local LMD0 process to manage global resources, and the local LMD0 process communicates with the LMD0 processes on the other instances.

The LCK0 background process obtains locks needed by the instance as a whole; for example, LCK0 is responsible for maintaining dictionary cache locks. Shadow (server) processes communicate with these background processes through AST (asynchronous trap) messages. Asynchronous messages keep the background processes from blocking while they wait for a reply from a remote instance. Background processes can also send BASTs (blocking asynchronous traps) to lock-holding processes, asking them to downgrade a lock they currently hold to a lower mode. A resource is a memory structure representing a component of the database to which access must be restricted or serialized; in other words, the resource can only be accessed by one process or one instance at a time. If the resource is currently in use, other processes that want it must wait in a queue until it becomes available. An enqueue is a memory structure that serializes access to a particular resource. If a resource is only needed by the local instance, the enqueue can be obtained locally without any coordination; but if the resource is requested by a remote instance, the local enqueue must become global.

Clusterware Architecture

In a stand-alone environment, Oracle runs on top of the OS kernel. The OS kernel manages the hardware devices and provides the hardware access interfaces; Oracle does not manipulate the hardware directly but has the OS kernel complete its hardware requests on its behalf. In a clustered environment the storage is shared, but an OS kernel is designed for a single machine and can only arbitrate access among processes on that machine; relying on the OS kernel alone cannot guarantee coordination across multiple hosts. An additional control mechanism is therefore needed in RAC: the clusterware that sits between Oracle and the OS kernel, intercepts requests before they reach the OS kernel, negotiates with the clusterware on the other nodes, and only then completes the upper layer's request. Before Oracle 10g, the cluster components required by RAC had to come from hardware vendors such as Sun, HP, or Veritas. Starting with Oracle 10.1, Oracle shipped its own cluster product, Cluster Ready Services (CRS), and from then on RAC no longer depends on any vendor's cluster software. In Oracle 10.2 the product was renamed Oracle Clusterware. So in a complete RAC installation there are really two cluster layers: one cluster formed by the Clusterware software, and another formed by the database instances.

(i) Main processes of Clusterware

a) CRSD: responsible for the high-availability operations of the cluster. The CRS resources it manages include databases, instances, listeners, virtual IPs, ONS, GSD, and so on; the operations include startup, shutdown, monitoring, and failover. The process is started and managed by the root user. If CRSD fails, it is restarted automatically.

b) CSSD: manages cluster node membership and inter-node communication; nodes notify the cluster when they join or leave. The process is managed by the oracle user. If CSSD fails, the node is restarted automatically.

c) OPROCD: the process monitor for the cluster, used to provide I/O fencing and protect the shared data.

d) OPROCD runs only when no vendor clusterware is in use.

e) EVMD: the event management daemon, run and managed by the oracle user.

Cluster Ready Services (CRS) is the basic program for managing high-availability operations within the cluster. Anything CRS manages is called a resource: a database, an instance, a listener, a virtual IP (VIP) address, an application process, and so on. CRS manages these resources according to the resource configuration information stored in the OCR, covering startup, shutdown, monitoring, and failover (start, stop, monitor, failover). When the state of a resource changes, the CRS process generates an event. When RAC is installed, the CRS process monitors the Oracle instances, listeners, and so on, and restarts these components automatically when they fail. By default CRS attempts to restart a resource 5 times; if the resource still fails to start, it stops trying. Event Management (EVM): the background process that publishes events created by CRS. Oracle Notification Service (ONS): the publish-and-subscribe service for communicating Fast Application Notification (FAN) events. RACG: extends Clusterware to support Oracle-specific requirements and complex resources; it runs server-side callout scripts when FAN events occur. Process Monitor Daemon (OPROCD): this process is locked in memory to monitor the cluster and provide I/O fencing. OPROCD performs its checks and then sleeps; if it wakes up later than expected, it resets the processor and reboots the node. A failure of OPROCD causes Clusterware to restart the node.

Cluster Synchronization Services (CSS): the cluster synchronization service manages the cluster configuration: which nodes are members, which have joined, which have left, and it notifies the members; it is the basis of inter-process communication in the clustered environment. CSS is also used in single-instance environments to handle the interaction between ASM instances and regular RDBMS instances. In a clustered environment CSS additionally provides group services, that is, dynamic information about which nodes and instances make up the cluster at any given time, plus static information such as the node names and node numbers (modified when a node is added or removed). CSS maintains basic locking functionality within the cluster (although most locking is handled by the integrated distributed lock manager inside the RDBMS). Among its other jobs, CSS maintains the heartbeat between the cluster nodes and monitors the voting disk to detect split-brain failures. In the final phase of the Clusterware installation, the root.sh script must be run on every node; it appends entries for these three daemons to the end of /etc/inittab, so that Clusterware starts automatically every time the system boots. If the EVMD or CRSD process terminates abnormally, the system restarts that process automatically; if the CSSD process fails, the node restarts immediately.
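A minimal sanity-check sketch of the Clusterware daemons and resources (the commands are from 10g/11gR1 Clusterware; on 11.2+ "crs_stat -t" is replaced by "crsctl stat res -t", and paths and output vary by release):

    grep -E 'evmd|cssd|crsd' /etc/inittab   # the respawn entries added by root.sh
    crsctl check crs                        # overall health of CRS, CSS and EVM
    crs_stat -t                             # tabular status of all CRS-managed resources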

Attention:

1. The voting disk and the OCR must reside on storage that every node can access.

2. The voting disk, the OCR, and the network must be specified or configured before or during the installation. After the installation is complete, they can be reconfigured or modified with the appropriate tools.

RAC Software Architecture

The RAC software structure can be divided into four parts.

    1. Operating system-related software
    2. RAC shared disk component
    3. RAC-specific background and instance processes
    4. Global Cache Service and Global Enqueue Service

(i) Operating System-Dependent (OSD) layer

RAC accesses the operating system and some cluster-related service processes through operating-system-dependent software. The OSD software may be provided by Oracle (on Windows) or by the hardware vendor (on Unix). The OSD consists of three sub-components:

    • The Cluster Manager (CM): monitors inter-node communication and coordinates node operations through the interconnect. It provides a unified view of all nodes and instances in the cluster and controls cluster membership.
    • The Node Monitor: reports the status of the various resources within a node, including the node itself, the interconnect hardware and software, and the shared disks.
    • The interconnect: carries the inter-node heartbeat (there are two heartbeat mechanisms, one the network heartbeat over the private network, the other the disk heartbeat through the voting disk).

(ii) Real Application Clusters Shared Disk Component

This part of RAC is no different from the components of a single-instance Oracle database: one or more control files, a series of online redo log files, optional archived log files, data files, and so on. Using a server parameter file (SPFILE) in RAC simplifies parameter management, because global parameters and instance-specific parameters can be stored in the same file.

(iii) Real Application Clusters-specific daemons and instance processes include the following (a query sketch for listing them on a running system appears after the list):

    1. Global Services Daemon (GSD): a background process running on each node that receives management messages from clients such as DBCA and EM and carries out administrative tasks, such as starting and stopping instances.
    2. RAC-specific instance processes: Global Cache Service Processes (LMSn): control the flow of messages to remote instances and manage access to global data blocks. They are also used to transfer block images between the buffer caches of different instances.
    3. Global Enqueue Service Monitor (LMON): monitors global enqueues and resource interaction across the cluster, and performs global enqueue recovery operations.
    4. Global Enqueue Service Daemon (LMD): manages global enqueue and global resource access. For each instance, LMD handles resource requests arriving from remote instances.
    5. Lock Processes (LCK): manage requests for non-Cache Fusion resources, such as data files, control files, data dictionary views, and library cache and row cache requests.
    6. Diagnosability Daemon (DIAG): captures diagnostic data when a process in the instance fails.
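As a sketch (assumes SYSDBA on a running RAC instance; the PADDR filter excludes processes that are defined but not started), the RAC-specific background processes above can be listed like this:

    SELECT inst_id, name, description
      FROM gv$bgprocess
     WHERE paddr <> '00'
       AND (name LIKE 'LMS%' OR name IN ('LMON', 'LMD0', 'LCK0', 'DIAG'))
     ORDER BY inst_id, name;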

(iv) The Global Cache and Global Enqueue Services

The Global Cache Service (GCS) and the Global Enqueue Service (GES) are integrated RAC components that coordinate simultaneous access to the shared database and to shared resources within it.

GCS and GES include the following features:

    1. Application transparency.
    2. Distributed architecture
    3. Global Resource Directory for the distributed architecture: GCS and GES guarantee the integrity of the Global Resource Directory as long as at least one node survives, even if one or more other nodes fail.
    4. Resource control: GCS and GES select one instance to manage all the information for a given resource; that instance is called the resource master. GCS and GES periodically re-evaluate and may change the resource master based on the way the data is accessed, which reduces network traffic and resource acquisition time.
    5. Interaction between GCS/GES and the CM: GCS and GES are independent of the Cluster Manager, but both depend on it for the state of the instances on each node. If the information for an instance cannot be obtained, Oracle shuts the unresponsive instance down immediately to protect the integrity of the whole RAC.

Cluster Registry (OCR)

The "amnesia" problem arises when every node keeps its own copy of the configuration information and a change made on one node is not synchronized to the others. Oracle's solution is to put this configuration file on shared storage: the OCR disk. The configuration of the entire cluster is kept in the OCR as key-value pairs. Before Oracle 10g this file was called the Server Manageability Repository (SRVM); in Oracle 10g this part was redesigned and renamed OCR. During the Oracle Clusterware installation the installer prompts the user for the OCR location, and the chosen location is recorded in /etc/oracle/ocr.loc (on Linux) or /var/opt/oracle/ocr.loc (on Solaris). In Oracle 9i RAC the equivalent file was srvconfig.loc. At startup, Oracle Clusterware reads the OCR contents from the location recorded there.

(i) OCR key

The OCR information forms a tree structure with three major branches: SYSTEM, DATABASE, and CRS. Each branch has many sub-branches. These records can only be modified by the root user.
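Two Clusterware command-line tools (mentioned again in the Trace Log section below) can be used to inspect this tree; a minimal sketch, run as root on any cluster node:

    ocrdump      # dumps the SYSTEM, DATABASE and CRS key branches to a text file (OCRDUMPFILE by default)
    ocrcheck     # verifies OCR integrity and reports its location, version and free space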

(ii) OCR process

Oracle Clusterware stores the cluster configuration in the OCR, so the OCR contents are critical, and every operation on the OCR must preserve its integrity. For this reason, not every node is allowed to write to the OCR disk while Clusterware is running. Each node keeps a copy of the OCR contents in memory, called the OCR cache, and each node runs an OCR process that reads and writes this cache; but only the OCR process on one node, called the OCR master node, is allowed to read and write the OCR disk. That node's OCR process is responsible for refreshing the OCR cache on the local node and on all the other nodes. Other processes that need OCR contents, such as OCSSD and EVMD, are client processes: they do not access the OCR cache directly but send requests to their local OCR process and obtain the contents from it. If a client wants to modify the OCR, its node's OCR process submits the request to the OCR process on the master node, which performs the physical write and then synchronizes the OCR cache on all nodes.

Oracle arbitration disk (voting disk)

The voting disk file mainly records node membership status. In the event of a split-brain, it decides which partition gains control; the other partitions must be evicted from the cluster. You are also prompted to specify its location when installing Clusterware, and after the installation is complete you can view the voting disk location with the following command: crsctl query css votedisk

Network connectivity for the cluster

(i) The private network

Each cluster node is connected to all the other nodes through a dedicated high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI). Oracle's Cache Fusion technology uses this network to effectively combine the physical memory (RAM) of the hosts into a single cache: by shipping data held in one instance's cache across the private network, Cache Fusion lets any other instance access that data. It also maintains data integrity and cache coherency by transmitting locks and other synchronization information between the cluster nodes. The private network is typically built with Gigabit Ethernet, but for high-throughput environments many vendors offer proprietary low-latency, high-bandwidth solutions designed for Oracle RAC. Linux can also bond multiple physical NICs into a single virtual NIC (not covered here) to increase bandwidth and availability.
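To see which interfaces Clusterware has registered as public and as cluster interconnect, the oifcfg tool (listed again in the Trace Log section) can be used; a hedged sketch, where the interface names and subnets are only examples:

    oifcfg getif
    # eth0  192.168.1.0  global  public
    # eth1  10.0.0.0     global  cluster_interconnect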

(ii) The public network

To maintain high availability, a virtual IP address (VIP) is assigned to each cluster node. If a host fails, the IP address of the failed node can be reassigned to an available node, allowing the application to continue to access the database through the same IP address.

(iii) Virtual IP (VIP)

Oracle recommends that clients connect through a specified virtual IP; the VIP is a feature introduced with Oracle 10g. Its essential goal is to let applications fail over without pausing (there are still small gaps, but it comes very close). The user connects to the virtual IP. This IP is not statically bound to a network card but is managed by an Oracle process; if the instance a user reaches through the VIP fails, Oracle automatically relocates the VIP to a healthy node, so the user's access to the database is not interrupted and the application does not need to be changed. Oracle's TAF is built on top of the VIP technology. The difference between a physical IP and a VIP: a physical IP relies on TCP-layer timeouts, whereas a VIP produces an immediate application-layer response. The VIP is a floating IP: when a node has a problem, it automatically moves to another node.

Transparent Application Failover (TAF)

Transparent Application Failover (TAF) is an Oracle feature commonly used in RAC environments, and it can of course also be used with Data Guard primary/standby configurations and traditional HA setups. The words Transparent and Failover in the name point to the two main characteristics of this high-availability feature:

    1. TAF is used for failover, that is, switching. When a session connected to Oracle becomes unavailable because of a database failure, the session can automatically switch to another available node in the RAC, to a standby database, or to another available node in an HA configuration.
    2. TAF failover is transparent to the application: the application does not need any special handling to fail over automatically.

But is TAF perfect? With TAF, can an application really switch over seamlessly? Does it place any other requirements on the application or the database? To answer these questions we need a thorough understanding of TAF. I have always believed that to use something well you first have to understand the principles and mechanisms behind it. Start with failover. There are two kinds: connect-time failover and runtime failover. The former takes effect while the application (client) is connecting to the database: if the connection fails because of a network problem, an instance failure, or a similar reason, it is redirected to another instance of the database. The latter applies to a session that is already established: if the instance serving the session aborts abnormally, the application (client) is reconnected to another instance of the same database (or to the standby database).
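A typical client-side TAF configuration is expressed in tnsnames.ora. A hedged sketch, in which the alias RACDB, the host names rac1-vip/rac2-vip, and the service name racdb are placeholders for your own environment:

    RACDB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
          (LOAD_BALANCE = yes)
          (FAILOVER = on)
        )
        (CONNECT_DATA =
          (SERVICE_NAME = racdb)
          (FAILOVER_MODE =
            (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5)
          )
        )
      )

With TYPE=SELECT, an in-flight query can resume on the surviving instance after failover; METHOD=BASIC establishes the backup connection only when failover actually happens.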

Connection Load Balancing

Load balancing here refers to balancing connections. RAC load balancing mainly means that when a new session connects to the RAC database, the cluster decides, based on the load of the server nodes, which node the connection should be directed to. Oracle RAC can provide dynamic data services, and load balancing comes in two forms: client-side (connection-based) load balancing and server-side load balancing.
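Client-side load balancing is enabled with LOAD_BALANCE=yes in the address list, as in the tnsnames.ora sketch above. Server-side load balancing requires each instance to register with the listeners of all nodes; a hedged sketch of that setting, where the alias LISTENERS_RACDB is a placeholder that must be defined in the server's tnsnames.ora:

    ALTER SYSTEM SET remote_listener = 'LISTENERS_RACDB' SCOPE=BOTH SID='*';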

The principle and characteristics of VIP

Oracle's TAF is built on top of the VIP technology. The difference between a physical IP and a VIP is that the physical IP relies on TCP-layer timeouts, while the VIP uses an immediate application-layer response. The VIP is a floating IP: when a node has a problem, it automatically moves to another node. Suppose we have a two-node RAC; during normal operation each node has a VIP, that is, VIP1 and VIP2. When node 2 fails, for example through an abnormal shutdown, RAC does the following:

(a) After CRS detects the failure of node RAC2, it triggers a Clusterware reconfiguration; RAC2 is finally evicted from the cluster, and node 1 forms a new cluster on its own.

(b) RAC's failover mechanism transfers node 2's VIP to node 1, so the public network card of node 1 now carries three IP addresses: VIP1, VIP2, and public IP1.

(c) A user connection request aimed at VIP2 is routed to node 1 by the IP layer.

(d) Because the VIP2 address now exists on node 1, the packets pass smoothly through the routing, network, and transport layers.

(e) However, node 1 is only listening on the two addresses VIP1 and public IP1, not on VIP2, so no application-layer program receives these packets and the error is detected immediately.

(f) The client receives this error immediately and re-issues its connection request against VIP1. VIP features (a quick way to check them follows this list):

    • The VIP is created by the VIPCA script.
    • The VIP is registered in the OCR as a CRS resource of type Nodeapps and is maintained by CRS.
    • The VIP is bound to the node's public network card, so the public network card carries two addresses.
    • When a node fails, CRS transfers that node's VIP to another node.
    • Each node's listener listens on both the public IP and the VIP on the public network card.
    • The client's tnsnames.ora is normally configured to point to the node VIPs.
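A hedged sketch for checking these properties on a live system (node and interface names are placeholders; crs_stat applies to pre-11.2 Clusterware):

    srvctl status nodeapps -n rac1        # VIP, GSD, listener and ONS status for node rac1
    crs_stat -t | grep -i vip             # VIPs appear as ora.<node>.vip resources
    /sbin/ifconfig -a                     # on Linux the VIP shows up as an alias such as eth0:1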
Log system

Redo Thread

A RAC environment has multiple instances, and each instance needs its own set of redo log files to record its changes. Such a set of redo logs is called a redo thread. In fact a single instance also has a redo thread; the term is just rarely used there. Giving each instance its own redo thread avoids the performance bottleneck that contention for one set of logs would cause. There are two kinds of redo thread: private, created with ALTER DATABASE ADD LOGFILE ... THREAD n, and public, created with ALTER DATABASE ADD LOGFILE ... . In RAC each instance sets the THREAD initialization parameter, whose default is 0. If the parameter is set to a non-zero value, the instance uses the private redo thread with that number when it starts; if the default 0 is kept, the instance selects an available public redo thread at startup and uses it exclusively. Each instance in the RAC needs its own redo thread, each redo thread needs at least two redo log groups, the members of a log group should be of equal size, and each group should preferably have two or more members placed on different disks to avoid a single point of failure.

Note: in a RAC environment, redo log groups are numbered at the level of the whole database; if instance 1 uses groups 1 and 2, the groups of instance 2 should start at 3, and group numbers must not conflict. In a RAC environment the online redo logs of all instances must be placed on shared storage, because if one node shuts down unexpectedly, a surviving node performs crash recovery for it, and the node performing crash recovery must be able to access the failed node's online redo logs. Only shared storage satisfies this requirement.
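A minimal SQL sketch for adding a redo thread for a second instance, consistent with the numbering rule above (the group numbers, the 50M size, and the +DATA disk group are placeholders):

    ALTER DATABASE ADD LOGFILE THREAD 2
      GROUP 3 ('+DATA') SIZE 50M,
      GROUP 4 ('+DATA') SIZE 50M;
    ALTER DATABASE ENABLE PUBLIC THREAD 2;   -- or ENABLE THREAD 2 for a private thread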

Archive Log

Each instance in the RAC produces its own archived logs, which are only needed when media recovery is performed, so the archived logs do not have to be placed on shared storage and each instance may store them locally. However, if you back up the archived logs from a single node, or perform media recovery there, that node must be able to access the archived logs of all instances. There are several ways to configure archiving in a RAC environment.

    1. Using NFS

Archive directly to shared storage through NFS. For example, with two nodes, create two directories, arch1 and arch2, corresponding to the archived logs generated by instance 1 and instance 2. Each instance configures a single archive destination: it archives to its local directory, and the other node's directory is mounted locally through NFS.

    2. Cross-instance archiving (Cross Instance Archive, CIA)

Cross-instance archiving is a variant of the previous approach and a more common configuration. Again the two nodes each create the two directories arch1 and arch2, corresponding to the archived logs of instance 1 and instance 2. Each instance configures two archive destinations: destination 1 is the local archive directory, and destination 2 points to the other instance, so that each node ends up holding the archived logs of both instances (a configuration sketch follows the list of options below).

    3. Using ASM

Archive to shared storage through ASM. Oracle's ASM hides the complexity described above, but the principle is the same.
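A hedged sketch of the cross-instance archiving variant described above, as configured on node 1 (the directory path and the Net service name rac2, which would point at the other instance, are placeholders; the mirror-image settings would be made on node 2):

    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch1' SID='rac1';
    ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=rac2'        SID='rac1';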

Trace Log

Diagnosing Oracle Clusterware relies almost entirely on its logs and trace files, and its log system is fairly complex. alert.log: $ORA_CRS_HOME/log/<hostname>/alert.log is the first file to look at.

Clusterware Background Process Log

    • crsd.log: $ORA_CRS_HOME/log/<hostname>/crsd/crsd.log
    • ocssd.log: $ORA_CRS_HOME/log/<hostname>/cssd/ocssd.log
    • evmd.log: $ORA_CRS_HOME/log/<hostname>/evmd/evmd.log

Nodeapp Log Location

$ORA_CRS_HOME/log/<hostname>/racg/ holds the logs of the node applications, including ONS and VIP, for example ora.rac1.ons.log. Tool execution logs go to $ORA_CRS_HOME/log/<hostname>/client/.

Clusterware provides a number of command-line tools such as ocrcheck, ocrconfig, ocrdump, oifcfg, and clscfg, and the logs generated by these tools are placed in that directory; $ORACLE_HOME/log/<hostname>/client/ and $ORACLE_HOME/log/<hostname>/racg also contain related logs.

Reference documents
    1. Oracle's three highly available cluster scenarios
    2. Introduction to cluster Concept: The Oracle Advanced Course--theoretical textbook
    3. Oracle one-off RAC Survival Guide
    4. Oracle 11gR2 RAC Management and performance optimization
Article Navigation
    1. Introduction to cluster concept (i)
    2. Oracle Cluster Concepts and principles (ii)
    3. How RAC Works and related components (iii)
    4. Cache Fusion Technology (IV)
    5. RAC special problems and combat experience (V)
    6. ORACLE 11G Release 2 RAC ready for use with NFS on Linux (vi)
    7. ORACLE ENTERPRISE LINUX 5.7 under Database 11G RAC cluster installation (vii)
    8. ORACLE ENTERPRISE LINUX 5.7 Database 11G RAC database installation (viii)
    9. Basic test and use of database 11G RAC under ORACLE ENTERPRISE LINUX 5.7 (ix)

Note: This article is original/compiled by the author; please credit the original source when reproducing it.

"Oracle Cluster" Oracle DATABASE 11G RAC detailed tutorial how RAC works and related components (iii)

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.