Grid Plug and Play (GPnP)
Grid Plug and Play (GPnP) helps administrators maintain clusters: several of the manual steps that used to be required when adding or deleting nodes are now performed automatically by GPnP.
GPnP is not a standalone feature. It builds on several others: cluster information stored in an XML configuration file (the GPnP profile), Cluster Time Synchronization Service (CTSS), Grid Naming Service (GNS), Single Client Access Name (SCAN), and server pools.
The GPnP profile defines metadata for the public and private network interfaces, the ASM spfile, and the CSS voting disks. The profile is an XML file protected by a wallet-based signature to prevent casual modification. If you need to modify it by hand, you must first remove the signature with $GRID_HOME/bin/gpnptool, make the change, and then sign it again with the same tool. When you use a cluster management tool such as oifcfg, the profile is updated automatically without administrator intervention.
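A rough sketch of that manual edit cycle follows. The paths, wallet location, and flags are illustrative assumptions and should be checked against gpnptool help and gpnptool <verb> -help on your system; only the verbs get, unsign, sign, put, and verify are taken from the tool itself.

    export GRID_HOME=/u01/app/11.2.0/grid

    # 1. Fetch the profile currently in effect
    $GRID_HOME/bin/gpnptool get -o=/tmp/profile.xml

    # 2. Strip the wallet signature before editing
    $GRID_HOME/bin/gpnptool unsign -p=/tmp/profile.xml -o=/tmp/profile_unsigned.xml

    # 3. Edit /tmp/profile_unsigned.xml by hand, then re-sign it with the peer wallet
    $GRID_HOME/bin/gpnptool sign -p=/tmp/profile_unsigned.xml \
        -w=file:$GRID_HOME/gpnp/$(hostname -s)/wallets/peer -o=/tmp/profile_signed.xml

    # 4. Push the signed profile back and verify the signature
    $GRID_HOME/bin/gpnptool put -p=/tmp/profile_signed.xml
    $GRID_HOME/bin/gpnptool verify -p=/tmp/profile_signed.xml \
        -w=file:$GRID_HOME/gpnp/$(hostname -s)/wallets/peer -wu=peer

    # A tool such as oifcfg edits the profile for you, for example:
    # oifcfg setif -global eth2/192.168.10.0:cluster_interconnect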
When NTP is not enabled, the ctssd background process synchronizes time between the cluster nodes, which removes the dependency on an NTP server. We still recommend using NTP whenever possible; without it, the clocks of all nodes may agree with one another yet all be wrong.
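A quick way to see which mode CTSS is running in (it stays in observer mode when NTP is configured and becomes active otherwise); both commands exist in 11.2, though the exact output wording varies by version:

    crsctl check ctss
    cluvfy comp clocksync -n all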
Before Oracle 11.2, you had to register each node's public IP and virtual IP with the DNS server so that clients could connect to the database and load balancing could work. Cluster maintenance such as adding or deleting nodes therefore also required DNS changes, and if such operations happen frequently this becomes a burden. The Grid Naming Service moves the mapping between IP addresses and names out of the DNS server and hands it to the cluster software. Strictly speaking, the clusterware runs a small name service of its own, listening on an additional virtual IP address, and relies on subdomain delegation. Simply put, you create a new subdomain of your domain (example.com), for example ebsprod, and configure your DNS server to delegate all lookups for that subdomain (ebsprod.example.com) to GNS. During the subsequent installation you are asked only for the public and private network information, not for individual virtual IP addresses and names. The addresses that GNS hands out come from a DHCP server on the public network. With GNS, the addresses used by the cluster fall into a few categories: a single static GNS virtual IP registered in DNS, plus DHCP-assigned node virtual IPs and SCAN virtual IPs, each with a default name and purpose.
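A minimal sketch of the delegation and the matching cluster-side configuration, assuming the ebsprod.example.com subdomain from the example above; the IP address is made up and the DNS lines are BIND-style:

    # In the corporate DNS zone for example.com, delegate the subdomain to GNS:
    #   ebsprod.example.com.    IN NS  gns.example.com.
    #   gns.example.com.        IN A   192.0.2.155        ; static GNS virtual IP

    # Register GNS in the cluster (normally done by the installer) and check it:
    srvctl add gns -i 192.0.2.155 -d ebsprod.example.com
    srvctl start gns
    srvctl config gns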
We recommend defining the private interconnect IP addresses in the /etc/hosts file to prevent other people or applications from using them.
Note that GNS is optional during installation and is not yet fully mature; a number of GNS-related bugs have been reported.
Another feature mentioned above is SCAN. SCAN abstracts the number of nodes away from the client connection, so adding or removing nodes is completely transparent to clients, because the SCAN belongs to the cluster rather than to any particular database.
In addition, server pools were introduced to simplify adding and removing RAC database instances.
Server pools
Server pools provide a new way of organizing the resources in a cluster. They let you divide a cluster into multiple logical units, which is useful in a consolidated (shared) environment. Every node in an 11.2 cluster, whether explicitly or implicitly, belongs to a server pool. A fresh installation creates two pools by default: the Free pool and the Generic pool. The Generic pool exists for backward compatibility; it holds pre-11.2 databases and administrator-managed 11.2 databases. All nodes not assigned elsewhere end up in the Free pool.
Server pools are mutually exclusive and carry a number of attributes, such as a minimum and maximum number of nodes, an importance, and a name. The importance attribute is used to ensure that low-priority workloads cannot take resources away from high-priority workloads. Servers can be reassigned from one pool to another, which is interesting for capacity management: the clusterware can automatically satisfy a pool's minimum size by moving servers out of other server pools.
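A small sketch of creating and inspecting such a pool, assuming an illustrative pool name batch_pool with a minimum of 2 servers, a maximum of 4, and importance 10 (see srvctl add srvpool -h for the full syntax):

    srvctl add srvpool -g batch_pool -l 2 -u 4 -i 10
    srvctl config srvpool
    crsctl status serverpool -p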
Server pools also bring a new way of operating a busy RAC database. Before Oracle 11.2, the administrator was responsible for adding or removing instances of a RAC database, including creating and enabling the additional online redo log thread and undo tablespace. Server pools (together with OMF in ASM) automate these steps through policy-managed databases. Administrator-managed databases are, as the name says, managed entirely by the database administrator; in other words, they behave like RAC databases up to Oracle 11.1. Policy-managed databases add and remove instances and services automatically: the number of instances a policy-managed database starts is determined by the cardinality of its server pool. In other words, if you need another instance, you simply allocate another node to the database's server pool and Oracle does the rest.
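Continuing the illustrative batch_pool example, a sketch of registering a policy-managed database and then growing the pool, which causes Oracle to create the additional instance (the database name and home are made up):

    srvctl add database -d sales -o /u01/app/oracle/product/11.2.0/dbhome_1 -g batch_pool
    srvctl start database -d sales

    # Raising the pool's minimum size pulls in another server and starts another instance:
    srvctl modify srvpool -g batch_pool -l 3
    srvctl status database -d sales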
Together with server pools, Grid Infrastructure introduces another feature called role-separated management. In a consolidated environment, administrators can be restricted to managing their own server pools. Access control lists are used to grant access to resources, and a new role called cluster administrator is introduced. By default, the Grid Infrastructure software owner (grid) and the root user are permanent cluster administrators. You can add additional operating system users as cluster administrators, and each user can be given specific permissions on resources, resource types, and server pools. Separation of duties can therefore now be implemented at the cluster level. Note that the grid and root users remain highly privileged.
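A rough sketch of granting a hypothetical OS user hr_admin cluster administrator rights and access to a server pool; the pool name assumes the srvctl-created pool from the earlier example (srvctl-created pools show up with an ora. prefix), and the exact syntax should be checked with crsctl add crs administrator -h and crsctl setperm -h:

    crsctl query crs administrator
    crsctl add crs administrator -u hr_admin
    crsctl setperm serverpool ora.batch_pool -u user:hr_admin:r-x
    crsctl getperm serverpool ora.batch_pool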
ACFS
Oracle 11.2 extends ASM so that it no longer stores only database-related files but also provides a POSIX-compliant cluster file system, called ACFS. POSIX compliance means that all the operating system tools we use on ext3 and other file systems also work on ACFS. A space-efficient, read-only, copy-on-write snapshot mechanism is available, with up to 63 snapshots per file system. ACFS uses a 64-bit internal structure, is a journaling file system, and uses metadata checksums to protect its integrity. ACFS file systems can be resized online and benefit from all the services ASM provides, including I/O distribution across the disks.
ACFS solves a real problem in RAC environments. Database directory objects and external tables can point at an ACFS file system, and the external data then looks the same on every node of the cluster. Previously, users of external tables had to make sure they connected to the right instance to reach the underlying files of their external tables or directory objects. With ACFS the file system is visible on all nodes, so you no longer need to connect to a specific node. ACFS also addresses another issue: adding a third-party cluster file system (such as OCFS) to a RAC system can cause problems, because it introduces a second set of heartbeat processes. ACFS is not limited to RAC either; another scenario is a group of web servers protected by the Grid Infrastructure high-availability framework, where ACFS can hold the shared web root or application code.
ACFS can store database binaries, BFILEs, parameter files, trace files, logs, and any kind of user application data. Database files are not supported on ACFS, and Oracle prevents you from creating them in an ACFS mount point. On the other hand, Oracle explicitly supports installing the Oracle binaries on ACFS as a shared home; the mount can be registered with the clusterware so that the file system is guaranteed to be available before the database is started.
ACFS relies on the ASM Dynamic Volume Manager (ADVM), which provides volume management services and a standard device driver interface to its file system clients, such as ACFS and ext3. An ACFS file system is created on an ADVM volume, which is stored inside a disk group as a special ASM file type: ASM disks form a disk group, the disk group hosts dynamic volumes, and ACFS file systems sit on top of those volumes.
Management of ACFS and ADVM volumes is integrated into Enterprise Manager, ASMCA, and command-line tools (such as SQL*Plus).
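A minimal sketch of creating an ADVM volume and an ACFS file system on it, assuming an existing DATA disk group; the volume name, size, and mount point are made up, the asmcmd steps run as the grid owner, and the mkfs/mount steps run as root on Linux:

    asmcmd volcreate -G DATA -s 10G appvol
    asmcmd volinfo -G DATA appvol          # shows the device name, e.g. /dev/asm/appvol-123

    mkfs -t acfs /dev/asm/appvol-123
    mkdir -p /u01/app/acfs/appvol
    mount -t acfs /dev/asm/appvol-123 /u01/app/acfs/appvol

    # Register the mount so it comes up automatically, then take a snapshot:
    acfsutil registry -a /dev/asm/appvol-123 /u01/app/acfs/appvol
    acfsutil snap create before_patch /u01/app/acfs/appvol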
Oracle Restart
Oracle Restart is also known as single-instance high availability. Oracle Restart keeps track of resources such as (single-instance) databases, the ASM instance and its disk groups, listeners, and other node applications such as the Oracle Notification Service (ONS). It uses the Oracle High Availability Services daemon (ohasd) and its child processes to monitor the state of the registered resources and to restart failed components as needed. The metadata for Oracle Restart is stored in the Oracle Local Registry (OLR). Oracle Restart also runs a cssd process, so you no longer need to run localconfig add to create one.
The biggest benefit of Oracle Restart is its integration with ONS and with resource management: we can use the same commands we know from RAC. The database administrator no longer has to maintain start-up scripts; Oracle Restart opens the database when the server boots, and the dependencies defined in Oracle Restart ensure, for example, that the database is not started before ASM. In the past I often forgot to start the listener; that problem is gone now.
Oracle Restart reduces the recommended number of Oracle homes to one: a single Grid Infrastructure home containing the ASM and RDBMS binaries is enough. Unlike clustered Grid Infrastructure, Oracle Restart can share its ORACLE_BASE with the database software. The installation process is similar to the clustered case, except that no files are copied to remote nodes. Another benefit is that you do not need to define locations for the OCR and the voting disk. Running the root.sh script initializes the OLR and creates an ASM instance on the specified disk group.
After installing Oracle Restart, you can list the resources registered on the system and see that the ASM disk groups themselves are resources, which makes mounting and dismounting them more convenient. A disk group is registered automatically the first time it is mounted, so you no longer need to maintain the asm_diskgroups initialization parameter. Once Oracle Restart is configured, you add a database just as in RAC, with one important difference: you do not register any instances. A database can be configured with the disk groups it depends on; if a disk group is not mounted when the database starts, Oracle Restart mounts it first. Define services with srvctl rather than by setting the service_names initialization parameter. By default, Oracle Restart does not create an ONS resource; adding this background process is useful when you use the Data Guard broker, which can send FAN events to clients for failover. FAN events are also published for up and down events, but the lack of built-in high availability for a single instance limits how useful this is.
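A sketch of registering a single-instance database with Oracle Restart, including its disk group dependency, a service, and ONS; the database name, home, disk groups, and service name are all illustrative:

    srvctl add database -d orcl -o /u01/app/oracle/product/11.2.0/dbhome_1 -a "DATA,FRA"
    srvctl add service -d orcl -s rpt_svc
    srvctl start database -d orcl
    srvctl status database -d orcl

    # ONS is not created by default with Oracle Restart; add it if you want FAN
    # events (for example together with the Data Guard broker):
    srvctl add ons
    srvctl enable ons
    srvctl start ons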
SCAN listeners
The SCAN listener is a new feature that simplifies access to clustered databases. Before Oracle 11.2, the tnsnames.ora entry for a multi-node RAC database listed every node in its ADDRESS_LIST section (see the first entry in the sketch after this paragraph). Adding or deleting nodes meant changing that ADDRESS_LIST everywhere. In a well centralized environment this may not be a problem, but where clients are spread across many application servers the change takes a long time and is error-prone. SCAN addresses solve this: a few SCAN virtual IP addresses stand in for all of the node addresses, and the entry collapses to a single SCAN address (the second entry in the sketch; in this example the SCAN is scanqacluster.example1.com).
One of the prerequisites for installing or upgrading Grid Infrastructure is that at least one (preferably three) unused IP addresses in the same subnet as the public network are allocated and registered in DNS; alternatively, if you use GNS, the gns daemon obtains three addresses from the range served by the DHCP server. DNS resolves the SCAN name to these addresses in round-robin fashion, and you should make sure reverse lookup works as well. Together with the SCAN virtual IPs, OUI creates new entities called SCAN listeners, and the database instances register with them in addition to their local listeners. Each SCAN listener is paired with a SCAN VIP as a resource pair; if the node they run on fails, both fail over together to another node. If necessary, you can manage the SCAN listeners and VIPs with the server control utility (srvctl). The SCAN listeners perform load balancing at connect time, directing each new connection to the least loaded node.
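The sketch below reconstructs what such tnsnames.ora entries typically look like; the host names, port, and service name are made up, and the SCAN entry reuses the scanqacluster.example1.com name from the example above:

    # Pre-11.2 style: every node virtual IP listed explicitly
    QADB_OLD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = qanode1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = qanode2-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = qanode3-vip)(PORT = 1521))
        )
        (CONNECT_DATA = (SERVICE_NAME = qadb.example1.com))
      )

    # 11.2 style: a single SCAN address replaces the whole list
    QADB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = scanqacluster.example1.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = qadb.example1.com))
      )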
To illustrate how SCAN is used, assume a three-tier application and a manual (non-GNS) configuration. The application server, acting on behalf of the user, connects to the SCAN name scanname.example.com and asks the DNS server to resolve it; DNS returns one of the three SCAN IP addresses, which helps spread the load across the three SCAN listeners. The SCAN listener that receives the connection forwards the request to the local listener on the least loaded node, which then provides the service the client requested. From that point on, it is no different from the old model in which the client resolved a node's virtual IP address and connected to it directly.
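A quick way to check this behaviour from a client or cluster node, using the illustrative SCAN name from above:

    nslookup scanqacluster.example1.com   # should return the three SCAN IPs in rotating order
    srvctl config scan
    srvctl config scan_listener
    srvctl status scan_listener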
Using SCAN addresses is not mandatory: as long as you are not connecting to a policy-managed database, you can continue to use the old-style connection strings.
Reprinted: http://blog.sina.com.cn/s/blog_5fe8502601016atb.html