Grid Infrastructure Shared Components
Grid Infrastructure uses two types of shared devices to manage cluster resources and nodes: the Oracle Cluster Registry (OCR) and the voting disk. Oracle 11.2 introduces an additional file, the Oracle Local Registry (OLR), which is stored locally on each node.

OCR and OLR
The OCR is shared by all nodes and contains all the information about cluster resources and permissions that Grid Infrastructure requires. To make this sharing possible, the OCR must be stored on raw devices, shared block devices, a cluster file system such as OCFS2, or ASM. In Grid Infrastructure, only upgraded systems support an OCR that is not managed by ASM; a new installation must use a cluster file system or ASM. In RAC 10g and 11.1 the OCR can have one mirror copy; in 11.2 this is raised to up to five copies.
Grid Infrastructure automatically backs up the OCR every four hours and retains a number of these backups for restore operations. RAC 11.1 introduced an option to back up the cluster registry manually. When the diagnostic utility is run as the root user, an additional integrity check is executed. In Clusterware 11.1 the Oracle Universal Installer simplified the deployment of the cluster registry on shared block devices; before that, you had to move the OCR to block devices manually. When you use raw devices with RAC 11.1 on Red Hat 4 or SLES 10, you must configure them manually with udev. The configuration process is described on My Oracle Support; the steps differ depending on whether the shared storage is connected over a single path or multiple paths.
In rare cases the OCR can become corrupted and must be restored. Depending on the severity of the corruption, restoring from a mirror copy or from a backup may be sufficient. Only tools provided by Oracle may be used to manage and maintain the OCR; if you dump and edit the OCR contents directly, Oracle will not support the resulting configuration.
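As a sketch of the tooling involved (the utilities are the standard clusterware commands; the backup path shown is illustrative), checking and restoring the OCR might look like this, run as root on one node:

```shell
# List the automatic backups that clusterware has retained
ocrconfig -showbackup

# 11.1 and later: take a manual backup before risky maintenance
ocrconfig -manualbackup

# Run the integrity check; running it as root performs additional checks
ocrcheck

# Restore from a retained backup (clusterware must be stopped on all nodes);
# the backup file path below is illustrative
ocrconfig -restore /u01/app/11.2.0/grid/cdata/cluster1/backup00.ocr
```

These commands require a running Grid Infrastructure installation; consult the clusterware documentation for your exact release before restoring.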
Oracle 11.2 introduces another cluster configuration file, the OLR. Each node keeps its own copy in the Grid Infrastructure installation directory. The OLR stores the security context that the Oracle High Availability Services stack (OHAS) needs in the early stages of cluster startup. The OLR and the Grid Plug and Play (GPnP) configuration file are used to locate the voting disks. If they are stored in ASM, the discovery string in the GPnP profile is used by the cluster synchronization services process to find them. Once the clusterware has started, the cssd process starts the ASM instance to access the OCR file; the OCR's path, however, is stored in /etc/oracle/ocr.loc, just as in RAC 11.1. Of course, if the voting files and the OCR are stored on a shared cluster file system, the ASM instance is not started unless other resources need ASM.
Voting disks

If a node fails to respond to the heartbeat requests of the other nodes within a specified time (the misscount threshold), it is evicted from the cluster. Like the OCR, the voting disk and its mirrors must be stored on shared storage (3 voting disks are supported in 11.1, up to 15 in 11.2). As with the OCR, Grid Infrastructure supports raw devices only on upgraded systems; new installations support only a cluster file system or ASM, and block and raw devices will no longer be supported in Oracle 12. Oracle strongly recommends using at least three voting disks in different locations. When ASM manages the voting disks, you need to pay attention to the redundancy level of the disk group and its failure groups. Note that all copies of the voting disk reside in a single disk group; you cannot spread voting disks across multiple disk groups. With an external-redundancy disk group you can have only one voting disk. A normal-redundancy disk group requires at least three failure groups to store three voting disks. High redundancy is more flexible and supports up to five voting disks.
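The voting disk configuration can be inspected and, in 11.2, moved into ASM with the clusterware control utility. A sketch (the disk group name +DATA is an assumption):

```shell
# Show the current voting disks and their locations
crsctl query css votedisk

# 11.2: move the voting disks into an ASM disk group (run as root;
# +DATA is an illustrative disk group name)
crsctl replace votedisk +DATA
```

Remember that the number of usable voting copies then depends on the redundancy level and failure groups of that disk group, as described above.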
Using ASM
ASM was introduced in Oracle 10.1. It is a cluster-capable logical volume manager for the Oracle physical database structures. Files that can be stored in ASM include control files, database files, and online redo logs (as well as the spfile and archived logs). Until 11g Release 2, general-purpose operating system files could not be stored in ASM.
The file types supported by ASM differ from version to version. The following lists for 10.2 and 11.2 are provided for comparison:
10.2:
11.2:
ASM is built on three concepts: ASM disks, failure groups, and ASM disk groups.
Several ASM disks form an ASM disk group. The analogy with LVM is that an ASM disk corresponds to a physical volume. Unlike in LVM, ASM disks that share a common point of failure (such as a disk controller) can be combined into a failure group. An ASM disk group can store the physical database structures: data files, control files, redo logs, and other file types. In contrast to a logical volume manager (LVM) on Linux, no logical volumes are created inside the disk group; instead, all files of a database are logically grouped within the disk group. ASM does not need a file system underneath, which is one reason it has an advantage over a traditional LVM.
Grid Infrastructure introduces the ASM Cluster File System (ACFS), which removes the restriction on storing general-purpose files. ASM uses the stripe-and-mirror-everything approach to provide optimal performance.
The use of ASM and ACFS is not limited to clusters; single-instance Oracle installations benefit from it just as well. Technically, Oracle ASM runs as a special kind of Oracle instance, with its own SGA but without a persistent data dictionary. In RAC, each cluster node runs exactly one ASM instance. At startup, each instance discovers the ASM disk group resources in Grid Infrastructure through the initialization parameters stored in the clusterware, and mounts those disk groups. With the correct privileges granted (access control lists, ACLs, were introduced in ASM 11.2), databases can access their own data files. Using ASM implies using Oracle Managed Files (OMF), which means a different way of managing database files. Initialization parameters in the RDBMS instance such as db_create_file_dest, db_create_online_log_dest_n, and db_recovery_file_dest specify the disk group in which the related files are stored. When a new file is created, OMF names it in the following format: +diskGroupName/dbUniqueName/fileType/fileTypeTag.fileNumber.incarnation
Example: +DATA/oradb/datafile/users.293.699134381
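Setting the OMF-related parameters so that new files land in ASM might look like the following sketch (the disk group names +DATA and +FRA and the sizes are assumptions):

```shell
sqlplus / as sysdba <<'EOF'
-- New data files and online redo logs go to these disk groups (illustrative)
ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_1 = '+DATA' SCOPE=BOTH;
-- The size limit must be set before the recovery area destination
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
EOF
```

With these parameters in place, CREATE TABLESPACE statements no longer need explicit file names; ASM and OMF generate them in the format shown above.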
ASM allows many operations to be performed online. In ASM 11.1 and later you can perform rolling upgrades to minimize the impact on the database.
ASM operates on raw partitions; to reduce overhead on production systems, layering it on LVM2 logical volumes should be avoided. ASM is also supported on NFS: instead of directly using the directory exported by the filer, you use zero-filled files created with the dd utility as ASM volumes. When using NFS, you should ask the vendor for their best-practices documentation.
Environments with special requirements, such as very large databases of more than 10 TB, can benefit from the configurable extent (allocation unit) size at the disk group level. A common storage optimization technique is to use only the outer edge of the platters, which delivers higher performance than the inner regions. ASM's intelligent data placement lets administrators define hot regions with higher speed and bandwidth; frequently accessed files can be placed there to improve performance. Hard disk manufacturers are about to introduce drives with 4 KB sectors, offering higher storage density and capacity. ASM is prepared for this: it provides a disk group attribute called sector size, which can be set to 512 bytes or 4 KB.
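The attributes just mentioned are set per disk group. As a sketch (the disk group name, disk paths, and values are assumptions):

```shell
sqlplus / as sysasm <<'EOF'
-- Illustrative: a disk group with a 4 KB sector size and larger 4 MB
-- allocation units, as might suit a very large database
CREATE DISKGROUP bigdata EXTERNAL REDUNDANCY
  DISK '/dev/mapper/lun10', '/dev/mapper/lun11'
  ATTRIBUTE 'sector_size'    = '4096',
            'au_size'        = '4M',
            'compatible.asm' = '11.2';
EOF
```

Both attributes can only be chosen at disk group creation time, so they should be planned together with the storage team.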
In most installations, the typical workflow is as follows: the storage administrator presents the LUNs intended as ASM disks to all nodes in the cluster. The system administrator creates partitions on these new block devices, configures multipathing, and uses ASMLib or udev to mark the partitions as candidate disks. After handover to the database team, the Oracle administrator creates the ASM disks and ASM disk groups. All of these operations can be completed online, without restarting any server.
ASM disks

The ASM disk is the basic unit of ASM. When a candidate disk is added to a disk group, metadata is written to its header so that the ASM instance can recognize the disk and mount it as part of the disk group. Disk failures happen routinely in storage arrays; it is normal for heavily used disks to fail eventually. In most cases the array can reconstruct the data of a failed disk from a mirror or from parity information, depending on the protection level in use. Disk failures within ASM are less frequent, because in most deployments the ASM disks are LUNs already protected by the array. If, however, a disk in an ASM-protected disk group fails, the failed disk must be replaced quickly to prevent it from being dropped. ASM 11.1 introduces a new attribute, the disk repair time, which allows the administrator to fix a transient disk failure without triggering a full rebalance. When a disk is added to or removed from a disk group, a rebalance operation restripes the data across the members of the disk group; depending on the size of the ASM disks, this can take a long time. If the administrator manages to bring the failed ASM disk back into the disk group before the rebalance starts, the disk group recovers quickly, because only the regions whose data changed (dirty regions) need to be resynchronized. Depending on the storage backend, the LUNs may be protected by the array's RAID level, or they may be unprotected storage (JBOD, just a bunch of disks).

ASMLib and udev

Both ASMLib and udev address the problem of unstable device names. In Linux, the order in which devices are detected and enumerated is not fixed. This differs from Solaris, for example, where a device name (say, c0t0d1p1) does not change unless a disk is physically moved in the array. Reconfiguring a storage array without multipathing can cause a serious problem in Linux: a device that appeared to the operating system as /dev/sda may be remapped to /dev/sdg after a reboot, simply because the operating system detected it slightly later than at the previous boot. Raw device mappings based on device names are therefore doomed to fail. Consider the udev solution first. The world-wide ID (WWID) of a SCSI device does not change, and a udev rule can create a mapping that says, for example, that /dev/raw/raw1 always points to the LUN with a given SCSI ID. The main problem with udev is that its configuration is neither intuitive nor easy to use, and because udev does not replicate its configuration, the administrator must maintain it on every node in the cluster. (You can use udevinfo -q path -n /dev/sda1 to view the udev device path, relative to /sys, corresponding to /dev/sda1.) This problem does not arise with multipathed storage, because another software layer, such as the device-mapper-multipath package or vendor-specific software, creates a logical device. ASMLib offers an alternative. The ASMLib tools can be downloaded free of charge from http://oss.oracle.com and make managing ASM disks very simple.
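A udev rule of the kind described above might look like the following sketch. The WWID, device name, and ownership values are placeholders, and the exact matching syntax varies between distributions and udev versions, so treat this as an illustration only:

```shell
# /etc/udev/rules.d/99-oracle-asm.rules (illustrative; WWID is a placeholder)
# Match the partition by its unchanging SCSI WWID and give it a stable name
KERNEL=="sd?1", SUBSYSTEM=="block", \
  PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", \
  RESULT=="360a98000375233754a2f712b41667173", \
  NAME="oracleasm/disk1", OWNER="oracle", GROUP="dba", MODE="0660"
```

Because udev does not replicate configuration, this file must be copied to and maintained on every node of the cluster.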
ASMLib consists of three RPM packages: a kernel module, the actual ASMLib, and the support tools. Before a LUN can be used as an ASM disk, you mark it with the ASMLib tools, which write metadata into the disk header; ASMLib can then identify the new LUN and offer it as a candidate disk to be added to an ASM disk group. After a reboot, ASMLib scans the disk headers to identify the ASM disks, regardless of what the physical device names have become during startup. It thus ensures device name stability at very low cost. ASMLib is implemented as a kernel module that allocates its own memory structures internally, and it can be configured for both single-path and multipath setups.
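The ASMLib workflow described above might be sketched as follows (the device path and volume name are illustrative; on some versions the tool lives in /usr/sbin/oracleasm instead):

```shell
# One-time configuration of the ASMLib driver (interactive; run as root)
/etc/init.d/oracleasm configure

# Mark a partition as an ASM disk by writing ASMLib metadata to its header
/etc/init.d/oracleasm createdisk VOL1 /dev/sdb1

# On the other cluster nodes, pick up disks marked elsewhere
/etc/init.d/oracleasm scandisks

# List the disks ASMLib has identified, independent of /dev/sd* naming
/etc/init.d/oracleasm listdisks
```

The marked disks then appear to the ASM instance as candidate disks, typically under /dev/oracleasm/disks/.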
ASM disk groups

An ASM disk group has one of three redundancy levels: external redundancy, normal redundancy, and high redundancy. When an external-redundancy disk group is created, ASM leaves data protection to the storage array and does no mirroring itself; it only stripes the data in 1 MB extents across the ASM disks of the disk group. A write error forces the affected ASM disk to be dropped, with severe consequences: since no copies of the extents on that disk exist, the entire disk group becomes unavailable. At the normal redundancy level, ASM stripes and mirrors: each extent is written to one disk, and a copy is written to a disk in another failure group. In ASM 11.2, striping and mirroring can be configured for individual files. By default, a two-way mirror is created, and normal redundancy tolerates the failure of one ASM disk in the disk group. High redundancy provides a higher level of protection: by default it stripes and keeps two additional copies of the primary extent, and it tolerates the failure of two ASM disks in the disk group.
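A normal-redundancy disk group with explicit failure groups, combined with the 11.1 disk repair time attribute mentioned earlier, might be created like this sketch (the disk group, failure group, and disk names are all assumptions):

```shell
sqlplus / as sysasm <<'EOF'
-- Mirror across two failure groups on separate controllers (illustrative)
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/mapper/lun1', '/dev/mapper/lun2'
  FAILGROUP controller2 DISK '/dev/mapper/lun3', '/dev/mapper/lun4'
  ATTRIBUTE 'compatible.asm'   = '11.1',
            'compatible.rdbms' = '11.1';

-- Tolerate a transient disk failure for up to 8 hours before the disk
-- is dropped and a full rebalance is triggered
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8h';
EOF
```

Note that disk_repair_time requires the disk group compatibility attributes to be at least 11.1.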
Failure groups

A failure group is a logical grouping of disks that become unavailable together when a shared component fails. For example, the disks attached to a single SCSI controller form a failure group: if that controller fails, all of its disks become unavailable. With normal and high redundancy, ASM uses failure groups to store the mirror copies of data. If no failure groups are configured explicitly, each ASM disk forms its own failure group. A normal-redundancy disk group must consist of at least two failure groups, and a high-redundancy disk group of at least three; using more failure groups than the minimum is recommended to provide additional data protection. By default, ASM reads from the primary extent of an ASM disk group. In an extended-distance cluster this can cause performance problems when the primary extent resides on the remote storage array. ASM 11.1 introduces the preferred mirror read to solve this: each ASM instance can read from the local copy of an extent, regardless of whether it is the primary or the mirrored extent.
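The preferred mirror read is enabled per ASM instance with an initialization parameter that names the local failure groups. A sketch (the disk group name DATA, failure group SITEA, and instance SID are assumptions):

```shell
sqlplus / as sysasm <<'EOF'
-- On the ASM instance at site A, prefer the local failure group for reads;
-- the value has the form 'diskgroup.failgroup'
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA'
  SCOPE=BOTH SID='+ASM1';
EOF
```

The corresponding instance at the other site would name its own local failure group, so each side reads from nearby storage.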
ASM installation and management options

Before Oracle 11.2, the best practice was to install ASM into its own Oracle home, separate from the clusterware and the database. This allowed the cluster software and ASM to be upgraded independently: for example, clusterware and ASM could be patched to 11.1.0.7 while the database stayed on its original release. With this best practice there are three separate Oracle installation directories: clusterware, ASM, and database. If desired, ASM 11.1 can be installed under a different operating system user than the RDBMS; Oracle notes that separating the database and storage administration roles is common practice at many sites. ASM can be managed with SQL*Plus, Enterprise Manager (dbconsole), or dbca. In Oracle 11g Release 2, ASM is part of Grid Infrastructure, in both single-instance and RAC environments. A new configuration assistant, asmca, takes over and extends the features that dbca provided in 11.1; ASM can no longer be started from the RDBMS Oracle home. asmca also adds support for the new ASM Cluster File System. Finally, a new superuser role named SYSASM makes role separation possible, much as SYSDBA did after Oracle 9i: you can grant the SYSASM privilege to users distinct from those holding SYSOPER and SYSDBA.