Some basic concepts of Oracle 11g RAC (iii)

Grid Infrastructure shared components

Grid Infrastructure uses two types of shared devices to manage cluster resources and nodes: the OCR (Oracle Cluster Registry) and voting disks. Oracle 11.2 introduces a new file called the Oracle Local Registry (OLR), which is only ever stored locally on each node.

OCR and OLR

The OCR is shared across all nodes and contains all the information about the cluster resources and the permissions Grid Infrastructure needs to operate on them. Because it has to be shared, the OCR must be stored on raw devices, shared block devices, a cluster file system such as OCFS2, or ASM. In Grid Infrastructure, non-ASM storage for the OCR is supported only on upgraded systems; a new installation must use a cluster file system or ASM. In RAC 10g and 11.1 the OCR could have one mirror; 11.2 raises this to five copies.

Grid Infrastructure automatically backs up the OCR every four hours and retains a number of backups for recovery. RAC 11.1 introduced an option to back up the cluster registry manually, and the diagnostic tools perform additional integrity checks when run as root. Clusterware 11.1 also simplified deploying the cluster registry on shared block devices through Oracle Universal Installer; previously, moving the OCR to a block device was a manual process. When you use raw devices with RAC 11.1 on Red Hat 4 or SLES 10, you have to configure the raw devices manually with udev. The procedure is explained in My Oracle Support notes, and single-path and multipath connections to shared storage are handled differently.
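
A minimal sketch of the backup-related commands, run as root on one cluster node (output and backup locations vary by installation):

    # list the automatic backups Clusterware has taken (4-hourly, daily, weekly)
    ocrconfig -showbackup
    # take an on-demand backup, available from 11.1 onwards
    ocrconfig -manualbackup
    # check the integrity of the OCR and report its location
    ocrcheck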

In rare cases the OCR can become corrupted and has to be restored from a backup. Depending on the severity of the damage, it may be enough to restore a single mirror, or a full restore from a backup may be required. The OCR can only be managed and maintained with the tools Oracle provides; Oracle will not support a configuration whose OCR contents have been dumped and edited directly.
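
A rough outline of a restore on 11.2, assuming an illustrative backup path (Clusterware has to be down while the OCR is restored; check the documentation for the exact steps for your release):

    # on every node: stop Clusterware
    crsctl stop crs
    # on one node: restore from a backup (the path shown is illustrative)
    ocrconfig -restore /u01/app/11.2.0/grid/cdata/cluster1/backup00.ocr
    # restart Clusterware and verify
    crsctl start crs
    ocrcheck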

Another cluster configuration file, the OLR, was introduced in Oracle 11.2. Each node keeps its own copy of this file in the Grid Infrastructure installation directory. The OLR stores important security context that the Oracle High Availability Services stack needs in the early stages of cluster startup. The OLR and the Grid Plug and Play (GPnP) profile are required for locating the voting disks; if they are stored in ASM, the discovery string in the GPnP profile is used by the cluster synchronization services to find them. Later in the Clusterware startup sequence an ASM instance is started so that the OCR file can be accessed; its path, as in RAC 11.1, is recorded in the ocr.loc file (/etc/oracle/ocr.loc on Linux). Of course, if the voting files and the OCR are stored on a shared cluster file system, no ASM instance needs to be started unless other resources require ASM.
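
To see where these registries live on a node, a small sketch (the ocr.loc location shown is the Linux one; other platforms differ):

    # report the location and integrity of this node's Oracle Local Registry
    ocrcheck -local
    # the pointer Clusterware follows to find the OCR itself
    cat /etc/oracle/ocr.loc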

Configure voting disks

If a node fails to respond to the heartbeats of the other nodes within a specified timeout, it is evicted from the cluster. Like the OCR, the voting disk and its mirrors must be stored on shared storage (11.1 supports up to 3 voting disks, 11.2 up to 15). As with the OCR, Grid Infrastructure supports raw devices only on upgraded systems; new installations must use a cluster file system or ASM. Block devices and raw devices will no longer be supported in Oracle 12. Oracle strongly recommends using at least three voting disks, each in a different location. When ASM manages the voting disks, you need to be aware of the redundancy level of the disk group and its failure groups. Note that all copies of the voting disk live in a single disk group; you cannot spread voting disks across multiple disk groups. With an external redundancy disk group you can have only one voting disk. A normal redundancy disk group needs at least three failure groups to hold three voting disks, and a high redundancy disk group is the most flexible, supporting up to five voting disks.
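
A quick sketch of the relevant 11.2 commands (the disk group name is illustrative):

    # list the current voting files and where they are stored
    crsctl query css votedisk
    # move all voting files into an ASM disk group
    crsctl replace votedisk +DATA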

Using ASM

ASM was introduced in Oracle 10.1. It is a cluster-capable logical volume manager for the physical structures of an Oracle database. Files that can be stored in ASM include control files, data files, and online redo logs (as well as the spfile and archived logs). Before 11g Release 2 it could not store general operating-system files.

The set of file types supported by ASM is not the same in every release; the lists for 10.2 and 11.2 differ.

ASM is built on the concepts of ASM disks, failure groups, and ASM disk groups.

Several ASM disks form an ASM disk group. As in LVM, an ASM disk is the equivalent of a physical volume. Unlike in LVM, several ASM disks that share a common point of failure (such as a disk controller) can be grouped into a failure group. An ASM disk group can store the physical database structures: data files, control files, redo logs, and other file types. In contrast to a logical volume manager (LVM) on Linux, no logical volumes are created on top of a disk group; instead, all the files of a database are logically grouped into directories within the disk group. ASM needs no file system, which is why it has a performance advantage over a traditional LVM stack.
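
For orientation, disk groups and their member disks can be inspected from SQL*Plus on any instance; a minimal sketch (column lists trimmed for readability):

    SQL> select name, type, total_mb, free_mb from v$asm_diskgroup;
    SQL> select group_number, name, failgroup, path from v$asm_disk;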

Grid Infrastructure introduces the ASM Cluster File System (ACFS), which removes the restriction on storing general-purpose files. ASM uses the stripe-and-mirror-everything approach to provide the best performance.
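
A rough sketch of how an ACFS file system is created on 11.2 (disk group, volume name, size, mount point, and the generated device suffix are all illustrative; volumes can also be registered in the ACFS mount registry instead of being mounted by hand):

    # inside asmcmd: carve an ASM volume out of a disk group
    ASMCMD> volcreate -G DATA -s 10G acfsvol
    ASMCMD> volinfo -G DATA acfsvol          # shows the volume device, e.g. /dev/asm/acfsvol-123
    # back at the shell, as root: format and mount the volume
    mkfs -t acfs /dev/asm/acfsvol-123
    mount -t acfs /dev/asm/acfsvol-123 /u01/app/acfsmounts/data_acfsvol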

The use of ASM and ACFS is not limited to clusters; single-instance Oracle can also benefit from them. Technically, Oracle ASM runs as a special kind of Oracle instance, with its own SGA but without a persistent data dictionary. In RAC, each cluster node runs exactly one ASM instance. At startup, each instance detects the ASM disk group resources registered in Grid Infrastructure through the initialization parameters kept by the cluster software, and mounts those disk groups. Given the correct permissions (ASM 11.2 introduced access control lists, ACLs), a database can access its own data files. Using ASM implies using Oracle Managed Files (OMF), which means a different way of managing database files. Initialization parameters in the RDBMS instance, such as db_create_file_dest, db_create_online_log_dest_n, and db_recovery_file_dest, specify which disk group the related files are stored in. When a new file needs to be created, OMF names it in the following format: +diskGroupName/dbUniqueName/file_type/file_type_tag.file.incarnation. For example: +DATA/ORADB/datafile/users.293.699134381
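
A minimal sketch of OMF at work in an RDBMS instance (the disk group and tablespace names are illustrative):

    SQL> alter system set db_create_file_dest = '+DATA';
    SQL> create tablespace app_data;    -- no file name needed: ASM and OMF choose one
    SQL> select file_name from dba_data_files where tablespace_name = 'APP_DATA';
    -- the file name follows the +DATA/<db_unique_name>/datafile/app_data.<file#>.<incarnation> pattern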

ASM allows many operations to be performed online, and from ASM 11.1 onwards it can be upgraded in a rolling fashion to minimize the impact on the database.
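
The rolling upgrade is bracketed from the ASM instances themselves; a hedged sketch of the commands (the target version string is illustrative):

    -- on one ASM instance, before the ASM homes are upgraded node by node
    SQL> alter system start rolling migration to '11.2.0.2.0';
    -- once every node is running the new version
    SQL> alter system stop rolling migration;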

ASM operates at the raw-partition level and avoids the use of LVM2 logical volumes, keeping an extra software layer out of the production system. ASM is also supported on NFS; however, instead of mounting the exported directory directly, you use the dd tool to create zero-filled files that are presented to ASM as disks. When using NFS, work with your vendor and ask them for their best-practice documentation.
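
A small sketch of the NFS approach (mount point, size, and ownership are illustrative; the NFS mount options themselves must follow the platform-specific recommendations):

    # create a 10 GB zero-filled file on the NFS mount to serve as an ASM disk
    dd if=/dev/zero of=/oradata_nfs/asmdisk01 bs=1M count=10240
    chown oracle:dba /oradata_nfs/asmdisk01
    # make sure the ASM discovery string covers the new location, for example
    #   asm_diskstring = '/oradata_nfs/asmdisk*'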

Environments with special requirements, such as massive databases beyond 10 TB, can benefit from the extent size being configurable at the disk group level. A common storage optimization is to place data only on the outer edge of the platters, which delivers higher performance than the other regions. ASM's Intelligent Data Placement lets administrators define hot regions with higher speed and bandwidth; frequently accessed files can be placed there to improve performance. Hard-disk manufacturers are about to ship drives with a 4K sector size, offering higher storage density, higher speed, and larger capacity. ASM is prepared for this with a disk group attribute called sector size, which can be set to 512 bytes or 4K.
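
Both of these knobs are disk group attributes set at creation time; a hedged sketch (disk paths, names, and values are illustrative):

    SQL> create diskgroup bigfile_data external redundancy
           disk '/dev/asm-disk5', '/dev/asm-disk6'
           attribute 'au_size'          = '4M',     -- larger allocation units for a multi-terabyte database
                     'sector_size'      = '4096',   -- for 4K-sector drives
                     'compatible.asm'   = '11.2',
                     'compatible.rdbms' = '11.2';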

In most installations the typical workflow looks like this: storage administrators provision the storage for ASM disks on all nodes of the cluster; system administrators partition the new block devices, set up multipathing, and use ASMLib or udev to mark the partitioned block devices as candidate disks; after hand-over to the database team, the Oracle administrators configure the ASM disks and ASM disk groups. All of these operations can be done online without restarting the servers. A small example of the marking step is shown below.
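
Assuming ASMLib is used for the marking step, a minimal sketch (disk label and device name are illustrative; run as root):

    # on one node: stamp the partition so ASM can recognise it under a stable name
    /usr/sbin/oracleasm createdisk DATA01 /dev/sdb1
    # on the remaining cluster nodes: pick up the newly marked disk
    /usr/sbin/oracleasm scandisks
    /usr/sbin/oracleasm listdisks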

ASM disks

The ASM disk is the basic building block of ASM. When a candidate disk is added to a disk group, metadata is written to its header so that the ASM instance can recognize the disk and mount it into the disk group.

In a storage array, disk failures happen regularly: individual disks are used intensively, and it is normal for some of them to fail. In most cases the array can reconstruct the data of a failed disk from mirrors or parity information, depending on the protection level in use. Disk failures within ASM itself are less frequent, because in most deployments the LUNs used are already protected by the array. If a disk in an ASM-protected disk group does fail, however, the failed disk has to be replaced urgently to prevent it from being dropped. ASM 11.1 introduced a new attribute called disk repair time, which allows an administrator to ride out a brief disk failure without a global rebalance. When a disk is added to or removed from a disk group, a rebalance takes place and the contents of the disk group are re-striped across its members. Depending on the size of the disk group, this adjustment can be time-consuming. If the administrator manages to bring the failed ASM disk back into the disk group before the rebalance starts, the disk group returns to its normal state very quickly, because only the log of the regions changed in the meantime (dirty regions) needs to be applied rather than a complete rebalance. Depending on how the storage is provisioned, a LUN may be protected by the array's RAID level, or it may be unprotected storage (JBOD).

ASMLib and udev

Both ASMLib and udev solve the problem of non-persistent device names. In Linux, the order in which devices are detected and enumerated is not fixed. This differs from Solaris, for example, where the device name (such as c0t0d1p1) does not change unless a disk is physically moved in the array. Reconfiguring a storage array that is not multipathed can cause real problems on Linux: a device that originally appeared as /dev/sda may be remapped to /dev/sdg after a reboot, simply because the operating system detected it a little later than at the previous boot. Raw device mappings based on device names are therefore doomed to fail.

Consider the udev solution first. The world-wide ID (WWID) of a SCSI device does not change, and udev exploits this: a rule can be written that always maps, for example, /dev/raw/raw1 to the LUN whose SCSI ID has a particular value. The main problem with udev is that its configuration is neither intuitive nor easy to use, and because the configuration is not replicated automatically, the administrator has to maintain it on every node of the cluster. (You can use udevinfo -q path -n /dev/sda1 to see the udev device path under /sys for /dev/sda1.) Multipathed storage configured this way is not a problem, because another software layer (for example the device-mapper-multipath package, or vendor-specific software) creates a stable logical device.
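
A rough sketch of what such a rule can look like on a RHEL 5-era system (the rule syntax and the scsi_id options vary between distribution releases, and the WWID, device name, and ownership shown here are purely illustrative):

    # /etc/udev/rules.d/99-oracle-asmdevices.rules
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", \
      RESULT=="360a98000686f6959684a453333524174", \
      NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
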
ASMLib provides another way. The ASMLib tools can be downloaded free of charge from http://oss.oracle.com and make the management of ASM disks very simple. ASMLib consists of three RPM packages: a kernel module, the actual asmlib, and a support-tools package.

Before using a LUN as an ASM disk, you mark it with the ASMLib tools, which add metadata to the disk header; ASMLib then recognizes the new LUN as a candidate for adding to an ASM disk group. On every reboot, ASMLib scans the disk headers to identify its ASM disks, regardless of what the physical device names have become during the boot process. It guarantees stable device names at very low cost. ASMLib includes a kernel module that allocates its own internal memory structures, and it can be used in both single-path and multipath configurations.

ASM disk groups

An ASM disk group has one of three redundancy levels: external redundancy, normal redundancy, and high redundancy. When you create an external redundancy disk group, ASM lets the storage array take responsibility for data protection and does no mirroring of its own; it only stripes across the ASM disks in the disk group, with a default stripe size of 1 MB. A write error can force an ASM disk to be dropped, which can have serious consequences: the extents on that disk have no other copy, and the entire disk group becomes unavailable. At the normal redundancy level, ASM stripes and mirrors each extent: when an extent is written to one disk, a copy is written to a disk in another failure group to provide redundancy. In ASM 11.2, striping and mirroring can be controlled for individual files, and two-way mirroring is the default. Normal redundancy can tolerate the failure of one ASM disk in the disk group. High redundancy provides a higher level of protection: by default it stripes and mirrors with two additional copies of each primary extent, and it can tolerate the failure of two ASM disks in the disk group.

Failure groups

A failure group is a logical group of disks that become unavailable together when a shared component fails. For example, the disks attached to one SCSI controller form a failure group; if that controller fails, none of those disks is available. With normal and high redundancy, ASM uses failure groups to store the mirrored copies of the data. If failure groups are not configured explicitly, each ASM disk forms its own failure group. A normal redundancy disk group needs at least two failure groups, and a high redundancy disk group at least three; it is advisable to use more failure groups than these minimums to gain additional protection. By default, ASM reads from the primary extent in an ASM disk group, and in an extended distance cluster this can cause performance problems if the primary extent is on the remote storage array. ASM 11.1 introduced the preferred mirror read to solve this: each ASM instance can be told to read from the local copy of an extent, whether that is the primary or the mirrored copy. A combined sketch follows.
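
Putting several of these pieces together, a hedged sketch of creating a normal redundancy disk group with explicit failure groups, a repair window, and a preferred-read setting (all names, paths, and values are illustrative):

    SQL> create diskgroup data normal redundancy
           failgroup ctrl1 disk '/dev/asm-disk1', '/dev/asm-disk2'
           failgroup ctrl2 disk '/dev/asm-disk3', '/dev/asm-disk4'
           attribute 'compatible.asm'   = '11.2',
                     'compatible.rdbms' = '11.2',
                     'disk_repair_time' = '7.2h';  -- how long an offlined disk may stay out before being dropped

    -- in an extended cluster, let this ASM instance read its local copies first
    SQL> alter system set asm_preferred_read_failure_groups = 'DATA.CTRL1' sid = '+ASM1';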

ASM installation and management options

Prior to Oracle 11.1, the best practice was to install ASM separately, which made it possible to upgrade the cluster software and ASM independently of the database. For example, the cluster software and ASM could be upgraded to 11.1.0.7 while the database stayed on its original version. With this practice there are three standard Oracle installation directories: cluster software, ASM, and the database. If required, ASM 11.1 can be installed under a different operating-system user than the RDBMS, which Oracle explains as the separation of roles between database and storage management, a common practice at many sites. ASM can be managed through SQL*Plus, Enterprise Manager (Database Console), or DBCA. In Oracle 11g Release 2, ASM is part of the Grid Infrastructure, whether in a single-instance or a RAC environment. A new configuration assistant, ASMCA, takes over and extends the functionality DBCA provided in 11.1, and ASM can no longer be run from the RDBMS Oracle home. ASMCA also adds support for another ASM feature, the ASM Cluster File System. The introduction of a new super-user role called SYSASM makes this separation of roles possible, much as SYSDBA did after Oracle 9i; SYSASM privileges can be granted to users that are distinct from the SYSOPER and SYSDBA users.
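
A small sketch of the role separation in practice (the storage_admin user is illustrative and assumed to exist in the ASM instance):

    $ sqlplus / as sysasm
    SQL> select name, state from v$asm_diskgroup;
    -- SYSASM can be held by a dedicated storage administrator, separate from SYSDBA
    SQL> grant sysasm to storage_admin;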
Reprint: http://blog.sina.com.cn/s/blog_5fe8502601016atp.html