ORACLE ASM Daily Management


ASM Overview
Automatic Storage Management (ASM) is a feature introduced in Oracle Database 10g that provides file system, logical volume manager, and software RAID services in a platform-independent manner. ASM can stripe and mirror disks, allows disks to be added or removed while the database remains online, and automatically balances I/O to eliminate "hotspots".
Files in ASM can be named automatically by the database (using the Oracle Managed Files feature) or manually by the DBA. Because the operating system cannot access files stored in ASM, the only way to back up and restore databases that use ASM files is through RMAN.
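Since operating-system tools cannot see inside ASM, backup and restore of an ASM-resident database go through RMAN. A minimal sketch with standard RMAN commands, run connected to the target database (nothing here is specific to this article's disk groups):

```sql
-- Run inside the RMAN client: rman target /
BACKUP DATABASE;               -- backs up all data files stored in ASM
BACKUP CURRENT CONTROLFILE;    -- the control file may also live in ASM
-- and, after a media failure:
RESTORE DATABASE;              -- writes the files back into the ASM disk group
RECOVER DATABASE;
```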

ASM is implemented as a separate Oracle instance, and database instances access it at run time. On Linux, ASM is only available if the OCSSD service is running (it is installed by default by the Oracle Universal Installer). For most systems, an ASM instance needs only about 64 MB of memory.
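For reference, a minimal ASM instance parameter file might look like the sketch below; the discovery path and disk group names are illustrative assumptions, not values from this article:

```sql
-- init+ASM.ora (illustrative values)
INSTANCE_TYPE   = ASM                        -- ASM instance, not a regular RDBMS instance
ASM_DISKSTRING  = '/dev/oracleasm/disks/*'   -- where to discover candidate disks
ASM_DISKGROUPS  = DATA, FRA                  -- disk groups to mount at startup
ASM_POWER_LIMIT = 1                          -- default rebalance speed
```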
Advantages of ASM
1. ASM is cross-platform: it works on all mainstream hardware platforms with the same management approach.
2. Data is distributed evenly across all the disks in a disk group, providing file-level striping and improving read/write performance.
3. Multiple redundancy levels are provided to protect data.
4. Online disk replacement is supported. Data is automatically redistributed after disks are added or removed, so fragmentation is not an issue.
ASM Related Concepts
ASM Disk Group
Apart from the ASM instance itself, the largest component of ASM storage management is the disk group. An ASM disk group consists of multiple ASM disks. A disk group can hold multiple data files, but a single data file must reside entirely within one disk group; it cannot span disk groups. Multiple databases can share one or more disk groups.
ASM Disk
An ASM disk can contain multiple files, and a single file can be spread over multiple disks, so disks and files have a many-to-many relationship. An ASM disk is divided into allocation units (AUs), 1 MB each by default. An Oracle data block is always placed within a single AU and never spans AUs; an AU in turn consists of multiple physical disk blocks (on Windows, the default file system block size is 4 KB). The AU is the smallest unit by which ASM grows or shrinks storage.
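The AU size actually in effect for each mounted disk group can be confirmed from V$ASM_DISKGROUP:

```sql
-- ALLOCATION_UNIT_SIZE is reported in bytes (1048576 = the 1 MB default)
SELECT name, allocation_unit_size, block_size
  FROM v$asm_diskgroup;
```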
ASM Fault Group
A failure group (FAILGROUP) is a logical grouping of ASM disks and represents how mirroring is organized within a disk group. If you do not assign ASM disks to failure groups explicitly, each ASM disk forms its own failure group. A disk group can be mirrored in three ways: external redundancy, normal redundancy, and high redundancy. In terms of failure groups, two failure groups give normal-redundancy mirroring, and three failure groups give high-redundancy mirroring.
External redundancy: ASM provides no mirroring; use this when the storage hardware already provides redundancy.
Normal redundancy: two-way mirroring; one mirror copy exists for each AU in a file.
High redundancy: three-way mirroring; two mirror copies exist for each AU in a file.
ASM mirroring rule: an AU (the primary AU) is never placed in the same failure group as its mirror copy.

ASM mirrors at the AU level. For example, suppose a file has 6 AUs and the disk group is two-way mirrored: let P1-P6 denote the primary AUs and M1-M6 their mirror copies. The three disks (assuming three) in failure group 1 might hold (P1, M6), (P2, M5), (P3, M4), while the three disks in failure group 2 hold (M1, P4), (M2, P5), (M3, P6).

SQL> create diskgroup test normal redundancy
       disk 'ORCL:LUN4' name lun4, 'ORCL:LUN3' name lun3
       failgroup fg1 disk 'ORCL:LUN1' name lun1, 'ORCL:LUN2' name lun2;

Diskgroup created.

SQL> select group_number, disk_number, name, failgroup, create_date, path
       from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME  FAILGROUP CREATE_DA PATH
------------ ----------- ----- --------- --------- -------------
           0           0                           /dev/raw/raw2
           0           1                           /dev/raw/raw1
           2           2 LUN1  FG1       07-DEC-12 ORCL:LUN1
           2           3 LUN2  FG1       07-DEC-12 ORCL:LUN2
           2           1 LUN3  LUN3      07-DEC-12 ORCL:LUN3
           2           0 LUN4  LUN4      07-DEC-12 ORCL:LUN4
           1           0 VOL1  VOL1      26-NOV-12 ORCL:VOL1

As you can see, LUN1 and LUN2 in disk group TEST belong to failure group FG1, while LUN3 and LUN4 each form their own failure group, named LUN3 and LUN4 respectively.
Note: you cannot change the redundancy level of an existing disk group; to change it, you must drop the disk group and recreate it.

The failure group exists mainly to control where ASM places extent mirrors. Within a failure group, each piece of data is unique; its mirror lives in a different failure group. Do not assume that one disk mirrors another disk one-to-one: placement is effectively random, and data is spread across the disks of each failure group. A failure group, no matter how many disks it contains, can be treated as one large logical disk. With normal redundancy, at most one failure group may be unavailable at a time, otherwise data is lost; with high redundancy, at most two failure groups may be unavailable at once.

For example, take 4 ASM disks forming one normal-redundancy disk group (ASM mirrors at the extent level): any extent on one disk may be mirrored on any of the other 3 disks. If we instead define 2 failure groups (FG1: disk1, disk2; FG2: disk3, disk4), then an extent on disk1 or disk2 can only be mirrored to disk3 or disk4, and vice versa. So for normal redundancy, if you want to tolerate several disks failing at the same time, simply define those disks as one failure group. Under normal redundancy, every extent in a failure group has exactly one mirror copy in another failure group; under high redundancy, every extent has exactly two mirror copies, distributed across different failure groups, which is why two failure groups can fail simultaneously. No matter how many failure groups you create, if none are specified explicitly, Oracle creates one failure group per ASM disk. On top of that, with external redundancy Oracle picks 1 failure group (by its own algorithm) to store each piece of data; with normal redundancy it picks 2 failure groups; with high redundancy it keeps 3 copies.


Manage ASM Disk Groups
An ASM disk group divides each disk into 1 MB allocation units (AUs). Data files in the disk group are split into 1 MB chunks and distributed evenly across all disks; this is called coarse striping. Redo logs and control files are small and need faster access, so ASM splits them into 128 KB chunks instead, which is called fine-grained striping. A 1 MB AU therefore holds several 128 KB chunks, and a single I/O is broken into multiple smaller I/Os that complete in parallel.
The disks in a disk group should match in size and speed. Do not create too many disk groups; two are usually enough: one for data and one for the flash recovery area. Where possible, put the data area and the flash recovery area on different physical channels, and mount all the disks you will need. For performance, use disks of similar size: if two failure groups FG1 and FG2 each consist of one disk, the two disks should be the same size, otherwise only the capacity of the smaller one is usable.
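Once the two recommended disk groups exist, the database is pointed at them through the OMF parameters; the group names +DATA and +FRA and the 20G quota below are illustrative assumptions:

```sql
ALTER SYSTEM SET db_create_file_dest        = '+DATA' SCOPE=BOTH;  -- new data files go here
ALTER SYSTEM SET db_recovery_file_dest_size = 20G     SCOPE=BOTH;  -- quota; set before the dest
ALTER SYSTEM SET db_recovery_file_dest      = '+FRA'  SCOPE=BOTH;  -- flash recovery area
```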
A failure group cannot be dropped directly; it is removed when its last disk is dropped. ASM supports hot-swapping disks: when a disk is added to a disk group, ASM automatically moves some AUs from every existing disk onto the new one, so that all disks hold roughly the same amount of data; when a disk is dropped, its AUs are spread evenly across the remaining disks. This process is called rebalancing. If a DROP DISK is issued without the FORCE keyword, the disk goes offline only after all of its data has been rebalanced, and its header status becomes FORMER.
Rebalance: the rebalance process runs automatically; its speed is adjusted through the ASM_POWER_LIMIT parameter.
In short, any change to the underlying storage (adding, dropping, or failing a disk) triggers a rebalance. ASM performs it automatically, without manual intervention, typically locking one disk region at a time.
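A running rebalance can be watched from V$ASM_OPERATION, which also estimates the work remaining:

```sql
-- One row per active operation; EST_MINUTES is ASM's estimate of time left
SELECT group_number, operation, state, power, sofar, est_work, est_rate, est_minutes
  FROM v$asm_operation;
```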

ASM has no data dictionary; instead, metadata is recorded in each physical disk's header, describing which disk group and failure group the disk belongs to. Each ASM disk is self-describing.
Operations Related to ASM Instances
SQL to check disk and disk group information:

col state format a10
col name format a15
col failgroup format a20
col header_status for a12
set line 200
select group_number, state, redundancy, total_mb, free_mb, name, failgroup, header_status
  from v$asm_disk;
select group_number, name, state, type, total_mb, free_mb, unbalanced
  from v$asm_diskgroup;


Create a new DiskGroup

CREATE DISKGROUP diskgroup_name
  [{HIGH | NORMAL | EXTERNAL} REDUNDANCY]
  [FAILGROUP failgroup_name]
  DISK [NAME disk_name] [SIZE size_clause] [FORCE | NOFORCE] ...;

CREATE DISKGROUP dgtest NORMAL REDUNDANCY
  FAILGROUP data1 DISK '/dev/oracleasm/disks/vol5' NAME data1
  FAILGROUP data2 DISK '/dev/oracleasm/disks/vol6' NAME data2;


When using ASMLib, you can use the ORCL: prefix instead of /dev/oracleasm/disks/, e.g. ORCL:VOL5.
Delete DiskGroup

DROP DISKGROUP <diskgroup_name> [INCLUDING CONTENTS] [FORCE];
DROP DISKGROUP data INCLUDING CONTENTS;

The default clause is EXCLUDING CONTENTS, which raises an error if the disk group still contains data; in that case you must add:
INCLUDING CONTENTS: removes all files from the disk group.

FORCE: clears the disk header information. In a clustered environment, the statement fails if the disk group is in use or mounted on another node.
Note: in a multi-node environment, a disk group can only be dropped while it is mounted by exactly one ASM instance; all other nodes must dismount it first.
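In a cluster, which instances currently have a disk group mounted can be checked from the global view (a sketch; GV$ views add an INST_ID column per instance):

```sql
-- STATE should be DISMOUNTED on all but one node before the drop
SELECT inst_id, name, state
  FROM gv$asm_diskgroup;
```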
Manual Mount command

ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP <diskgroup_name> DISMOUNT;
ALTER DISKGROUP <diskgroup_name> MOUNT;

Disk member Management
Add Disk to DiskGroup

ALTER DISKGROUP data ADD DISK '/dev/oracleasm/vol5' NAME vol5, '/dev/oracleasm/vol6' NAME vol6;

Delete disk from DiskGroup

ALTER DISKGROUP data DROP DISK vol5;


Cancel a pending disk drop; this is only effective while the drop above has not yet completed:

ALTER DISKGROUP data UNDROP DISKS;


Add a member to each fault group of DG2

ALTER DISKGROUP dg2
  ADD FAILGROUP fg1 DISK '/dev/oracleasm/disks/vol7'
  ADD FAILGROUP fg2 DISK '/dev/oracleasm/disks/vol8'
  ADD FAILGROUP fg3 DISK '/dev/oracleasm/disks/vol9';


Data File Aliases
Creating an alias

ALTER DISKGROUP <diskgroup_name> ADD ALIAS <alias_name> FOR '<asm_file>';
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
  FOR '+disk_group_1/mydb/datafile/my_ts.342.3';


Note: in 10g, only ASM files created with OMF can be given aliases (not tested on 11g), and the alias must be in the same disk group as the original file name, as with +disk_group_1 in the example above.
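Whether a given entry is an OMF-generated name or a user-defined alias can be read from V$ASM_ALIAS:

```sql
-- SYSTEM_CREATED = 'Y' marks OMF-generated names; user aliases show 'N'
SELECT name, system_created, alias_directory
  FROM v$asm_alias;
```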
Renaming aliases

ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf'
  TO '+disk_group_1/my_dir/my_file2.dbf';


Remove Alias

ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';


Delete a data file using an alias

ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';


Delete a data file using its fully qualified name

ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';

Viewing alias information

SELECT * from V$asm_alias;

Manual rebalance

ALTER DISKGROUP dg2 REBALANCE POWER 3 WAIT;

Add a directory to a disk group


ALTER DISKGROUP dg2 ADD DIRECTORY '+dg2/datafile';

Note: all parent directories must already exist, or you will get error ORA-15173.

SQL> alter diskgroup t add directory '+t/czmmiao/ss/s';
alter diskgroup t add directory '+t/czmmiao/ss/s'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15173: entry 'ss' does not exist in directory 'czmmiao'


After the error, create the directories level by level:

SQL> alter diskgroup t add directory '+t/czmmiao/ss';
Diskgroup altered.
SQL> alter diskgroup t add directory '+t/czmmiao/ss/s';
Diskgroup altered.

Renaming a directory
SQL> alter diskgroup t rename directory '+t/czmmiao/ss' to '+t/czmmiao/sa';
Diskgroup altered.

Delete Directory

ALTER DISKGROUP dg2 DROP DIRECTORY '+dg2/dtfile';

Creating a tablespace without OMF

CREATE TABLESPACE iotest DATAFILE '+data/iotest.dbf' SIZE 100M;

Querying the mapping relationship of ASM disks and system partitions

# /etc/init.d/oracleasm querydisk -p VOL1
Disk "VOL1" is a valid ASM disk
/dev/sdb6: LABEL="VOL1" TYPE="oracleasm"

ASM's Disk group Dynamic rebalancing

An important feature of ASM is online disk reconfiguration and dynamic rebalancing. When disks are added to or removed from an existing disk group, ASM adjusts the data distribution to keep I/O balanced, which also reduces hot blocks. It does this with an indexing technique that maps allocation units onto the available disks: ASM does not re-stripe all the data; it moves only an amount of data proportional to the storage added or removed, redistributing files evenly across the disks of the group while keeping the I/O load balanced among them.

Rebalancing runs automatically but can also be controlled manually. If no rebalance speed is specified in the disk operation, ASM uses the rate given by the ASM_POWER_LIMIT parameter. ASM_POWER_LIMIT ranges from 0 to 11: the higher the value, the faster the rebalance and the greater the extra load on the system; 0 suspends rebalancing, and the default is 1. Choose a rate that suits your hardware and workload; for example, you can add disks without rebalancing during busy hours and rebalance when the system is idle. If several disks must be added or removed, perform all the disk operations first (or handle several disks in a single statement) and rebalance once at the end, avoiding unnecessary data movement. To carry out rebalancing, Oracle introduced a new background process, RBAL.

The rebalancing operations are as follows:

SQL> alter diskgroup oradg add disk 'ORCL:VOL6' rebalance power 11;
SQL> show parameter power

NAME                        TYPE       VALUE
--------------------------- ---------- ------
asm_power_limit             integer    1

SQL> alter diskgroup dgroupb rebalance power 5;

Related views for ASM disks
V$ASM_DISK(_STAT) -- disk and disk status information
V$ASM_DISKGROUP(_STAT) -- disk groups and their status
V$ASM_OPERATION -- currently running disk operations
V$ASM_CLIENT -- client (database) instances currently connected
V$ASM_FILE -- information about ASM files
V$ASM_TEMPLATE -- information about ASM file templates
V$ASM_ALIAS -- alias information for ASM files

Common ASM failures
1. An error occurs while creating a disk: check the ASM log
tail -f /var/log/oracleasm
2. ORA-29701 error when starting the ASM instance
ORA-29701: unable to connect to Cluster Manager
The first time, the CSS service must be enabled. As root, run:

$ORACLE_HOME/bin/localconfig add

If the same error appears the next time you start the instance:
ORA-29701: unable to connect to Cluster Manager
then check /etc/inittab for the following line:
h1:35:respawn:/etc/init.d/init.cssd run >/dev/null 2>&1 </dev/null
If it is missing, add it; if it is commented out, uncomment it (as root).
You can also run /u01/oracle/10g/bin/localconfig reset as root to resolve the problem.

If it hangs for a long time, you can do the following:

$ORACLE_HOME/bin/localconfig delete
$ORACLE_HOME/root.sh

3. Disk Search Path Issues

SQL> create diskgroup dg1 normal redundancy disk 'ORCL:VOL1', 'ORCL:VOL2';
create diskgroup dg1 normal redundancy disk 'ORCL:VOL1', 'ORCL:VOL2'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification 'ORCL:VOL2' matches no disks
ORA-15031: disk specification 'ORCL:VOL1' matches no disks

After disks are created with oracleasm, the corresponding device mappings appear under the /dev/oracleasm/disks directory. If the discovery string does not find them, modify ASM_DISKSTRING to point at that path and then create the disk group again:

ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/disks/vol*';

Note: after the ASM instance has been configured and the ASM disk group created, the ASM instance must also be registered with the listener before the DB instance can use it; otherwise, register it manually:

SQL> alter system register;
