How to Control Parallel Operations Across Oracle RAC Nodes

A major advantage of Oracle RAC is that it can run parallel operations across nodes. How can we control that cross-node parallelism? That is the subject of this article.

To control cross-node parallelism, you first need to set two initialization parameters:
instance_groups: declares which instance groups the node's instance belongs to. Each node can be assigned multiple instance group names, separated by commas.
parallel_instance_group: takes one of the values declared in instance_groups, and restricts parallel operations issued from this node to the instances that belong to that group.

alter system set instance_groups = 'DW','dw1','dw123','dw124','dw134' scope=spfile sid='dwrac1';
alter system set instance_groups = 'DW','dw2','dw123','dw124','dw234' scope=spfile sid='dwrac2';
alter system set instance_groups = 'DW','dw3','dw123','dw134','dw234' scope=spfile sid='dwrac3';
alter system set instance_groups = 'DW','dw4','dw124','dw134','dw234' scope=spfile sid='dwrac4';

The preceding statements assign each of the four nodes to five instance groups (nine distinct groups in total): DW contains all four nodes, dw1 through dw4 each contain a single node, and dw123, dw124, dw134 and dw234 each contain three nodes.
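To verify what each instance actually reports after the restart, you can query the setting cluster-wide. A minimal sketch, assuming your account can read the GV$ views:

select i.inst_id, i.instance_name, p.value as instance_groups
  from gv$instance i, gv$parameter p
 where p.inst_id = i.inst_id
   and p.name = 'instance_groups'
 order by i.inst_id;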

1. Execution when parallel_instance_group is not set

The test covers two cases: instance_groups on the issuing node includes a group spanning all nodes, and instance_groups on the issuing node contains only the local node.

1. instance_groups includes all nodes

1). Node 2 parameter settings
admin@dwrac2> show parameter instance_group

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_groups                      string      DW, dw2, dw123, dw124, dw234
parallel_instance_group              string

Only instance_groups is defined; parallel_instance_group is not set.
As configured earlier, the DW instance group contains all four nodes.

2). Issue a query on node 2
admin@dwrac2> select /*+ parallel(a, 128) */ count(1) from testuser.big_table a;

3). Open another window to query the parallel processes

SQL> select inst_id, count(1) from gv$px_session p
     where (p.inst_id, p.sid) in
           (select s.inst_id, s.sid from gv$session s
             where s.username = 'ADMIN' and s.module like 'SQL*%')
     group by inst_id;

   INST_ID   COUNT(1)
---------- ----------
         1         32
         2         33
         4         32
         3         32

As you can see, the parallel sessions (parallel slaves) are spread almost evenly across the four nodes. Node 2 shows one extra session because it initiated the query: that extra session is the parallel query coordinator, and the rest are parallel slave processes.
The slaves can be allocated to all four nodes because all four belong to the DW instance group and parallel_instance_group is not set.
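To separate the coordinator from the slaves explicitly, note that in GV$PX_SESSION the coordinator's own row is the one whose SID equals QCSID. A hedged variant of the monitoring query, under that assumption:

select inst_id,
       sum(case when sid = qcsid then 1 else 0 end) as coordinators,
       sum(case when sid <> qcsid then 1 else 0 end) as slaves
  from gv$px_session
 group by inst_id
 order by inst_id;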

2. instance_groups only contains one node

1). Modify parameters
admin@dwrac2> alter system set instance_groups='dw2' scope=spfile sid='dwrac2';

Restart the instance for the spfile change to take effect.
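One way to bounce just this instance is srvctl; a sketch, where the database name dwrac is assumed for illustration:

srvctl stop instance -d dwrac -i dwrac2
srvctl start instance -d dwrac -i dwrac2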

admin@dwrac2> show parameter instance_group

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_groups                      string      dw2
parallel_instance_group              string

parallel_instance_group is still not set, and instance_groups = 'dw2' now contains only node 2.

2). Issue the parallel query
admin@dwrac2> select /*+ parallel(a, 128) */ count(1) from testuser.big_table a;

3). Open another window to query the parallel processes

SQL> select inst_id, count(1) from gv$px_session p
     where (p.inst_id, p.sid) in
           (select s.inst_id, s.sid from gv$session s
             where s.username = 'ADMIN' and s.module like 'SQL*%')
     group by inst_id;

   INST_ID   COUNT(1)
---------- ----------
         1         32
         2         33
         4         32
         3         32

As you can see, even though instance_groups = 'dw2' on node 2 names only the local node, parallelism can still span all four nodes when parallel_instance_group is not set.
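In all of these tests you can also check whether the requested degree of parallelism was actually granted. A small sketch using the REQ_DEGREE and DEGREE columns of GV$PX_SESSION:

select distinct inst_id, qcsid, req_degree, degree
  from gv$px_session
 where req_degree is not null;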

2. Execution when parallel_instance_group is set

This test covers two cases: the issuing node is a member of the group named by parallel_instance_group, and the issuing node is not a member of that group.
1. The issuing node is a member of the parallel_instance_group.
1) Modify parameters on node 2

admin@dwrac2> show parameter instance_group

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_groups                      string      DW, dw2, dw123, dw124, dw234
parallel_instance_group              string

admin@dwrac2> alter session set parallel_instance_group = 'dw2';

Session altered.

-- This setting forces parallel operations issued from this session to run only on instances in the dw2 group.
-- Note: parallel_instance_group is case sensitive. If you run alter session set parallel_instance_group = dw2 without quotation marks, the value is converted to uppercase DW2, which belongs to no instance group, so no parallel slaves are started.
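To make the pitfall concrete, a short illustration using this article's group names:

admin@dwrac2> alter session set parallel_instance_group = dw2;    -- unquoted: stored as uppercase DW2, matches no group, queries run serially
admin@dwrac2> alter session set parallel_instance_group = 'dw2';  -- quoted: matches the dw2 group as intended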

2). Issue the same query on node 2
admin@dwrac2> select /*+ parallel(a, 128) */ count(1) from testuser.big_table a;

3) Initiate another session to query the parallel distribution

SQL> select inst_id, count(1) from gv$px_session p
     where (p.inst_id, p.sid) in
           (select s.inst_id, s.sid from gv$session s
             where s.username = 'ADMIN' and s.module like 'SQL*%')
     group by inst_id;

   INST_ID   COUNT(1)
---------- ----------
         2        129

As you can see, because only node 2 belongs to the instance group dw2, all 129 parallel sessions (the coordinator plus 128 slaves) are on node 2.
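Another way to confirm where the slaves landed is to count the active PX server processes per instance. A sketch using GV$PX_PROCESS:

select inst_id, count(*) as px_servers
  from gv$px_process
 where status = 'IN USE'
 group by inst_id
 order by inst_id;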

2. The issuing node is not a member of the parallel_instance_group.

1) Modify parameters on node 2

admin@dwrac2> show parameter instance_group

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_groups                      string      DW, dw2, dw123, dw124, dw234
parallel_instance_group              string

admin@dwrac2> alter session set parallel_instance_group = 'dw1';

Session altered.

According to our previous settings, only Node 1 belongs to the instance group dw1.

2) Issue a query on node 2

admin@dwrac2> select /*+ parallel(a, 128) */ count(1) from testuser.big_table a;

3) Start a new session to query the parallel process information

SQL> select inst_id, count(1) from gv$px_session p
     where (p.inst_id, p.sid) in
           (select s.inst_id, s.sid from gv$session s
             where s.username = 'ADMIN' and s.module like 'SQL*%')
     group by inst_id;

   INST_ID   COUNT(1)
---------- ----------
         1        128
         2          1

As you can see, apart from the parallel query coordinator (the session that issued the query) on node 2, all 128 parallel slaves run on node 1.
This shows that even if the node issuing the parallel query does not belong to the parallel_instance_group, it can still use the resources of the nodes that do.

Now repeat the test with a three-node group:

admin@dwrac2> alter session set parallel_instance_group = 'dw134';

Session altered.

Elapsed: 00:00:00.00
admin@dwrac2> select /*+ parallel(a, 128) */ count(1) from testuser.big_table a;

SQL> select inst_id, count(1) from gv$px_session p
     where (p.inst_id, p.sid) in
           (select s.inst_id, s.sid from gv$session s
             where s.username = 'ADMIN' and s.module like 'SQL*%')
     group by inst_id;

   INST_ID   COUNT(1)
---------- ----------
         1         42
         2          1
         4         42
         3         44

The 128 slaves are spread across nodes 1, 3 and 4 (the members of dw134), while node 2 holds only the coordinator session.

Summary:
1. In RAC, instance_groups and parallel_instance_group give flexible control over how parallel operations run across nodes.
2. parallel_instance_group is case sensitive. If its value does not match any instance_groups value anywhere in the cluster, Oracle runs the statement serially and starts no parallel slaves.
3. If parallel_instance_group is not set, parallelism can span all nodes, regardless of how instance_groups is set.
4. If a node sets a valid parallel_instance_group, parallel operations issued from that node run on all nodes in that group, whether or not the group includes the issuing node. In other words, where the parallel slaves run depends only on instance_groups and parallel_instance_group, not on which node issued the statement.
5. We recommend defining appropriate instance_groups, but not setting parallel_instance_group at the system level; instead set it at the session level as needed (a sketch follows the examples below).
6. Examples and descriptions
6. Examples and descriptions

dwrac1.instance_groups = 'DW','dw1','dw123','dw124','dw134'
dwrac2.instance_groups = 'DW','dw2','dw123','dw124','dw234'
dwrac3.instance_groups = 'DW','dw3','dw123','dw134','dw234'
dwrac4.instance_groups = 'DW','dw4','dw124','dw134','dw234'

dwrac1.parallel_instance_group = ''      -- parallel requests issued from node 1 can run across all nodes
dwrac1.parallel_instance_group = 'DW'    -- parallel requests issued from node 1 can run across all nodes
dwrac1.parallel_instance_group = 'dw1'   -- parallel requests issued from node 1 run only on node 1
dwrac1.parallel_instance_group = 'dw2'   -- parallel requests issued from node 1 run only on node 2
dwrac2.parallel_instance_group = 'dw134' -- parallel requests issued from node 2 run only on nodes 1, 3 and 4
dwrac1.parallel_instance_group = 'other' -- matches no group: no parallelism, statements run serially
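A minimal sketch of the session-level pattern recommended in point 5, reusing this article's group and table names:

admin@dwrac2> alter session set parallel_instance_group = 'dw134';  -- restrict slaves to nodes 1/3/4
admin@dwrac2> select /*+ parallel(a, 128) */ count(1) from testuser.big_table a;
admin@dwrac2> alter session set parallel_instance_group = '';       -- back to the default: slaves may use all nodes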