Event-Based Scheduling in Oracle Scheduler (2) [by weber]


I. Review

Oracle Scheduler jobs fall into two categories: time-based scheduling and event-based scheduling.

For a brief review of time-based scheduling, see Time-Based Scheduling in Oracle Scheduler (1) [by weber].
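As a quick contrast with the event-based jobs shown below, a time-based job is driven purely by a calendar expression in repeat_interval. A minimal sketch (the job name and PL/SQL block here are hypothetical, not from the original article):

begin
  dbms_scheduler.create_job(
    job_name        => 'TIME_BASED_DEMO',        -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin null; end;',       -- placeholder action
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',   -- run every day at 02:00
    enabled         => TRUE);
end;
/

An event-based job, by contrast, has no repeat_interval; it specifies an event_condition and a queue_spec, as the examples in section III show.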

II. Background Knowledge

1. Queue: a data structure, similar to a pipe, in which messages are inserted at one end and delivered from the other, strictly first in, first out (FIFO).

2. Advanced Queuing (AQ):

A. Advanced Queuing is a feature of the Oracle database that provides message queuing functionality. It is a reliable, secure, and scalable messaging system because it uses the same database infrastructure as other Oracle-based applications.

B. A major advantage of Advanced Queuing is that it can be accessed through PL/SQL, Java, or C. For example, a Java servlet can enqueue a message and a PL/SQL stored procedure can dequeue that same message.
C. Another advantage is that messages can be propagated between remote nodes through Oracle Net Services (SQL*Net), HTTP(S), and SMTP. Through the Messaging Gateway, advanced queues can even be integrated with non-Oracle messaging systems (such as IBM MQSeries).
D. Oracle Advanced Queuing provides single-consumer and multi-consumer queues. A single-consumer queue targets a single receiver, while a multi-consumer queue can be shared by multiple recipients. When a message is placed in a multi-consumer queue, the application must either explicitly name the recipients in the message properties, or set up a rule-based subscription that determines the recipients of each message (see the sketch after this list).
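For point D, registering a recipient on a multi-consumer queue is done with DBMS_AQADM.ADD_SUBSCRIBER. A minimal sketch (the subscriber name and rule are made up for illustration; the queue referenced is the event_queue created later in this article):

begin
  dbms_aqadm.add_subscriber(
    queue_name => 'event_queue',                                      -- multi-consumer queue
    subscriber => sys.aq$_agent('demo_subscriber', null, null),       -- hypothetical recipient
    rule       => 'tab.user_data.event_name = ''give_me_an_event'''); -- rule-based subscription
end;
/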

 

III. Event-Based Scheduling

Create a test table

conn hr/hr

create table event_job_test(id number, createdate date);
alter table event_job_test add constraint pk_event_job_test primary key(id);
create sequence seq_event_job_test;

Create a type:

create or replace type t_event_queue as object(
  object_owner varchar2(50),
  event_name   varchar2(50));
/

Create a queue table. The columns of this queue table are the attributes of the t_event_queue type we just created.

conn /as sysdba
grant execute on dbms_aqadm to hr;

conn hr/hr
begin
  dbms_aqadm.create_queue_table(
    queue_table        => 'event_queue_tab',
    queue_payload_type => 't_event_queue',
    multiple_consumers => true);
end;
/

Create a queue and associate it with the queue table created above.

begin
  dbms_aqadm.create_queue(
    queue_name  => 'event_queue',
    queue_table => 'event_queue_tab');
end;
/

Start the queue

begin
  dbms_aqadm.start_queue(queue_name => 'event_queue');
end;
/
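To verify that the queue is ready to accept messages, one can check the data dictionary (a sketch, not part of the original steps):

select name, enqueue_enabled, dequeue_enabled
  from user_queues
 where name = 'EVENT_QUEUE';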

Create an event-based job

conn /as sysdba

BEGIN
  sys.dbms_scheduler.create_job(
    job_name        => '"HR"."EVENT_BASE_JOB"',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin insert into hr.event_job_test values(seq_event_job_test.nextval, sysdate); commit; end;',
    event_condition => 'tab.user_data.object_owner = ''HR'' and tab.user_data.event_name = ''give_me_an_event''',
    queue_spec      => 'HR.EVENT_QUEUE',
    start_date      => systimestamp at time zone '+8:00',
    job_class       => 'DEFAULT_JOB_CLASS',
    auto_drop       => FALSE,
    enabled         => TRUE);
END;
/

Insert a message into the queue

Before enqueuing, query the table; it contains no data.

conn hr/hr
select * from event_job_test;

Now enqueue the message:

conn /as sysdba
grant execute on dbms_aq to hr;

conn hr/hr
declare
  l_enqueue_options    dbms_aq.enqueue_options_t;
  l_message_properties dbms_aq.message_properties_t;
  l_message_handle     raw(16);
  l_queue_msg          t_event_queue;
begin
  l_queue_msg := t_event_queue('HR', 'give_me_an_event');
  dbms_aq.enqueue(
    queue_name         => 'event_queue',
    enqueue_options    => l_enqueue_options,
    message_properties => l_message_properties,
    payload            => l_queue_msg,
    msgid              => l_message_handle);
  commit;
end;
/

select * from event_job_test;
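The job fires asynchronously once the enqueue is committed, so the new row may take a moment to appear. To confirm that the job actually ran, one can query the Scheduler dictionary views, roughly like this (a sketch, not part of the original steps):

-- as HR: is the job enabled, and how many times has it run?
select job_name, enabled, run_count
  from user_scheduler_jobs
 where job_name = 'EVENT_BASE_JOB';

-- per-execution details
select job_name, status, actual_start_date
  from user_scheduler_job_run_details
 where job_name = 'EVENT_BASE_JOB'
 order by actual_start_date desc;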

Drop the job:

begin
  dbms_scheduler.drop_job(job_name => '"HR"."EVENT_BASE_JOB"', force => true);
end;
/
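If the queue objects themselves are no longer needed, they can be removed as well. A sketch, run as HR, assuming nothing else still uses the queue:

begin
  dbms_aqadm.stop_queue(queue_name => 'event_queue');
  dbms_aqadm.drop_queue(queue_name => 'event_queue');
  dbms_aqadm.drop_queue_table(queue_table => 'event_queue_tab');
end;
/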

Create an event-based job that loads data

Create a test table

conn scott/tiger
create table t as select * from emp where 1=2;

vi /u01/load.ctl

load data
infile '/u01/data.txt'
badfile '/u01/bad.emp'
discardfile '/u01/discard.emp'
truncate
into table t
fields terminated by ','
trailing nullcols
(EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)

vi /u01/load.sh

#!/bin/bash
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2/db_1
export ORACLE_SID=orcl
$ORACLE_HOME/bin/sqlldr scott/tiger control=/u01/load.ctl log=/u01/load.log

Save and exit

chmod +x /u01/load.sh

Dump the data in emp to /u01/data.txt:

set trims on
spool /u01/data.txt

select EMPNO    ||','||
       ENAME    ||','||
       JOB      ||','||
       MGR      ||','||
       HIREDATE ||','||
       SAL      ||','||
       COMM     ||','||
       DEPTNO
  from emp;

spool off
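For the spooled file to load cleanly, the SQL*Plus output decorations generally have to be switched off as well; a typical set of settings (an assumption about the environment, not shown in the original) is:

set heading off    -- no column headers in the spool file
set feedback off   -- no "14 rows selected." trailer
set pagesize 0     -- no page breaks or titles
set linesize 200   -- keep each record on one physical line
set trimspool on   -- strip trailing blanks ("set trims on" above is the short form)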

Create a type:

sqlplus scott/tiger

create or replace type t_event_queue as object(
  object_owner    varchar2(10),
  object_name     varchar2(20),
  event_type      varchar2(20),
  event_timestamp number(2));
/

Create a queue table. The columns of this queue table are the attributes of the t_event_queue type we just created.

conn /as sysdba
grant execute on dbms_aqadm to scott;

conn scott/tiger
begin
  dbms_aqadm.create_queue_table(
    queue_table        => 'event_queue_tab',
    queue_payload_type => 't_event_queue',
    multiple_consumers => true);
end;
/

Create a queue and associate it with the queue table created above

begin
  dbms_aqadm.create_queue(
    queue_name  => 'event_queue',
    queue_table => 'event_queue_tab');
end;
/

Start the queue

begin
  dbms_aqadm.start_queue(queue_name => 'event_queue');
end;
/

Create an event-based job

conn /as sysdba

BEGIN
  sys.dbms_scheduler.create_job(
    job_name        => '"SYS"."PERFORM_DATA_LOAD"',   -- the job owner must be SYS
    job_type        => 'EXECUTABLE',
    job_action      => '/u01/load.sh',
    -- run the job only if the batch data file arrives on the file system before 9 a.m.
    event_condition => 'tab.user_data.object_owner = ''SCOTT'' and tab.user_data.object_name = ''DATA.TXT'' and tab.user_data.event_type = ''FILE_ARRIVAL'' and tab.user_data.event_timestamp < 9',
    queue_spec      => 'SCOTT.EVENT_QUEUE',
    start_date      => systimestamp at time zone '+8:00',
    job_class       => 'DEFAULT_JOB_CLASS',
    auto_drop       => FALSE,
    enabled         => TRUE);
END;
/

Insert a message into the queue. Before enqueuing, confirm that table T is empty:

conn scott/tiger
select * from t;

Now enqueue the message:

conn /as sysdba
grant execute on dbms_aq to scott;

conn scott/tiger
declare
  l_enqueue_options    dbms_aq.enqueue_options_t;
  l_message_properties dbms_aq.message_properties_t;
  l_message_handle     raw(16);
  l_queue_msg          t_event_queue;
begin
  l_queue_msg := t_event_queue('SCOTT', 'DATA.TXT', 'FILE_ARRIVAL', 8);
  dbms_aq.enqueue(
    queue_name         => 'event_queue',
    enqueue_options    => l_enqueue_options,
    message_properties => l_message_properties,
    payload            => l_queue_msg,
    msgid              => l_message_handle);
  commit;
end;
/

select * from t;

Drop the job:

conn /as sysdba

begin
  dbms_scheduler.drop_job(job_name => '"SYS"."PERFORM_DATA_LOAD"', force => true);
end;
/


To sum up, notes for calling a shell script from an Oracle Scheduler job:

1. The shell script must begin with an interpreter line such as #!/bin/bash.
2. All required environment variables must be set explicitly inside the script.
3. Any file the script reads or writes must be referenced by an absolute path.
4. Create the job as the SYS user.


What default scheduled tasks exist in an Oracle system, what is each one for, and how can they be switched on and off?

1. Scheduled tasks come from two places: the operating system scheduler (for example, crontab on Linux/UNIX, or the task scheduler on Windows) and the database's own job scheduling mechanism, such as Oracle's DBMS_JOB/DBMS_SCHEDULER jobs (a sketch of how to list and switch Oracle's automated maintenance tasks follows).
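On the Oracle side, assuming 11g, the built-in automated maintenance tasks can be listed and switched roughly like this (a sketch; the client name shown is the standard one, but verify it on your own version):

-- list the automated maintenance tasks and their status
select client_name, status from dba_autotask_client;

-- switch one of them off or back on, e.g. the optimizer statistics job
begin
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
end;
/

begin
  dbms_auto_task_admin.enable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
end;
/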

Oracle's DBWn process writes dirty blocks according to the LRUW chain, and the LRBA of each dirty block is recorded on the checkpoint queue. When a checkpoint occurs, DBWn is signaled to write the blocks on the checkpoint queue.

1. DBWn writes data blocks from the buffer cache to the data files; once written, the blocks are removed from the LRUW chain. DBWn selects dirty blocks either from the LRUW chain or, when a checkpoint drives the write, in the order the blocks appear on the checkpoint chain.
2. When a checkpoint occurs, DBWn is signaled to write the dirty blocks on the checkpoint queue (checkpoint chain). Checkpoints come in two kinds: full and incremental. A full checkpoint is triggered by ALTER SYSTEM CHECKPOINT or by a consistent shutdown, and all dirty blocks on the checkpoint chain are written to the data files. What does an incremental checkpoint do? It writes the LRBA of the first block on the checkpoint chain to the control file; that LRBA points to the redo record of the first modification ever made to that block. At the same time, a number of blocks starting from the head of the checkpoint chain are written to the data files. Blocks on the checkpoint chain are ordered strictly by the time of their first modification; if a block is modified several times, it appears on the chain only once, at the position of its first modification, with the later changes accumulated in that same block (not re-queued at each modification, as you suggested). The actual writing is done by DBWn (the checkpoint triggers DBWn), and before DBWn writes a dirty block, LGWR must first write the corresponding redo to the redo log; in other words, DBWn triggers LGWR. That is how Oracle works: write the log before the data, because it is safe as long as the redo is on disk.
3. What is the LRBA for? Suppose an incremental checkpoint occurs: the LRBA is recorded in the control file and DBWn is triggered to write, say, five blocks from the head of the checkpoint chain. LGWR first writes the redo for those five blocks to the redo log. If the database crashes at that moment (DBWn has not yet written the five dirty blocks to the data files), everything in the buffer cache is lost. At the next startup, SMON checks the end SCN of each data file; if it is null, the database was not shut down cleanly (see the principles of instance recovery for details), so instance recovery is performed. Recovery needs a starting point, and that starting point is the recorded LRBA: redo is applied from the LRBA onward to rebuild those five blocks in the buffer cache. That is what the LRBA is for. There are many more details to instance recovery; the relevant books cover them, and a small query sketch for watching checkpoint SCNs follows.
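To observe checkpoint progress yourself, one rough way (a sketch, not from the original text) is to compare the checkpoint SCNs recorded in the control file and the data file headers, and the MTTR figures maintained by incremental checkpointing:

-- checkpoint SCN recorded in the control file
select checkpoint_change# from v$database;

-- checkpoint SCN recorded in each data file header
select file#, checkpoint_change# from v$datafile_header;

-- recovery time estimates driven by incremental checkpointing
select target_mttr, estimated_mttr from v$instance_recovery;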

Hope this helps.
