APScheduler is a Python task scheduling framework modeled on Quartz. It covers most of Quartz's functionality and is very convenient to use: it supports date-based, fixed-interval, and crontab-style jobs, and it can persist jobs. With these features we can easily build a Python scheduled task system, and writing Python is much more comfortable than writing Java.
Installation is simple; you can use easy_install or install from source.
[Plain]
easy_install apscheduler
Or download the source code and run the following command:
[Plain]
python setup.py install
APScheduler is an in-process scheduler: it triggers specific functions on a schedule and has access to all of the application's variables and functions. This makes it very convenient for implementing scheduled tasks in web applications. Here is an example:
[Python]
import datetime
from apscheduler.scheduler import Scheduler

sched = Scheduler(daemonic=False)

# Fire every second, Monday to Friday, during the 9-12 and 13-15 hours.
@sched.cron_schedule(second='*', day_of_week='0-4', hour='9-12,13-15')
def quote_send_sh_job():
    print 'a simple cron job start at', datetime.datetime.now()

sched.start()
The cron job above is defined with a decorator; it can also be added with the scheduler's add_cron_job method, but the decorator is more convenient. The daemonic=False argument passed to the Scheduler constructor makes the execution thread non-daemonic. The scheduler's documentation recommends non-daemonic threads:
[Plain]
Jobs are always executed in non-daemonic threads.
For the specific cron job configuration options, see the documentation; they are basically the same as Quartz's.
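Besides cron jobs, the date and interval triggers mentioned at the beginning are used in the same way. Below is a minimal sketch, assuming the same APScheduler 2.x API (the interval_schedule decorator and the add_date_job method); the tick and report functions are only illustrative.
[Python]
from datetime import datetime, timedelta
from apscheduler.scheduler import Scheduler

sched = Scheduler(daemonic=False)

# Interval trigger: run every 30 seconds.
@sched.interval_schedule(seconds=30)
def tick():
    print 'tick at', datetime.now()

def report():
    print 'runs exactly once'

# Date trigger: run a single time, two minutes from now.
sched.add_date_job(report, datetime.now() + timedelta(minutes=2))

sched.start()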
When adding a job there is another important parameter, max_instances, which sets how many instances of the same job may run concurrently. The default is 1, so if a job comes due while its previous run has not finished, the new run fails to start. You can change this behavior with max_instances, as in the sketch below.
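As a sketch of both points, still assuming the APScheduler 2.x API (the long_job function and the field values are only illustrative), a job can be registered with add_cron_job instead of the decorator and allowed to run two instances concurrently:
[Python]
from apscheduler.scheduler import Scheduler

sched = Scheduler(daemonic=False)

def long_job():
    pass    # stands in for work that may outlast its own interval

# add_cron_job takes the same cron fields as the decorator;
# max_instances=2 lets a new run start even if the previous
# run has not finished yet (the default is 1).
sched.add_cron_job(long_job, day_of_week='0-4', hour='9-15', max_instances=2)

sched.start()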
APScheduler provides job stores for persisting job information. RAMJobStore is used by default; SQLAlchemyJobStore, ShelveJobStore, and MongoDBJobStore are also provided. Multiple job stores can be used at the same time, distinguished by aliases. When adding a job, you can specify the alias of the job store it should go to; otherwise it goes to the default job store, i.e. RAMJobStore. The following uses MongoDBJobStore as an example.
[Python]
import time
import pymongo
from apscheduler.scheduler import Scheduler
from apscheduler.jobstores.mongodb_store import MongoDBJobStore

sched = Scheduler(daemonic=False)

mongo = pymongo.Connection(host='127.0.0.1', port=27017)
store = MongoDBJobStore(connection=mongo)
sched.add_jobstore(store, 'mongo')    # the alias is 'mongo'

# Add the job to the job store whose alias is 'mongo'.
@sched.cron_schedule(second='*', day_of_week='0-4', hour='9-12,13-15', jobstore='mongo')
def job():
    print 'a job'
    time.sleep(1)

sched.start()
Note that start() must be called after the jobs have been added; otherwise an error is thrown. With MongoDBJobStore, the job information is stored by default in the jobs collection of the apscheduler database:
[Plain]
> db.jobs.findOne()
{
    "_id" : ObjectId("502202d1443c1557fa8b8d66"),
    "runs" : 20,
    "name" : "job",
    "misfire_grace_time" : 1,
    "coalesce" : true,
    "args" : BinData(0, "gAJdcQEu"),
    "next_run_time" : ISODate("2012-08-08T14:10:46Z"),
    "max_instances" : 1,
    "max_runs" : null,
    "trigger" : BinData(0, "triggers="),
    "func_ref" : "__main__:job",
    "kwargs" : BinData(0, "gAJ9cQEu")
}
The above is the specific information stored.
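The other job stores mentioned earlier work the same way. As a minimal sketch, assuming the same APScheduler 2.x API, here is SQLAlchemyJobStore persisting jobs to a local SQLite file instead of MongoDB; the sqlite:///jobs.sqlite URL, the 'sql' alias, and the heartbeat function are just illustrative choices.
[Python]
from apscheduler.scheduler import Scheduler
from apscheduler.jobstores.sqlalchemy_store import SQLAlchemyJobStore

sched = Scheduler(daemonic=False)

# Persist jobs to SQLite instead of MongoDB; the alias ('sql')
# is only a label referenced when adding jobs.
sched.add_jobstore(SQLAlchemyJobStore(url='sqlite:///jobs.sqlite'), 'sql')

@sched.cron_schedule(minute='*/5', jobstore='sql')
def heartbeat():
    print 'still alive'

sched.start()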
By chosen0ne