Build Your Own Dynamic Job Scheduler Based on Quartz.NET



 

  In daily development, running scheduled tasks is a very common requirement. It can be done with a Windows service plus a timer component, or with a third-party framework; Quartz.NET, ported from Java's Quartz, is a solid job scheduling component. However, once all the jobs are written and deployed, managing them becomes troublesome, so I built a simple wrapper on top of Quartz.NET to support dynamic job management.

Dynamic job management is built around a few core ideas, explained below.

 

Basic usage of Quartz.NET is not explained here; there is plenty of material about it online.

 

Quartz.NET has three core concepts: Job, Trigger, and Scheduler.

A Job is the unit of work to execute. A Trigger is the execution policy for a Job (for example, when and how often it should run). The Scheduler loads Jobs and Triggers and runs them.

Jobs and Triggers can be registered with the Scheduler freely at runtime.
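
For readers who have never touched Quartz.NET, here is a minimal sketch of how the three concepts fit together, using the synchronous Quartz 2.x style API that the rest of the code in this article also assumes (HelloJob is a hypothetical job used only for illustration):

using System;
using Quartz;
using Quartz.Impl;

// A hypothetical job used only to illustrate the Job/Trigger/Scheduler relationship.
public class HelloJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Hello from Quartz.NET at " + DateTime.Now);
    }
}

public class QuartzQuickStart
{
    public static void Main()
    {
        // The Scheduler is what loads Jobs and Triggers.
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        // The Job describes what to run.
        IJobDetail job = JobBuilder.Create<HelloJob>()
            .WithIdentity("hello", "demo")
            .Build();

        // The Trigger describes when to run it.
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("helloTrigger", "demo")
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInSeconds(5).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
        Console.ReadKey();
    }
}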

Next, we will explain the implementation ideas.

  

First, define a class library that contains a single class, BaseJob, which exposes a single abstract Run() method.

After that, every job we implement inherits from this class and overrides Run(). Each job lives in its own independent class library and references only this shared one-class library; a sample job is shown after the BaseJob definition below.

  

public abstract class BaseJob : MarshalByRefObject, IDisposable
{
    // Every concrete job implements its work here.
    public abstract void Run();

    // Jobs can override this to release resources when they are unloaded.
    public virtual void Dispose() { }
}
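
For example, a job class library might look like the following minimal sketch (DemoJob and the Demo.Jobs namespace are hypothetical names used only for illustration). The compiled assembly is what will later be zipped and uploaded to the server.

using System;

namespace Demo.Jobs
{
    // A hypothetical job: it inherits BaseJob from the shared class library
    // and puts all of its work inside Run().
    public class DemoJob : BaseJob
    {
        public override void Run()
        {
            Console.WriteLine("DemoJob executed at " + DateTime.Now);
        }
    }
}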

Next, create the core job management class library, Job.Service, and install Quartz.NET into it via NuGet.

Create JobImplement.cs, which implements Quartz.NET's IJob interface.

In this way, when Quartz.NET fires a trigger, JobImplement looks up the dynamically loaded job in the job scheduling container we write and calls its Run() method, which gives us dynamic scheduling (how jobs get loaded into the container is explained later in the article).

JobRuntimeInfo is a class we define ourselves; it holds the BaseJob instance, the AppDomain it runs in, and the JobInfo metadata.

JobInfo is the basic information that must be filled in when a job is uploaded to the dynamic scheduling framework.

    

  

public class JobImplement : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        try
        {
            long jobId = context.JobDetail.JobDataMap.GetLong("JobId");
            // Look the job up in the job scheduling container; if it is there, run it.
            var jobRuntimeInfo = JobPoolManager.Instance.Get(jobId);
            try
            {
                jobRuntimeInfo.Job.Run();
            }
            catch (Exception ex)
            {
                // The job call failed: mark its state as stopped and write a log entry.
                ConnectionFactory.GetInstance<Provider.JobStateRepository>().Update(new Provider.Tables.JobState()
                {
                    JobId = jobId,
                    RunState = (int)Provider.DirectiveType.Stop,
                    UpdateTime = DateTime.Now
                });
                Common.Logging.LogManager.GetLogger(this.GetType()).Error(ex.Message, ex);
            }
        }
        catch (Exception ex)
        {
            // System-level (serious) error while dispatching the job: write a log entry.
            Common.Logging.LogManager.GetLogger(this.GetType()).Error(ex.Message, ex);
        }
    }
}

 

JobRuntimeInfo

  

public class JobRuntimeInfo
{
    public AppDomain AppDomain;
    public BaseJob Job { get; set; }
    public JobInfo JobModel { get; set; }
}

JobInfo

public class JobInfo
{
    public long JobId { get; set; }
    public string JobName { get; set; }
    public string Group { get; set; }       // used as the Quartz job/trigger group in JobPoolManager
    public string TaskCron { get; set; }
    public string Namespace { get; set; }
    public string MainDllName { get; set; }
    public string Remark { get; set; }
    public string ZipFileName { get; set; }
    public string Version { get; set; }
    public DateTime? CreateTime { get; set; }
}

 

Next, we will explain how a job is executed.

1. On an upload page, package the job class library as a zip (or rar) file and upload it to the server; then fill in the job's running information and save it to the database.

2. After the upload is complete, publish a broadcast message to all job scheduling frameworks (a minimal broadcast sketch appears right after this list).

3. Each job scheduling framework receives the broadcast, reads the JobInfo from the database, automatically decompresses the package according to the information entered at upload time (see the JobInfo properties above), and loads the assemblies into an AppDomain.

public class AppDomainLoader
{
    /// <summary>
    /// Load the assembly into its own AppDomain and return the job entry instance.
    /// </summary>
    /// <param name="dllPath">Full path to the job's main assembly.</param>
    /// <param name="classPath">Fully qualified name of the class that inherits BaseJob.</param>
    /// <param name="appDomain">The newly created AppDomain.</param>
    /// <returns>The BaseJob instance created inside the new AppDomain.</returns>
    public static BaseJob Load(string dllPath, string classPath, out AppDomain appDomain)
    {
        AppDomainSetup setup = new AppDomainSetup();
        if (System.IO.File.Exists($"{dllPath}.config"))
            setup.ConfigurationFile = $"{dllPath}.config";
        setup.ShadowCopyFiles = "true";
        setup.ApplicationBase = System.IO.Path.GetDirectoryName(dllPath);
        appDomain = AppDomain.CreateDomain(System.IO.Path.GetFileName(dllPath), null, setup);
        AppDomain.MonitoringIsEnabled = true;
        BaseJob obj = (BaseJob)appDomain.CreateInstanceFromAndUnwrap(dllPath, classPath);
        return obj;
    }

    /// <summary>
    /// Unload the AppDomain.
    /// </summary>
    /// <param name="appDomain">The AppDomain to unload.</param>
    public static void UnLoad(AppDomain appDomain)
    {
        AppDomain.Unload(appDomain);
        appDomain = null;
    }
}

 

4. Because every job inherits BaseJob, the entry class inside the AppDomain is identified by JobInfo.Namespace. After instantiating it through reflection, we cast it to BaseJob, create a JobRuntimeInfo object, and add it to JobPoolManager, which maintains all running jobs.

5. A Trigger is created from JobInfo.TaskCron (the cron expression), a JobImplement job is created, and the JobId is put into the job's data map so that when JobImplement executes it can fetch the job's information from JobPoolManager and call JobRuntimeInfo => BaseJob => Run() to run the actual job. A sketch wiring steps 3-5 together follows the JobPoolManager code below.
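
First, the broadcast from step 2. The framework is not tied to any particular messaging technology; as one possible approach, the sketch below uses Redis pub/sub via StackExchange.Redis. The channel name, message format, and connection string are assumptions made purely for illustration.

using System;
using StackExchange.Redis;

public static class JobBroadcast
{
    // Assumed channel name; any pub/sub mechanism (Redis, RabbitMQ, ...) would work equally well.
    private const string Channel = "job-scheduler:directives";

    // Called by the upload/management site after a job package and its JobInfo have been saved.
    public static void PublishAdd(long jobId)
    {
        using (var redis = ConnectionMultiplexer.Connect("localhost"))
        {
            redis.GetSubscriber().Publish(Channel, $"add:{jobId}");
        }
    }

    // Called once at startup by every job scheduling host.
    public static void Subscribe(Action<string, long> onDirective)
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        redis.GetSubscriber().Subscribe(Channel, (channel, message) =>
        {
            var parts = ((string)message).Split(':');
            onDirective(parts[0], long.Parse(parts[1]));
        });
    }
}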

  

public class JobPoolManager : IDisposable
{
    // All running jobs, keyed by JobId.
    private static ConcurrentDictionary<long, JobRuntimeInfo> JobRuntimePool =
        new ConcurrentDictionary<long, JobRuntimeInfo>();
    private static IScheduler _scheduler;
    private static JobPoolManager _jobPollManager;

    private JobPoolManager() { }

    static JobPoolManager()
    {
        _jobPollManager = new JobPoolManager();
        _scheduler = StdSchedulerFactory.GetDefaultScheduler();
        _scheduler.Start();
    }

    public static JobPoolManager Instance
    {
        get { return _jobPollManager; }
    }

    static object _lock = new object();

    // Register a loaded job and schedule it with Quartz.
    public bool Add(long jobId, JobRuntimeInfo jobRuntimeInfo)
    {
        lock (_lock)
        {
            if (!JobRuntimePool.ContainsKey(jobId))
            {
                if (JobRuntimePool.TryAdd(jobId, jobRuntimeInfo))
                {
                    IDictionary<string, object> data = new Dictionary<string, object>()
                    {
                        ["JobId"] = jobId
                    };
                    IJobDetail jobDetail = JobBuilder.Create<JobImplement>()
                        .WithIdentity(jobRuntimeInfo.JobModel.JobName, jobRuntimeInfo.JobModel.Group)
                        .SetJobData(new JobDataMap(data))
                        .Build();
                    var tiggerBuilder = TriggerBuilder.Create()
                        .WithIdentity(jobRuntimeInfo.JobModel.JobName, jobRuntimeInfo.JobModel.Group);
                    if (string.IsNullOrWhiteSpace(jobRuntimeInfo.JobModel.TaskCron))
                    {
                        // No cron expression: fall back to a simple one-second schedule.
                        tiggerBuilder = tiggerBuilder.WithSimpleSchedule((simple) =>
                        {
                            simple.WithInterval(TimeSpan.FromSeconds(1));
                        });
                    }
                    else
                    {
                        tiggerBuilder = tiggerBuilder
                            .StartNow()
                            .WithCronSchedule(jobRuntimeInfo.JobModel.TaskCron);
                    }
                    var trigger = tiggerBuilder.Build();
                    _scheduler.ScheduleJob(jobDetail, trigger);
                    return true;
                }
            }
            return false;
        }
    }

    // Look up a running job by JobId (used by JobImplement).
    public JobRuntimeInfo Get(long jobId)
    {
        if (!JobRuntimePool.ContainsKey(jobId))
        {
            return null;
        }
        lock (_lock)
        {
            if (JobRuntimePool.ContainsKey(jobId))
            {
                JobRuntimeInfo jobRuntimeInfo = null;
                JobRuntimePool.TryGetValue(jobId, out jobRuntimeInfo);
                return jobRuntimeInfo;
            }
            return null;
        }
    }

    // Unschedule a job and remove it from the pool.
    public bool Remove(long jobId)
    {
        lock (_lock)
        {
            if (JobRuntimePool.ContainsKey(jobId))
            {
                JobRuntimeInfo jobRuntimeInfo = null;
                JobRuntimePool.TryGetValue(jobId, out jobRuntimeInfo);
                if (jobRuntimeInfo != null)
                {
                    var tiggerKey = new TriggerKey(jobRuntimeInfo.JobModel.JobName,
                        jobRuntimeInfo.JobModel.Group);
                    _scheduler.PauseTrigger(tiggerKey);
                    _scheduler.UnscheduleJob(tiggerKey);
                    _scheduler.DeleteJob(new JobKey(jobRuntimeInfo.JobModel.JobName,
                        jobRuntimeInfo.JobModel.Group));
                    JobRuntimePool.TryRemove(jobId, out jobRuntimeInfo);
                    return true;
                }
            }
            return false;
        }
    }

    // On shutdown, mark all running jobs as stopped and shut down the scheduler.
    public virtual void Dispose()
    {
        if (_scheduler != null && !_scheduler.IsShutdown)
        {
            foreach (var jobId in JobRuntimePool.Keys)
            {
                var jobState = ConnectionFactory.GetInstance<Job.Provider.JobStateRepository>().Get(jobId);
                if (jobState != null)
                {
                    jobState.RunState = (int)DirectiveType.Stop;
                    jobState.UpdateTime = DateTime.Now;
                    ConnectionFactory.GetInstance<Job.Provider.JobStateRepository>().Update(jobState);
                }
            }
            _scheduler.Shutdown();
        }
    }
}
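
To make steps 3-5 concrete, here is a rough sketch of what a scheduling host might do when an "add job" broadcast arrives. The folder paths, the JobHost class, and its OnAddJob method are assumptions for illustration only; AppDomainLoader, JobRuntimeInfo, JobInfo, and JobPoolManager are the classes shown above.

using System;
using System.IO;
using System.IO.Compression;

public class JobHost
{
    // Hypothetical handler invoked when an "add job" broadcast arrives for a given JobInfo.
    public void OnAddJob(JobInfo jobInfo)
    {
        // Step 3: unzip the uploaded package into a per-job, per-version folder (paths are assumptions).
        string zipPath = Path.Combine(@"D:\JobPackages", jobInfo.ZipFileName);
        string jobDir = Path.Combine(@"D:\Jobs", jobInfo.JobName, jobInfo.Version);
        if (!Directory.Exists(jobDir))
        {
            ZipFile.ExtractToDirectory(zipPath, jobDir);
        }

        // Step 4: load the main assembly into its own AppDomain and cast the entry class to BaseJob.
        string dllPath = Path.Combine(jobDir, jobInfo.MainDllName);
        AppDomain appDomain;
        BaseJob job = AppDomainLoader.Load(dllPath, jobInfo.Namespace, out appDomain);

        // Step 5: register the running job so JobImplement can find it by JobId;
        // JobPoolManager builds the Quartz trigger from JobInfo.TaskCron.
        var runtimeInfo = new JobRuntimeInfo
        {
            AppDomain = appDomain,
            Job = job,
            JobModel = jobInfo
        };
        JobPoolManager.Instance.Add(jobInfo.JobId, runtimeInfo);
    }
}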

 

  

In addition to the web upload page, we can build a job list page that supports Start, Stop, Restart, and so on. The idea is the same: publish a broadcast to all job scheduling frameworks, and each one loads, starts, stops, or unloads jobs according to the broadcast message it receives, as sketched below.
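
As a rough sketch of how a scheduling host might dispatch those broadcast directives (the DirectiveHandler class, the Handle method, and the directive strings are hypothetical; Get and Remove come from JobPoolManager and UnLoad from AppDomainLoader above):

public class DirectiveHandler
{
    // Hypothetical dispatch of a broadcast directive to the local job pool.
    public void Handle(string directive, JobInfo jobInfo)
    {
        switch (directive)
        {
            case "start":
                // Load, register, and schedule the job (see the OnAddJob sketch above).
                break;
            case "stop":
            {
                // Unschedule the job and unload its AppDomain.
                var runtimeInfo = JobPoolManager.Instance.Get(jobInfo.JobId);
                if (runtimeInfo != null)
                {
                    JobPoolManager.Instance.Remove(jobInfo.JobId);
                    AppDomainLoader.UnLoad(runtimeInfo.AppDomain);
                }
                break;
            }
            case "restart":
                // Stop and start again, which also picks up a newly uploaded package version.
                Handle("stop", jobInfo);
                Handle("start", jobInfo);
                break;
        }
    }
}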

With that, a basic dynamic job scheduling framework is complete.

 

 

 

  

 
