Quartz.net Distributed Application



Quartz.net cluster deployment in detail


Tags: Quartz.net, Job



Recently I needed to use scheduled jobs at work. The company's job framework didn't quite suit my needs, so I wanted to build a job station of my own as practice. After looking around online I found Quartz, so I studied it a bit.


First version


I am currently using ASP.NET Core, developing under Core 2.0.
For the first version I simply wrote a scheduler wrapper myself.


public static class SchedulerManage
{
    private static IScheduler _scheduler = null;

    private static object obj = new object();

    public static IScheduler Scheduler
    {
        get
        {
            var scheduler = _scheduler;
            if (scheduler == null)
            {
                // _scheduler may have been set by another thread before we take the lock
                lock (obj)
                {
                    // Read the latest value from memory so we see the newest _scheduler
                    scheduler = Volatile.Read(ref _scheduler);
                    if (scheduler == null)
                    {
                        scheduler = GetScheduler().Result;
                        Volatile.Write(ref _scheduler, scheduler);
                    }
                }
            }
            return scheduler;
        }
    }

    public static async Task<BaseResponse> RunJob(IJobDetail job, ITrigger trigger)
    {
        var response = new BaseResponse();
        try
        {
            var isExist = await Scheduler.CheckExists(job.Key);
            var time = DateTimeOffset.Now;
            if (isExist)
            {
                // Resume the existing job
                await Scheduler.ResumeJob(job.Key);
            }
            else
            {
                time = await Scheduler.ScheduleJob(job, trigger);
            }
            response.IsSuccess = true;
            response.Msg = time.ToString("yyyy-MM-dd HH:mm:ss");
        }
        catch (Exception ex)
        {
            response.Msg = ex.Message;
        }
        return response;
    }

    public static async Task<BaseResponse> StopJob(JobKey jobKey)
    {
        var response = new BaseResponse();
        try
        {
            var isExist = await Scheduler.CheckExists(jobKey);
            if (isExist)
            {
                await Scheduler.PauseJob(jobKey);
            }
            response.IsSuccess = true;
            response.Msg = "Pause success!!";
        }
        catch (Exception ex)
        {
            response.Msg = ex.Message;
        }
        return response;
    }

    public static async Task<BaseResponse> DelJob(JobKey jobKey)
    {
        var response = new BaseResponse();
        try
        {
            var isExist = await Scheduler.CheckExists(jobKey);
            if (isExist)
            {
                response.IsSuccess = await Scheduler.DeleteJob(jobKey);
            }
        }
        catch (Exception ex)
        {
            response.IsSuccess = false;
            response.Msg = ex.Message;
        }
        return response;
    }

    private static async Task<IScheduler> GetScheduler()
    {
        NameValueCollection props = new NameValueCollection {
            { "quartz.serializer.type", "binary" }
        };
        StdSchedulerFactory factory = new StdSchedulerFactory(props);
        var scheduler = await factory.GetScheduler();
        await scheduler.Start();
        return scheduler;
    }
}


A simple implementation: dynamically run, pause, and add jobs. After finishing it, everything seemed fine; as long as I stored the running job information in a table, all looked OK.
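For illustration, here is a minimal usage sketch of the wrapper above. The MyJob class and the cron expression are hypothetical, not from the original code:

// Hypothetical job class, for illustration only.
public class MyJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine($"MyJob fired at {DateTime.Now:HH:mm:ss}");
        return Task.CompletedTask;
    }
}

// Build a job and a cron trigger, then hand them to the wrapper above.
public static async Task Demo()
{
    var job = JobBuilder.Create<MyJob>()
        .WithIdentity("myJob", "demoGroup")
        .Build();
    var trigger = TriggerBuilder.Create()
        .WithIdentity("myJobTrigger", "demoGroup")
        .WithCronSchedule("0/10 * * * * ?") // fire every 10 seconds
        .ForJob(job)
        .Build();
    var result = await SchedulerManage.RunJob(job, trigger);
    Console.WriteLine(result.Msg); // schedule time on success, error message otherwise
}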



When it was time to release, I suddenly realized there was more than one actual machine, all sitting behind an Nginx reverse proxy. The following problems surfaced:



1. With more than one machine, the same job is likely to run on several machines at once.
2. Deployment requires stopping the machines; how do the running jobs resume automatically after a machine is restarted?
3. How can all jobs be spread across the machines in a balanced way?


My thoughts at the time

1. The first problem: since everything goes through the Nginx reverse proxy, an add-job or run-job request can only land on one server, so this is basically fine. As long as the RunJob interface is controlled properly, and the job's row in the jobdetail table is flipped to the running state when it starts, no job is executed on more than one machine at the same time (see the sketch after this list).
2. With the first problem solved, and because our company's Nginx reverse proxy uses a balanced strategy, the jobs end up spread across the servers reasonably evenly.
3. Now for the main point!!!
How do I recover the running jobs at deployment time?
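For problem 1, the guard can be a conditional update: flip the job to "running" only if it is not already running, and treat zero affected rows as "another node got there first". A sketch under stated assumptions; the SchedulerJob table, its JobState column, and the use of Dapper are all hypothetical here, not from the original post:

using System.Data.SqlClient;
using Dapper;

public static class JobRunGuard
{
    // Returns true only for the first caller that flips the job to running (1).
    // The WHERE clause makes the check-and-set a single atomic statement,
    // so two servers cannot both "win" for the same job.
    public static bool TryMarkRunning(string connectionString, int jobId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            var affected = conn.Execute(
                "UPDATE SchedulerJob SET JobState = 1 WHERE JobId = @JobId AND JobState <> 1",
                new { JobId = jobId });
            return affected == 1;
        }
    }
}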


We already have a jobdetail table, and the running jobs can be read from it. So let's read them back out of it and simply run them again when the program starts.



Here is my implementation:


// HostedService: a service that runs while the host is running
public class HostedService : IHostedService
{
    public HostedService(ISchedulerJob schedulerCenter)
    {
        _schedulerJob = schedulerCenter;
    }

    private ISchedulerJob _schedulerJob = null;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        LogHelper.WriteLog("Open Hosted, Env: " + Kernel.Environment);
        var redis = new RedisOperation();
        // Only the node that wins the distributed lock restarts the jobs
        if (redis.SetNx("RedisJobLock", "1"))
        {
            await _schedulerJob.StartAllRuningJob();
        }
        redis.Expire("RedisJobLock", 300);
    }

    public async Task StopAsync(CancellationToken cancellationToken)
    {
        LogHelper.WriteLog("End Hosted");
        var redis = new RedisOperation();
        if (redis.RedisExists("RedisJobLock"))
        {
            var count = redis.DelKey("RedisJobLock");
            LogHelper.WriteLog("Delete Redis key RedisJobLock, result: " + count);
        }
    }
}

// Injection attribute
[ServiceDescriptor(typeof(ISchedulerJob), ServiceLifetime.Transient)]
public class SchedulerCenter : ISchedulerJob
{
    public SchedulerCenter(ISchedulerJobFacade schedulerJobFacade)
    {
        _schedulerJobFacade = schedulerJobFacade;
    }

    private ISchedulerJobFacade _schedulerJobFacade = null;

    public async Task<BaseResponse> DelJob(SchedulerJobModel jobModel)
    {
        var response = new BaseResponse();
        if (jobModel != null && jobModel.JobId != 0 && jobModel.JobName != null)
        {
            response = await _schedulerJobFacade.Modify(new SchedulerJobModifyRequest() { JobId = jobModel.JobId, DataFlag = 0 });
            if (response.IsSuccess)
            {
                response = await SchedulerManage.DelJob(GetJobKey(jobModel));
                if (!response.IsSuccess)
                {
                    // Roll the flag back if the scheduler failed to delete the job
                    response = await _schedulerJobFacade.Modify(new SchedulerJobModifyRequest() { JobId = jobModel.JobId, DataFlag = 1 });
                }
            }
        }
        else
        {
            response.Msg = "The request parameter is incorrect";
        }
        return response;
    }

    public async Task<BaseResponse> RunJob(SchedulerJobModel jobModel)
    {
        if (jobModel != null)
        {
            var jobKey = GetJobKey(jobModel);

            var triggerBuilder = TriggerBuilder.Create().WithIdentity(jobModel.JobName + "Trigger", jobModel.JobGroup).WithCronSchedule(jobModel.JobCron).StartAt(jobModel.JobStartTime);
            // Only set an end time when a real one was supplied
            // (the original compared with == against new DateTime(1, 1, 1), which looks like a typo)
            if (jobModel.JobEndTime != null && jobModel.JobEndTime != new DateTime(1900, 1, 1) && jobModel.JobEndTime != new DateTime(1, 1, 1))
            {
                triggerBuilder.EndAt(jobModel.JobEndTime);
            }
            triggerBuilder.ForJob(jobKey);
            var trigger = triggerBuilder.Build();
            var data = new JobDataMap();
            // Job data redacted in the original post
            data.Add("***", "***");
            var job = JobBuilder.Create<SchedulerJob>().WithIdentity(jobKey).SetJobData(data).Build();
            var result = await SchedulerManage.RunJob(job, trigger);
            if (result.IsSuccess)
            {
                var response = await _schedulerJobFacade.Modify(new SchedulerJobModifyRequest() { JobId = jobModel.JobId, JobState = 1 });
                if (!response.IsSuccess)
                {
                    // Could not persist the state, so stop the job again
                    await SchedulerManage.StopJob(jobKey);
                }
                return response;
            }
            else
            {
                return result;
            }
        }
        else
        {
            return new BaseResponse() { Msg = "Job name is empty!!" };
        }
    }

    public async Task<BaseResponse> StopJob(SchedulerJobModel jobModel)
    {
        var response = new BaseResponse();
        if (jobModel != null && jobModel.JobId != 0 && jobModel.JobName != null)
        {
            response = await _schedulerJobFacade.Modify(new SchedulerJobModifyRequest() { JobId = jobModel.JobId, JobState = 2 });
            if (response.IsSuccess)
            {
                response = await SchedulerManage.StopJob(GetJobKey(jobModel));
                if (!response.IsSuccess)
                {
                    // Roll the state back to running if pausing failed
                    // (the original wrote JobState = 2 here again, which looks like a copy-paste slip)
                    response = await _schedulerJobFacade.Modify(new SchedulerJobModifyRequest() { JobId = jobModel.JobId, JobState = 1 });
                }
            }
        }
        else
        {
            response.Msg = "The request parameter is incorrect";
        }
        return response;
    }

    private JobKey GetJobKey(SchedulerJobModel jobModel)
    {
        return new JobKey($"{jobModel.JobId}_{jobModel.JobName}", jobModel.JobGroup);
    }

    public async Task<BaseResponse> StartAllRuningJob()
    {
        try
        {
            var jobListResponse = await _schedulerJobFacade.QueryList(new SchedulerJobListRequest() { DataFlag = 1, JobState = 1, Environment = Kernel.Environment.ToLower() });
            if (!jobListResponse.IsSuccess)
            {
                return jobListResponse;
            }
            var jobList = jobListResponse.Models;
            foreach (var job in jobList)
            {
                await RunJob(job);
            }

            return new BaseResponse() { IsSuccess = true, Msg = "Start all running jobs successfully when the program starts!!" };
        }
        catch (Exception ex)
        {
            LogHelper.WriteExceptionLog(ex);
            return new BaseResponse() { IsSuccess = false, Msg = "Starting all running jobs failed when the program starts!!" };
        }
    }
}


When the program starts, it runs all the "running" jobs again, and a Redis distributed lock guards against multiple nodes doing this at once: whichever node starts first takes the lock so the others skip the restart, and the lock is released in StopAsync during shutdown!! It felt fine, except that the jobs all land on the single server that won the lock, so the load-balancing effect is barely achieved. Of course, high availability is the first thing sacrificed.
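For reference, a minimal sketch of wiring the hosted service up at startup, assuming ASP.NET Core 2.0's Startup.ConfigureServices (on 2.1+, services.AddHostedService<HostedService>() does the same thing):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public void ConfigureServices(IServiceCollection services)
{
    // Register the startup/shutdown hook; the host calls StartAsync/StopAsync.
    services.AddSingleton<IHostedService, HostedService>();
    // ... register ISchedulerJob / ISchedulerJobFacade etc. as in the attributes above
}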



Then the next pit appeared.



As you know, in a slightly larger company, operations and development are separated. The company deploys with Docker, and when the program is stopped, the HostedService's StopAsync method is never called!!
At that moment ten thousand alpacas stampeded through my heart!!
I was too lazy to hash this out with ops, so the final solution was: set an expiration time on the Redis distributed lock, roughly estimated from the duration of one deployment. As long as the lock outlives the deployment it works, and the interval between two deployments just has to be greater than the lock's expiration time. Such a hassle; the more I say, the more the tears flow!!
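One note on the lock itself: the two-step SetNx followed by Expire in the code above can leak a never-expiring lock if the process dies between the two calls. A sketch of acquiring the lock and its expiry atomically (SET key value NX EX), assuming StackExchange.Redis; the RedisOperation wrapper above is the author's own class, so adapt as needed:

using System;
using StackExchange.Redis;

public static class JobStartupLock
{
    // Acquire the startup lock and its TTL in one atomic command,
    // so a crash can never leave a lock without an expiry.
    public static bool TryAcquire(IDatabase db, TimeSpan ttl)
    {
        // When.NotExists turns this into SET ... NX; the expiry rides along atomically.
        return db.StringSet("RedisJobLock", Environment.MachineName, ttl, When.NotExists);
    }
}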


Quartz.net distributed cluster: scheduler configuration
public async Task<IScheduler> GetScheduler()
{
    var properties = new NameValueCollection();

    properties["quartz.serializer.type"] = "binary";

    // Storage type
    properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
    // Table prefix
    properties["quartz.jobStore.tablePrefix"] = "QRTZ_";
    // Driver delegate type
    properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
    // Data source name
    properties["quartz.jobStore.dataSource"] = "SchedulJob";
    // Connection string: Data Source=myServerAddress;Initial Catalog=myDataBase;User Id=myUsername;Password=myPassword;
    properties["quartz.dataSource.SchedulJob.connectionString"] = "Data Source=.;Initial Catalog=SchedulJob;User ID=sa;Password=Ld309402556;";
    // SQL Server provider (the version-specific SqlServer-20 / SqlServer-21 providers do not exist under Core)
    properties["quartz.dataSource.SchedulJob.provider"] = "SqlServer";
    // Cluster flag: set to true in cluster mode
    properties["quartz.jobStore.clustered"] = "true";
    properties["quartz.scheduler.instanceName"] = "TestScheduler";
    // Set to AUTO in cluster mode so each instance gets its own ID;
    // the IDs must differ across the cluster, or automatic recovery will not work
    properties["quartz.scheduler.instanceId"] = "AUTO";
    properties["quartz.threadPool.type"] = "Quartz.Simpl.SimpleThreadPool, Quartz";
    properties["quartz.threadPool.threadCount"] = "25";
    properties["quartz.threadPool.threadPriority"] = "Normal";
    properties["quartz.jobStore.misfireThreshold"] = "60000";
    properties["quartz.jobStore.useProperties"] = "false";
    ISchedulerFactory factory = new StdSchedulerFactory(properties);
    return await factory.GetScheduler();
}

Then the test code:


public async Task TestJob()
{
    var sched = await GetScheduler();
    //Console.WriteLine("***** Deleting existing jobs/triggers *****");
    //sched.Clear();

    Console.WriteLine("------- Initialization Complete -----------");

    Console.WriteLine("------- Scheduling Jobs ------------------");

    string schedId = sched.SchedulerName; //sched.SchedulerInstanceId;

    int count = 1;

    IJobDetail job = JobBuilder.Create<SimpleRecoveryJob>()
        .WithIdentity("job_" + count, schedId) // put triggers in a group named after the cluster node instance, just to distinguish (in logging) what was scheduled from where
        .RequestRecovery() // ask scheduler to re-execute this job if it was in progress when the scheduler went down...
        .Build();

    ISimpleTrigger trigger = (ISimpleTrigger)TriggerBuilder.Create()
                                                  .WithIdentity("trigger_" + count, schedId)
                                                  .StartAt(DateBuilder.FutureDate(1, IntervalUnit.Second))
                                                  .WithSimpleSchedule(x => x.WithRepeatCount(1000).WithInterval(TimeSpan.FromSeconds(5)))
                                                  .Build();
    Console.WriteLine("{0} will run at: {1} and repeat: {2} times, every {3} seconds", job.Key, trigger.GetNextFireTimeUtc(), trigger.RepeatCount, trigger.RepeatInterval.TotalSeconds);
    await sched.ScheduleJob(job, trigger);

    count++;

    job = JobBuilder.Create<SimpleRecoveryJob>()
        .WithIdentity("job_" + count, schedId)
        .RequestRecovery()
        .Build();

    trigger = (ISimpleTrigger)TriggerBuilder.Create()
                                   .WithIdentity("trigger_" + count, schedId)
                                   .StartAt(DateBuilder.FutureDate(2, IntervalUnit.Second))
                                   .WithSimpleSchedule(x => x.WithRepeatCount(1000).WithInterval(TimeSpan.FromSeconds(5)))
                                   .Build();

    Console.WriteLine("{0} will run at: {1} and repeat: {2} times, every {3} seconds", job.Key, trigger.GetNextFireTimeUtc(), trigger.RepeatCount, trigger.RepeatInterval.TotalSeconds);
    await sched.ScheduleJob(job, trigger);

    // jobs don't start firing until Start() has been called...
    Console.WriteLine("------- Starting Scheduler ---------------");
    await sched.Start();
    Console.WriteLine("------- Started Scheduler ----------------");

    Console.WriteLine("------- Waiting for one hour... ----------");
    await Task.Delay(TimeSpan.FromHours(1));

    Console.WriteLine("------- Shutting Down --------------------");
    await sched.Shutdown();
    Console.WriteLine("------- Shutdown Complete ----------------");
}


The test adds two jobs, each executing every 5 seconds.
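For completeness, a minimal sketch of the SimpleRecoveryJob referenced above; the name comes from the Quartz.NET cluster example, and the real example job does more than this:

using System;
using System.Threading.Tasks;
using Quartz;

// Minimal stand-in for the example job. RequestRecovery() on the JobDetail
// is what lets a surviving cluster node re-execute the job after a node dies
// mid-run; context.Recovering tells the job when that is happening.
public class SimpleRecoveryJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine($"{context.JobDetail.Key} executed at {DateTime.Now:HH:mm:ss}, recovering: {context.Recovering}");
        return Task.CompletedTask;
    }
}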


[Screenshots omitted in the source.]

As you can see in the screenshots: job1 and job2 never fire in duplicate, and when I stop the node running job2, job2 fails over and keeps running on the node that hosts job1.



In this way, distributed deployment is achieved. The Quartz.net database table-creation scripts are easy to find (they ship with the Quartz.net distribution), and the table structure is easy to read through.



Screenshots of several of the database tables (basically this is the information that gets stored):

Job details (the QRTZ_JOB_DETAILS table)

Trigger data (the QRTZ_TRIGGERS table)

The scheduler instances (the QRTZ_SCHEDULER_STATE table)

The locks (the QRTZ_LOCKS table)

Up next:

1. Job descriptions: stateful jobs vs. stateless jobs.
2. Misfire handling.
3. An introduction to triggers and cron expressions.
4. Reworking the first part: implementing a HostedService-based class that can schedule jobs in a distributed way by itself. Once that is in place, the rest is no problem, and Quartz's row-level table locks can be dropped, because that kind of concurrency is relatively slow!!

Open question

I still have not managed to test out RequestRecovery. How is it supposed to be used!!


