Many different kinds of jobs are generated while Kylin runs. How are these jobs scheduled for execution? This article analyzes, at the source level, how Kylin executes multiple tasks concurrently, and how this multithreaded model can be applied in other projects.
Initialization Process

Kylin uses the Spring framework to provide its RESTful interface. The controller JobController implements the InitializingBean interface, which means that when the bean is initialized, Spring invokes the afterPropertiesSet method implemented by this class to perform initialization. This function first loads the Kylin configuration file (named kylin.properties): the loading code looks for the JVM environment variable KYLIN_CONF and, if it is not set, falls back to KYLIN_HOME and looks for the conf directory beneath it. After kylin.properties is loaded, the same directory is checked for a kylin.properties.override file; if it exists, its entries overwrite any configuration that kylin.properties also defines.

A Kylin server runs in one of three modes (configurable via kylin.server.mode, with all as the default): all, job, and query. Only the first two can execute tasks; in query mode, the Kylin server only provides metadata operations and SQL queries and cannot perform tasks such as building or merging cubes. So only in the first two modes does this function start a thread that creates the globally unique DefaultScheduler object and then calls that object's init method.

The init function first needs to acquire a lock from ZooKeeper. The lock is mutually exclusive: the same ZooKeeper path can be held by only one Kylin server instance. The lock path is determined jointly by HBase and ZooKeeper, which means different Kylin servers must use different HBase metadata tables. The ZooKeeper address (quorum and port) is taken from the kylin.storage.url configuration item, and the HBase table storing the Kylin server's metadata is determined from the kylin.metadata.url configuration item.
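The override mechanism described above can be sketched with plain java.util.Properties: entries from the override source replace same-named entries from the base file. This is an illustrative sketch, not Kylin's actual loading code, and the helper name `get` is made up for the example.

```java
// Sketch of kylin.properties + kylin.properties.override layering (illustrative,
// not Kylin's real loader). Strings stand in for the two files' contents.
public class ConfigOverride {
    // Look up a key after applying the override layer; when both sources
    // define the key, the override value wins.
    public static String get(String base, String override, String key) throws Exception {
        java.util.Properties props = new java.util.Properties();
        props.load(new java.io.StringReader(base));      // kylin.properties
        java.util.Properties over = new java.util.Properties();
        over.load(new java.io.StringReader(override));   // kylin.properties.override
        props.putAll(over);                              // override entries replace base entries
        return props.getProperty(key);
    }
}
```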
The default HBase metadata table is kylin_metadata; if the configuration item starts with hbase, the table name is taken from that configuration, otherwise the default table is used. The lock path is /kylin/job_engine/lock/<HBase table name>. After the lock is acquired successfully, the thread pools are initialized:
```java
// Get the manager object based on the configuration object
executableManager = ExecutableManager.getInstance(jobEngineConfig.getConfig());
// A thread pool of size 1 that periodically schedules the fetcher thread to
// check whether there are tasks that can be executed
fetcherPool = Executors.newScheduledThreadPool(1);
// The size of the pool that actually dispatches task execution; the default is
// 10, and the queue used has no upper bound
int corePoolSize = jobEngineConfig.getMaxConcurrentJobLimit();
jobPool = new ThreadPoolExecutor(corePoolSize, corePoolSize, Long.MAX_VALUE,
        TimeUnit.DAYS, new SynchronousQueue<Runnable>());
// All currently running tasks are kept in the context
context = new DefaultContext(Maps.<String, Executable> newConcurrentMap(),
        jobEngineConfig.getConfig());
// All jobs in READY state in the metastore are set to ERROR so they can be
// rescheduled on the next run
for (AbstractExecutable executable : executableManager.getAllExecutables()) {
    if (executable.getStatus() == ExecutableState.READY) {
        executableManager.updateJobOutput(executable.getId(), ExecutableState.ERROR,
                null, "scheduler initializing work to reset job to ERROR status");
    }
}
// All jobs in RUNNING state are likewise set to ERROR for rescheduling
executableManager.updateAllRunningJobsToError();
// When the process exits, destroy the two thread pools and release the lock
// held in ZooKeeper
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        logger.debug("Closing ZK connection");
        try {
            shutdown();
        } catch (SchedulerException e) {
            logger.error("error shutdown scheduler", e);
        }
    }
});
// The FetcherRunner thread periodically checks whether there are tasks that
// can be executed; the first run is delayed 10 seconds, then it runs every 60
fetcher = new FetcherRunner();
fetcherPool.scheduleAtFixedRate(fetcher, 10,
        ExecutableConstants.DEFAULT_SCHEDULER_INTERVAL_SECONDS, TimeUnit.SECONDS);
hasStarted = true;
```
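The two-pool structure above can be reproduced with nothing but java.util.concurrent. The sketch below is a simplification under stated assumptions: it uses a plain queue of Runnables instead of Kylin's metastore-backed job states, and it adds a CallerRunsPolicy (which the real code does not have) so that jobs rejected by the SynchronousQueue still run.

```java
import java.util.Queue;
import java.util.concurrent.*;

// A self-contained sketch (no Kylin classes) of the two-pool model: a
// single-threaded scheduled pool runs a fetcher at a fixed rate, and the
// fetcher hands runnable jobs to a fixed-size job pool.
public class TwoPoolScheduler {
    private final ScheduledExecutorService fetcherPool = Executors.newScheduledThreadPool(1);
    // Fixed-size pool with a SynchronousQueue, as in the Kylin code; the
    // CallerRunsPolicy is an addition for this sketch so overflow jobs run on
    // the fetcher thread instead of being rejected.
    private final ExecutorService jobPool =
            new ThreadPoolExecutor(2, 2, Long.MAX_VALUE, TimeUnit.DAYS,
                    new SynchronousQueue<Runnable>(), new ThreadPoolExecutor.CallerRunsPolicy());
    private final Queue<Runnable> readyJobs = new ConcurrentLinkedQueue<Runnable>();

    public void submit(Runnable job) {
        readyJobs.add(job);
    }

    public void start(long periodMillis) {
        // The fetcher drains the ready queue and dispatches each job
        fetcherPool.scheduleAtFixedRate(() -> {
            Runnable job;
            while ((job = readyJobs.poll()) != null) {
                jobPool.execute(job);
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() throws InterruptedException {
        fetcherPool.shutdown();
        jobPool.shutdown();
        jobPool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```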
JobRunner Threads
In Kylin, each job is in one of the following states: READY, RUNNING, ERROR, STOPPED, DISCARDED, SUCCEED. Only jobs in the READY state may be scheduled for execution. Each job is executed by its own thread; the thread's Runnable is a JobRunner, which is submitted to the jobPool thread pool for execution.
```java
private class JobRunner implements Runnable {

    private final AbstractExecutable executable;

    public JobRunner(AbstractExecutable executable) {
        this.executable = executable;
    }

    @Override
    public void run() {
        try {
            // Execute the job's handler function
            executable.execute(context);
            // Trigger the fetcher immediately after this job finishes,
            // instead of waiting up to 60 seconds for the next scheduled run
            fetcherPool.schedule(fetcher, 0, TimeUnit.SECONDS);
        } catch (ExecuteException e) {
            logger.error("ExecuteException job:" + executable.getId(), e);
        } catch (Exception e) {
            logger.error("unknown error execute job:" + executable.getId(), e);
        } finally {
            context.removeRunningJob(executable);
        }
    }
}
```
The JobRunner constructor takes an executable job. Jobs in Kylin inherit from the abstract class AbstractExecutable, which defines the abstract method protected abstract ExecuteResult doWork(ExecutableContext context) throws ExecuteException for subclasses to implement, rather than the execute function. The class itself implements the execute function as follows:
```java
public final ExecuteResult execute(ExecutableContext executableContext) throws ExecuteException {
    // print an eye-catching title in log
    LogTitlePrinter.printTitle(this.getName());
    Preconditions.checkArgument(executableContext instanceof DefaultContext);
    ExecuteResult result;
    try {
        onExecuteStart(executableContext);
        result = doWork(executableContext);
    } catch (Throwable e) {
        logger.error("error running Executable", e);
        onExecuteError(e, executableContext);
        throw new ExecuteException(e);
    }
    onExecuteFinished(result, executableContext);
    return result;
}
```
As you can see, the entry point for every task is the execute function, which performs the operations common to all tasks: it calls onExecuteStart before running the task, calls doWork to perform the task, calls onExecuteError when an exception occurs, and calls onExecuteFinished after the task completes without throwing. Subclasses can override these hook functions according to the needs of the specific job.

To recap the scheduling model: two thread pools are started. Pool A holds a single thread that periodically checks the task queue for executable tasks; when it finds one, it wraps the task in a new thread object and hands it to pool B for execution. When pool B schedules that thread, its run function executes the task's logic. Because the overall execution flow is the same for all tasks (start first, then doWork, then finish), the tasks all inherit from one abstract class: the run function calls that class's execute method, which implements the overall flow (start, doWork, finish, and so on), and the subclasses implement their own start, doWork, and finish functions to provide task-specific logic.
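This is the template-method pattern, and it can be distilled into a few lines. The sketch below mirrors the hook names from the Kylin code but is an illustrative simplification, not the real implementation; the trace string and the succeedWith factory exist only so the flow can be observed.

```java
// A stripped-down version of the template-method structure: the abstract base
// fixes the overall flow (start -> doWork -> finished/error) in a final
// execute(), and subclasses supply only doWork().
public abstract class SketchExecutable {
    private final StringBuilder trace = new StringBuilder();

    protected void onExecuteStart()            { trace.append("start;"); }
    protected void onExecuteFinished(String r) { trace.append("finished:").append(r).append(";"); }
    protected void onExecuteError(Throwable e) { trace.append("error;"); }

    // The only method a concrete job must implement
    protected abstract String doWork() throws Exception;

    public final String execute() throws Exception {
        onExecuteStart();
        String result;
        try {
            result = doWork();
        } catch (Throwable e) {
            onExecuteError(e);
            throw new Exception(e);
        }
        onExecuteFinished(result);
        return result;
    }

    public String trace() { return trace.toString(); }

    // Example subclass: a job whose doWork simply returns the given result
    public static SketchExecutable succeedWith(final String r) {
        return new SketchExecutable() {
            protected String doWork() { return r; }
        };
    }
}
```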
Kylin Job Description

The inheritance relationships of the AbstractExecutable class are as follows:
BaseTestExecutable we can ignore. DefaultChainedExecutable is a job that chains multiple jobs together: it links several jobs internally and executes them sequentially in the order they were added. Its subclasses include CubingJob and IIJob, used respectively to build an ordinary cube and to build an inverted index (the latter is not used for now).

HadoopShellExecutable is used to execute a MapReduce task as if it were submitted from a Hadoop shell; its parameters must include a MapReduce job class. It executes the MapReduce task synchronously, waiting for it to finish and then checking whether it succeeded. MapReduceExecutable is also designed to execute a MapReduce task, but unlike HadoopShellExecutable it submits the task asynchronously: it does not wait for the task to complete but returns immediately after submission. It then periodically requests the Hadoop task-status URL (based on the ResourceManager address) to check the task's execution status and, once the task is done, calls getCounters to collect some statistics about it. The polling interval can be set via the kylin.job.yarn.app.rest.check.interval.seconds configuration item and defaults to 60 seconds. So in Kylin, ordinary tasks generally use HadoopShellExecutable, while the critical tasks (those in the cube build process) use MapReduceExecutable to run their MapReduce jobs.

HqlExecutable is used to execute multiple Hive SQL statements, which are passed in through configuration items. Currently the only place where Hive SQL is executed is the first step of creating a cube, which fetches the raw data from Hive; however, that step does not use HqlExecutable but instead invokes the shell command hive -e directly.
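MapReduceExecutable's submit-then-poll pattern boils down to polling a status source at a fixed interval until the job reports completion. In the sketch below, a Supplier stands in for the call to the Hadoop job-status REST URL; the method and the "FINISHED"/"RUNNING" strings are illustrative assumptions, not Kylin's or YARN's actual API.

```java
import java.util.function.Supplier;

// Poll a status source until it reports completion, sleeping between polls,
// the way MapReduceExecutable periodically checks the task-status URL.
public class StatusPoller {
    public static int pollUntilDone(Supplier<String> status, long intervalMillis, int maxPolls)
            throws InterruptedException {
        for (int i = 1; i <= maxPolls; i++) {
            if ("FINISHED".equals(status.get())) {
                return i;        // number of polls it took
            }
            Thread.sleep(intervalMillis);
        }
        return -1;               // gave up, like a job that never completes
    }
}
```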
ShellExecutable is used to execute a shell command, such as the hive -e invocation described above that generates the raw data. Currently the command can be executed either on the current host or on a remote host: kylin.job.run.as.remote.cmd determines whether to execute remotely, kylin.job.remote.cli.hostname specifies the remote host name, and kylin.job.remote.cli.username and kylin.job.remote.cli.password specify the login username and password. The remote implementation SSHes to the remote host and then executes the corresponding command there. UpdateCubeInfoAfterBuildStep and UpdateCubeInfoAfterMergeStep are the final steps of building and merging a cube respectively; they mainly update statistics such as the size of the cube and the amount of HDFS data read and written.
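For the local (non-remote) case, the core of what ShellExecutable does can be sketched with ProcessBuilder: run the command through a shell, capture its output, and treat a non-zero exit code as failure. This is a minimal sketch assuming a POSIX sh, not Kylin's actual implementation (which also supports the SSH path for remote execution).

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Run a command through the local shell and return its combined output,
// failing when the exit code is non-zero.
public class ShellStep {
    public static String run(String command) throws Exception {
        Process p = new ProcessBuilder("sh", "-c", command)
                .redirectErrorStream(true)   // merge stderr into stdout
                .start();
        StringBuilder out = new StringBuilder();
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = r.readLine()) != null) {
            out.append(line);
        }
        if (p.waitFor() != 0) {
            throw new Exception("command failed: " + command);
        }
        return out.toString();
    }
}
```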
DefaultChainedExecutable Execution Process
MapReduceExecutable is mainly used in Kylin to run the tasks that build the cube. Among these classes, DefaultChainedExecutable is special because it does not itself execute task logic; instead, it acts as a container for multiple concrete jobs and executes them sequentially. Looking at this class: in its onExecuteStart function it sets its state to RUNNING, and its doWork function is as follows:
```java
@Override
protected ExecuteResult doWork(ExecutableContext context) throws ExecuteException {
    List<? extends Executable> executables = getTasks();
    final int size = executables.size();
    for (int i = 0; i < size; ++i) {
        Executable subTask = executables.get(i);
        if (subTask.isRunnable()) {
            return subTask.execute(context);
        }
    }
    return new ExecuteResult(ExecuteResult.State.SUCCEED, null);
}
```
As you can see, it fetches jobs in order from the executables list and checks whether each can be executed (that is, whether its state is READY). Surprisingly, though, it does not dispatch the next job after one finishes; it returns directly, which means it only ever executes the first runnable job in the list. So what happens next? The secret is that every job, on completing successfully, has its onExecuteFinished function called. By the logic above, each executed sub-job causes doWork to return and onExecuteFinished to run. In DefaultChainedExecutable's onExecuteFinished, the execution status of the tasks is checked in order: if the most recent task failed, the whole job is marked as failed; if it succeeded, the function checks whether all tasks have succeeded and, if so, marks the whole job as successful; it then checks whether any task failed and, if so, marks itself as failed (this step is theoretically unnecessary, since each job is checked when it completes). If no job has failed and not all have succeeded, it marks itself as READY again and returns.

So although only the first runnable job in the executables list is executed each time, the chain re-marks itself READY after each sub-job completes. Recall that the JobRunner thread triggers the fetcher immediately after each job's execute function finishes, to check whether there is a READY job. Thus the next sub-job in the DefaultChainedExecutable is dispatched immediately after the previous one completes (since the previous one's state is no longer READY), and onExecuteFinished provides the logic that checks the completion status after each task finishes. This structure is quite ingenious.
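The interplay described above can be simulated in a few dozen lines. The sketch below is a simplified model, not the real DefaultChainedExecutable: sub-tasks always succeed, and a plain while loop stands in for the fetcher re-dispatching the chain each time it returns to READY.

```java
import java.util.ArrayList;
import java.util.List;

// Model of the DefaultChainedExecutable trick: doWork runs only the first
// READY sub-task, and onExecuteFinished resets the chain to READY until every
// sub-task has succeeded, so the fetcher re-dispatches the chain once per
// sub-task.
public class ChainSketch {
    enum State { READY, RUNNING, SUCCEED }

    static class SubTask {
        State state = State.READY;
    }

    final List<SubTask> tasks = new ArrayList<SubTask>();
    State state = State.READY;
    int dispatches = 0;

    void addTask() { tasks.add(new SubTask()); }

    // One round of fetcher dispatch: execute the first READY sub-task, then
    // run the finished hook, exactly as doWork + onExecuteFinished do above.
    void execute() {
        state = State.RUNNING;
        dispatches++;
        for (SubTask t : tasks) {
            if (t.state == State.READY) {
                t.state = State.SUCCEED;   // the sub-task's own doWork
                break;
            }
        }
        onExecuteFinished();
    }

    void onExecuteFinished() {
        for (SubTask t : tasks) {
            if (t.state != State.SUCCEED) {
                state = State.READY;       // work remains: back to READY
                return;
            }
        }
        state = State.SUCCEED;             // every sub-task done
    }

    // Stand-in for the fetcher loop: keep dispatching while the chain is READY
    public static int runToCompletion(int numTasks) {
        ChainSketch chain = new ChainSketch();
        for (int i = 0; i < numTasks; i++) chain.addTask();
        while (chain.state == State.READY) chain.execute();
        return chain.dispatches;
    }
}
```

Running the model shows the chain needs exactly one dispatch per sub-task, matching the one-job-per-execute behavior described above.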
Summary

The above has described the different types of jobs in Kylin and the concrete scheduling process of the more complex DefaultChainedExecutable, giving us a certain understanding of Kylin's overall task scheduling framework. This design is worth learning from: backend designs often run into this kind of task-execution requirement, which can be handled in this way.
Copyright notice: this is an original article by the author; please do not reproduce it without the author's permission.