When a topology is submitted, the server side runs through a series of validation and initialization steps: it checks the topology structure, creates a local directory and copies in the serialized topology files and the jar package, and creates znodes to hold the topology and task information. The last step is task assignment, as follows:
The main entry point for submission is in ServiceHandler.java:
private void makeAssignment(String topologyName, String topologyId,
        TopologyInitialStatus status) throws FailedAssignTopologyException {
    // 1. Create the topology assignment event
    TopologyAssignEvent assignEvent = new TopologyAssignEvent();
    assignEvent.setTopologyId(topologyId);
    assignEvent.setScratch(false);
    assignEvent.setTopologyName(topologyName);
    assignEvent.setOldStatus(Thrift.topologyInitialStatusToStormStatus(status));

    // 2. Push the event onto the processing queue
    TopologyAssign.push(assignEvent);

    // 3. Wait for the assignment to finish and return
    boolean isSuccess = assignEvent.waitFinish();
    if (isSuccess == true) {
        LOG.info("Finish submit for " + topologyName);
    } else {
        throw new FailedAssignTopologyException(assignEvent.getErrorMsg());
    }
}
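The synchronous-looking submit over an asynchronous queue works because the event object itself carries a completion signal that waitFinish() blocks on. Below is a minimal sketch of how such an event can be built, assuming a CountDownLatch-based implementation; the class and member names are hypothetical, and only waitFinish and getErrorMsg mirror the snippet above.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Simplified illustration (not JStorm source): an assignment event that
// lets the submitting thread block until the assignment thread finishes.
public class AssignEventSketch {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile boolean success = false;
    private volatile String errorMsg;

    // Called by the assignment thread when processing is done.
    public void done(boolean isSuccess, String error) {
        this.success = isSuccess;
        this.errorMsg = error;
        latch.countDown();
    }

    // Called by the submitting thread; blocks until done() fires
    // or the timeout elapses.
    public boolean waitFinish() throws InterruptedException {
        latch.await(2, TimeUnit.MINUTES);
        return success;
    }

    public String getErrorMsg() {
        return errorMsg;
    }
}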
The important part is what happens after the event is pushed onto the queue. The event is handled by the TopologyAssign thread, whose flow is very clear: it listens on the event queue, and as soon as an event arrives it takes it out and calls doTopologyAssignment, as follows:
public void run() {
    LOG.info("TopologyAssign thread has been started");
    runFlag = true;
    while (runFlag) {
        TopologyAssignEvent event;
        try {
            event = queue.take();
        } catch (InterruptedException e1) {
            continue;
        }
        if (event == null) {
            continue;
        }
        boolean isSuccess = doTopologyAssignment(event);
        ......
    }
}
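The queue connecting ServiceHandler (producer) and the TopologyAssign thread (consumer) behaves like a standard blocking queue: take() parks the consumer until an event arrives. Here is a self-contained toy version of the pattern, assuming a LinkedBlockingQueue; this is an illustration, not the actual JStorm source.

import java.util.concurrent.LinkedBlockingQueue;

// Toy producer/consumer pair mirroring the loop above: push() is the
// producer side used by ServiceHandler, the thread body is the
// consumer loop run by the TopologyAssign thread.
public class AssignQueueSketch {
    private static final LinkedBlockingQueue<String> queue =
            new LinkedBlockingQueue<>();

    public static void push(String event) {
        queue.offer(event);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (true) {
                String event;
                try {
                    event = queue.take(); // blocks until an event arrives
                } catch (InterruptedException e) {
                    continue;
                }
                System.out.println("doTopologyAssignment(" + event + ")");
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        push("topology-1");
        Thread.sleep(100); // give the consumer a moment before exiting
    }
}

Decoupling submission from assignment this way keeps the handler thread that accepted the submission from doing the heavy scheduling work itself.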
The core task-assignment code is in TopologyAssign.java:
public Assignment mkAssignment(TopologyAssignEvent event) throws Exception {
    String topologyId = event.getTopologyId();
    LOG.info("Determining assignment for " + topologyId);
    TopologyAssignContext context = prepareTopologyAssign(event);
    Set<ResourceWorkerSlot> assignments = null;
    if (!StormConfig.local_mode(nimbusData.getConf())) {
        IToplogyScheduler scheduler = schedulers.get(DEFAULT_SCHEDULER_NAME);
        // Start job scheduling
        assignments = scheduler.assignTasks(context);
    } else {
        assignments = mkLocalAssignment(context);
    }
    ............
}
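The scheduler is fetched from a registry by name (DEFAULT_SCHEDULER_NAME), which implies a pluggable scheduling interface. The following is a hedged sketch of what that contract looks like; the stub types stand in for the real JStorm classes, and the actual interface may differ.

import java.util.Set;

// Stub types standing in for JStorm's TopologyAssignContext and
// ResourceWorkerSlot, named after the snippet above.
class TopologyAssignContextStub { /* tasks, supervisors, old assignment */ }
class ResourceWorkerSlotStub { /* hostname, nodeId, port */ }

// Assumed shape of the pluggable scheduler contract implied by
// schedulers.get(DEFAULT_SCHEDULER_NAME).assignTasks(context).
interface TopologySchedulerSketch {
    Set<ResourceWorkerSlotStub> assignTasks(TopologyAssignContextStub context)
            throws Exception;
}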
The call stack is as follows:
The allocation principle is to first obtain all available supervisors. The criterion for a supervisor being available is that it has at least one idle slot, i.e. not all of the ports specified by supervisor.slots.ports are occupied. Then the number of workers to allocate is computed, since one worker corresponds to one port; all of this information is collected from ZooKeeper. A toy version of this availability check is sketched below, after which we analyze the core allocation code.
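As a concrete illustration of that criterion, the sketch below filters supervisors that still have a free port and checks that the total number of free slots covers the requested worker count. This is plain Java with hypothetical field names, not JStorm code.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class SupervisorSketch {
    String id;
    Set<Integer> freePorts; // unoccupied entries of supervisor.slots.ports

    SupervisorSketch(String id, Set<Integer> freePorts) {
        this.id = id;
        this.freePorts = freePorts;
    }
}

public class AvailabilitySketch {
    // A supervisor is available only if at least one configured port is idle.
    static List<SupervisorSketch> available(List<SupervisorSketch> all) {
        List<SupervisorSketch> out = new ArrayList<>();
        for (SupervisorSketch s : all) {
            if (!s.freePorts.isEmpty()) {
                out.add(s);
            }
        }
        return out;
    }

    // One worker occupies exactly one port, so the free slots across all
    // available supervisors must cover the requested worker count.
    static boolean enoughSlots(List<SupervisorSketch> available, int workersNeeded) {
        int free = 0;
        for (SupervisorSketch s : available) {
            free += s.freePorts.size();
        }
        return free >= workersNeeded;
    }
}

With that criterion in mind, now let's analyze the core allocation code: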
WorkerMaker.java
Note that result is the set of slots required by this job: when it is passed in, only the number of slots needed is known; which supervisor each slot will be placed on has not yet been decided.
supervisors is the list of all available supervisors in the current cluster, i.e. those that still have at least one idle port.
private void putWorkerToSupervisor(List<ResourceWorkerSlot> result,
        List<SupervisorInfo> supervisors) {
    int key = 0;
    // Iterate over the required slots, assigning one per pass
    for (ResourceWorkerSlot worker : result) {
        // Necessary checks and cursor positioning first
        if (supervisors.size() == 0)
            return;
        if (worker.getNodeId() != null)
            continue;
        if (key >= supervisors.size())
            key = 0;
        // 1. Take the supervisor at the cursor
        SupervisorInfo supervisor = supervisors.get(key);
        worker.setHostname(supervisor.getHostName());
        worker.setNodeId(supervisor.getSupervisorId());
        worker.setPort(supervisor.getWorkerPorts().iterator().next());
        // Remove the assigned port from the collection; once a supervisor's
        // slots are exhausted it is removed and no longer participates
        supervisor.getWorkerPorts().remove(worker.getPort());
        if (supervisor.getWorkerPorts().size() == 0)
            supervisors.remove(supervisor);
        // Once a supervisor has been given a worker in this round,
        // move on to the next one, unless supervisors run short
        key++;
    }
}
From the above code we can see that the current slot allocation does not take machine load into account, and slots are not necessarily distributed evenly. For example, if the first supervisor has 10 free slots and the remaining supervisors only have two each, each supervisor is still assigned one worker per round. Note one detail in the code above: the supervisors collection is sorted, with the following ordering rule:
private void putAllWorkerToSupervisor(List<ResourceWorkerSlot> result,
        List<SupervisorInfo> supervisors) {
    ....
    supervisors = this.getCanUseSupervisors(supervisors);
    Collections.sort(supervisors, new Comparator<SupervisorInfo>() {
        @Override
        public int compare(SupervisorInfo o1, SupervisorInfo o2) {
            // Descending by number of free worker ports
            return -NumberUtils.compare(o1.getWorkerPorts().size(),
                    o2.getWorkerPorts().size());
        }
    });
    this.putWorkerToSupervisor(result, supervisors);
    ......
}
As you can see, the current ordering rule is simply the number of free slots per supervisor, in descending order; machine-load factors may be considered in subsequent releases. A toy end-to-end model of this behavior is sketched below.
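To make the overall behavior concrete, here is a self-contained toy model of the sort-then-round-robin placement described above, using plain Java collections rather than JStorm classes: supervisors are sorted by free-slot count descending, then each required worker is placed on the next supervisor in turn, dropping a supervisor once its ports are exhausted.

import java.util.*;

public class RoundRobinDemo {
    public static void main(String[] args) {
        // supervisor id -> free ports (toy data: one big node, two small ones)
        Map<String, Deque<Integer>> free = new LinkedHashMap<>();
        free.put("sup-a", new ArrayDeque<>(Arrays.asList(6800, 6801, 6802, 6803)));
        free.put("sup-b", new ArrayDeque<>(Arrays.asList(6800)));
        free.put("sup-c", new ArrayDeque<>(Arrays.asList(6800, 6801)));

        // Sort by free-slot count, descending, as in putAllWorkerToSupervisor.
        List<String> order = new ArrayList<>(free.keySet());
        order.sort((a, b) -> free.get(b).size() - free.get(a).size());

        int workersNeeded = 5;
        int key = 0;
        List<String> placement = new ArrayList<>();
        while (workersNeeded-- > 0 && !order.isEmpty()) {
            if (key >= order.size()) key = 0;        // wrap the cursor around
            String sup = order.get(key);
            placement.add(sup + ":" + free.get(sup).poll());
            if (free.get(sup).isEmpty()) {           // exhausted: drop it
                order.remove(key);
            } else {
                key++;                               // round-robin step
            }
        }
        // Prints [sup-a:6800, sup-c:6800, sup-b:6800, sup-a:6801, sup-c:6801]
        System.out.println(placement);
    }
}

Running it shows the round-robin effect noted above: every supervisor gets one worker per round regardless of how unbalanced their free-slot counts are.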