To run multiple Hadoop jobs as parallel tasks, the following configuration and code were tested:
First, apply the following configuration:
1. Modify mapred-site.xml to add the scheduler configuration:
<property>
<name>mapred.jobtracker.taskScheduler</name>
<value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
2. Add the jar file path configuration:
<property>
<name>hadoopTest.jar</name>
<value>the path of the generated jar</value>
</property>
The basic Java code is as follows (job creation is omitted here; obtain each job first):
Job job_base = (Job) ...;
Job job_avg = (Job) ...;
Job job_runcount = (Job) ...;
Job job_activeuser = (Job) ...;
job_base.setJarByClass(CapUseDateTimerTask.class);
job_avg.setJarByClass(CapUseDateTimerTask.class);
job_runcount.setJarByClass(CapUseDateTimerTask.class);
job_activeuser.setJarByClass(CapUseDateTimerTask.class);
The following three jobs are started in parallel only after job_base has completed:
if (job_base.waitForCompletion(true)) {
    FileUtil.hdfsFileHandle(job_base);
    // run the remaining jobs in parallel
    job_avg.submit();
    job_runcount.submit();
    job_activeuser.submit();
}
boolean bln1 = job_avg.isComplete();
boolean bln2 = job_runcount.isComplete();
boolean bln3 = job_activeuser.isComplete();
// poll until all three jobs are complete
while (!bln1 || !bln2 || !bln3) {
    bln1 = job_avg.isComplete();
    bln2 = job_runcount.isComplete();
    bln3 = job_activeuser.isComplete();
}
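The submit-then-poll pattern above can be illustrated without a Hadoop cluster. This is a minimal sketch using plain java.util.concurrent futures: submitJob and waitForAll are hypothetical stand-ins for Job.submit() and the isComplete() polling loop, not Hadoop APIs.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PollJobsDemo {
    // Simulated "job" that finishes after the given delay (stand-in for Job.submit()).
    static CompletableFuture<String> submitJob(ExecutorService pool, String name, long millis) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return name + " done";
        }, pool);
    }

    // Mirror of the isComplete() loop above: spin until every job reports done.
    static void waitForAll(List<CompletableFuture<String>> jobs) throws InterruptedException {
        boolean allDone = false;
        while (!allDone) {
            allDone = jobs.stream().allMatch(CompletableFuture::isDone);
            Thread.sleep(10); // back off briefly instead of a tight busy-wait
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<CompletableFuture<String>> jobs = List.of(
                submitJob(pool, "job_avg", 50),
                submitJob(pool, "job_runcount", 80),
                submitJob(pool, "job_activeuser", 30));
        waitForAll(jobs);
        for (CompletableFuture<String> job : jobs) {
            System.out.println(job.get());
        }
        pool.shutdown();
    }
}
```

All three simulated jobs run concurrently on the pool, and the main thread only proceeds once every one of them is done, which is exactly the role the isComplete() loop plays in the Hadoop code.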
Finally, assemble the code into the main method and run it with the Hadoop command:
hadoop jar <jar package name> <class containing the main method> <args>
For example:
hadoop jar hadoopTest.jar ch03.test test
The parallel state of the jobs can be monitored via port 50030 (the JobTracker web UI); I won't go into that here.
Notes:
1. Configuring the jar path resolves the ClassNotFound problem that otherwise occurs at run time with the generated jar;
2. Call setJarByClass on every job; as tested, omitting it causes a ClassNotFound error at run time. Here CapUseDateTimerTask is the class containing the main method;
3. waitForCompletion and submit differ: waitForCompletion blocks (serial), while submit returns immediately (parallel). Precisely because submit is asynchronous, subsequent code must check the jobs' completion state with isComplete();
4. The Job class used above is org.apache.hadoop.mapreduce.Job.
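The blocking-versus-asynchronous distinction in note 3 can be demonstrated with a plain future as a stand-in, a sketch that does not use the Hadoop API: join() plays the role of waitForCompletion (blocks until done), while the initial runAsync call plays the role of submit (returns immediately while the work continues).

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SubmitVsWaitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Analogue of submit(): returns immediately while the work keeps running.
        CompletableFuture<Void> job = CompletableFuture.runAsync(() -> {
            try {
                Thread.sleep(200); // simulated long-running job
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }, pool);
        System.out.println("right after submit: done=" + job.isDone());

        // Analogue of waitForCompletion(): block until the work finishes.
        job.join();
        System.out.println("after blocking wait: done=" + job.isDone());
        pool.shutdown();
    }
}
```

The first check runs while the simulated job is still sleeping, so it reports not done; only after the blocking join() does the job report complete.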
The code above passed testing in both standalone and cluster modes.