Hadoop version: Cloudera Hadoop CDH3u3
Procedure:
1. Copy $HADOOP_HOME/contrib/fairscheduler/hadoop-fairscheduler-0.20.2-cdh3u3.jar to the $HADOOP_HOME/lib folder.
2. Edit the $HADOOP_HOME/conf/mapred-site.xml configuration file:
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
  <name>mapred.fairscheduler.allocation.file</name>
  <value>/home/hadoop/hadoop-0.20.2-cdh3u3/conf/fair-scheduler.xml</value>
</property>
<property>
  <name>mapred.fairscheduler.preemption</name>
  <value>true</value>
</property>
<property>
  <name>mapred.fairscheduler.assignmultiple</name>
  <value>true</value>
</property>
<property>
  <name>mapred.fairscheduler.poolnameproperty</name>
  <value>mapred.queue.name</value>
  <description>job.set("mapred.queue.name", pool); // pool is set to either 'high' or 'low'</description>
</property>
<property>
  <name>mapred.fairscheduler.preemption.only.log</name>
  <value>true</value>
</property>
<property>
  <name>mapred.fairscheduler.preemption.interval</name>
  <value>15000</value>
</property>
<property>
  <name>mapred.queue.names</name>
  <value>default,hadoop,hive</value>
</property>
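Because mapred.fairscheduler.poolnameproperty above is set to mapred.queue.name, a MapReduce job chooses its pool by setting that property on its job configuration (the job.set(...) call mentioned in the description). Below is a minimal sketch using the old mapred API; the driver class name and the identity map/reduce setup are assumptions made only to show where the property goes:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    // Hypothetical driver class, used only to illustrate pool selection.
    public class SubmitToPool {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitToPool.class);
        conf.setJobName("fair-scheduler-pool-demo");

        // The fair scheduler reads the property named by
        // mapred.fairscheduler.poolnameproperty (here: mapred.queue.name),
        // so this line routes the job into the "hive" pool.
        conf.set("mapred.queue.name", "hive");   // or "hadoop", "default", ...

        // Pass-through map/reduce just so the job is complete and submittable.
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
      }
    }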
3. Create a new file fair-scheduler.xml in $HADOOP_HOME/conf:
<?xml version="1.0"?>
<allocations>
  <pool name="hive">
    <minMaps>90</minMaps>
    <minReduces>20</minReduces>
    <maxRunningJobs>20</maxRunningJobs>
    <weight>2.0</weight>
    <minSharePreemptionTimeout>30</minSharePreemptionTimeout>
  </pool>
  <pool name="hadoop">
    <minMaps>9</minMaps>
    <minReduces>2</minReduces>
    <maxRunningJobs>20</maxRunningJobs>
    <weight>1.0</weight>
    <minSharePreemptionTimeout>30</minSharePreemptionTimeout>
  </pool>
  <user name="hadoop">
    <maxRunningJobs>6</maxRunningJobs>
  </user>
  <poolMaxJobsDefault>10</poolMaxJobsDefault>
  <userMaxJobsDefault>8</userMaxJobsDefault>
  <defaultMinSharePreemptionTimeout>600</defaultMinSharePreemptionTimeout>
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
</allocations>
4. Perform the preceding steps on each node of the cluster, restart the cluster, and check the scheduler's running status at http://namenode:50030/scheduler. If you later change the scheduler configuration, you only need to edit fair-scheduler.xml; the change takes effect without restarting the cluster.
5. When running a Hive job, select the queue in Hive with: set mapred.job.queue.name=hive;
##########
In addition, if a user XX cannot access a queue YY when submitting a MapReduce job, configure the corresponding properties in mapred-queue-acls.xml to control access permissions, for example:
<property>
  <name>mapred.queue.default.acl-submit-job</name>
  <value>*</value>
  <description>
    Comma separated list of user and group names that are allowed to submit jobs
    to the 'default' queue. The user list and the group list are separated by a
    blank. For e.g. user1,user2 group1,group2. If set to the special value '*',
    it means all users are allowed to submit jobs. If set to ' ' (i.e. space), no
    user will be allowed to submit jobs. It is only used if authorization is
    enabled in Map/Reduce by setting the configuration property
    mapred.acls.enabled to true. Irrespective of this ACL configuration, the user
    who started the cluster and cluster administrators configured via
    mapreduce.cluster.administrators can submit jobs.
  </description>
</property>
<property>
  <name>mapred.queue.default.acl-administer-jobs</name>
  <value>*</value>
  <description>
    Comma separated list of user and group names that are allowed to view job
    details, kill jobs or modify job's priority for all the jobs in the 'default'
    queue. The user list and the group list are separated by a blank. For e.g.
    user1,user2 group1,group2. If set to the special value '*', it means all
    users are allowed to do this operation. If set to ' ' (i.e. space), no user
    will be allowed to do this operation. It is only used if authorization is
    enabled in Map/Reduce by setting the configuration property
    mapred.acls.enabled to true. Irrespective of this ACL configuration, the user
    who started the cluster and cluster administrators configured via
    mapreduce.cluster.administrators can do the above operations on all the jobs
    in all the queues. The job owner can do all the above operations on his/her
    job irrespective of this ACL configuration.
  </description>
</property>
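The example above opens the 'default' queue to everyone. To actually restrict a queue, the same property pattern (mapred.queue.&lt;queue-name&gt;.acl-submit-job) can be applied with an explicit user/group list instead of '*'. A hedged sketch for the 'hive' queue, where the user 'hive_user' and the group 'analysts' are made-up placeholder names (remember this only applies when mapred.acls.enabled is true):

<property>
  <name>mapred.queue.hive.acl-submit-job</name>
  <!-- placeholder names: the user 'hive_user' and members of group 'analysts'
       may submit jobs to the 'hive' queue; users comma-separated, then a blank,
       then groups -->
  <value>hive_user analysts</value>
</property>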