A while ago we noticed that user-submitted Hive queries and Hadoop jobs were driving the cluster load very high. After checking the configuration, we found that many users had arbitrarily set mapred.child.java.opts to very large values, such as -Xmx4096m (our default is -Xmx1024m). This exhausted the memory on the TaskTrackers, which then started swapping heavily to disk, and the load soared.
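For example, a user can raise the per-task heap for a single Hive session like this (an illustrative snippet; the table name is hypothetical and the 4 GB value is just the kind of setting we kept finding):

hive -e "set mapred.child.java.opts=-Xmx4096m; select count(*) from some_big_table;"
# every map/reduce task spawned for this query now requests a 4 GB heap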
When the TaskTracker spawns a map/reduce task JVM, it builds the JVM arguments from the values in the user's JobConf, writes them into a taskjvm.sh file, and then runs the Linux command "bin/bash -c taskjvm.sh" to launch the task.
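Roughly speaking, the generated script boils down to something like the following (a heavily simplified sketch: the real file also sets up the classpath, ulimits, log redirection and so on, and the exact arguments passed to the child main class vary by Hadoop version):

# taskjvm.sh, simplified sketch -- the -Xmx flag is whatever ended up in the job's child JVM opts
exec $JAVA_HOME/bin/java -Xmx4096m \
    org.apache.hadoop.mapred.Child <tasktracker-host> <tasktracker-port> <task-attempt-id> ...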
mapred.child.java.opts is one of the parameters used to set up this JVM. It has been marked deprecated in newer versions and replaced by separate opts that distinguish map tasks from reduce tasks: mapred.map.child.java.opts and mapred.reduce.child.java.opts (default value -Xmx200m).
If the maximum 1 GB JVM heap is not enough for a user's task, it can hit OutOfMemoryError, so the simplest fix from the user's point of view is to set a larger value; and since the property is not marked final, users can override the default in their own mapred-site.xml or per job. But once many users raise it without limit, the high-load problem described above appears.
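The same override works for a plain MapReduce job submitted from the command line, assuming the job goes through ToolRunner/GenericOptionsParser (the jar, class, and paths here are hypothetical):

hadoop jar my-job.jar com.example.MyJob \
    -D mapred.map.child.java.opts=-Xmx4096m \
    -D mapred.reduce.child.java.opts=-Xmx4096m \
    /input/path /output/path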
In fact, while the JVM args are being constructed, there are also admin-side parameters that override the client settings: mapreduce.admin.map.child.java.opts and mapreduce.admin.reduce.child.java.opts.
Testing shows that when the same JVM argument appears more than once, the later occurrence overrides the earlier one: with "-Xmx4000m -Xmx1000m", it is "-Xmx1000m" that finally takes effect. Since the admin opts end up after the user's opts in the generated command line, this gives us a limited way to cap the heap size.
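This "later flag wins" behavior is easy to verify with a plain JVM, outside of Hadoop (using HotSpot's -XX:+PrintFlagsFinal diagnostic output; the exact formatting differs between JVM versions):

java -Xmx4000m -Xmx1000m -XX:+PrintFlagsFinal -version 2>/dev/null | grep MaxHeapSize
# prints something like: uintx MaxHeapSize := 1048576000 {product}
# i.e. about 1000 MB -- the second -Xmx is the one that took effect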
Finally, add the following to mapred-site.xml:
<property>
  <name>mapreduce.admin.map.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.admin.reduce.child.java.opts</name>
  <value>-Xmx1536m</value>
</property>
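After this is deployed on the TaskTracker nodes, a quick sanity check is to look at a running child JVM's command line on a TaskTracker and confirm that the admin -Xmx comes after the user-supplied one (illustrative; the second grep just extracts the heap flags in order):

ps -ef | grep org.apache.hadoop.mapred.Child | grep -oe '-Xmx[0-9]*[mMgG]'
# for a map task whose owner asked for 4 GB, the expected output is:
# -Xmx4096m
# -Xmx1024m    (the admin value comes later, so it is the one the JVM honors)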
(The original post shows a diagram of the call stack that constructs the child Java opts here.)
However, this only caps the task JVM's maximum heap size. If a user's Hive query is poorly optimized, it may still throw an OOM, so in effect we are handing the problem back to the user.
The next step is to go through those queries with the users, see which ones really consume that much memory, and check whether there is further room for optimization.
Original article: http://blog.csdn.net/lalaguozhe/article/details/9076895. Please credit the source when reprinting.