Version information: Hadoop 2.3.0, Hive 0.11.0
1. ApplicationMaster link inaccessible
Clicking the ApplicationMaster link returns an HTTP 500 error (java.net.ConnectException). The problem is that the web UI address is bound to 0.0.0.0 on port 50030, so the ApplicationMaster link cannot be resolved.
Workaround: in yarn-site.xml, set the ResourceManager webapp address explicitly:
  <property>
    <description>The address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>xxxxxxxxxx:50030</value>
  </property>
This is a bug in 2.3.0 (issue 1811); it has been fixed in 2.4.0.
2. History UI inaccessible and container logs cannot be opened
Clicking the tracking URL reports that the history is inaccessible. The problem is that the JobHistory server was not started.
Solution: add the following configuration (choosing xxxxxxxxxx as the history server):
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>xxxxxxxxxx:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>xxxxxxxxxx:19888</value>
  </property>
Then start the server: sbin/mr-jobhistory-daemon.sh start historyserver
Related link: http://www.iteblog.com/archives/936
3. YARN platform tuning
Set the number of virtual CPUs:
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>23</value>
  </property>
Set the usable memory:
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>61440</value>
    <description>The amount of memory on the NodeManager, in MB.</description>
  </property>
Set the maximum memory for each task:
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>49152</value>
    <source>yarn-default.xml</source>
  </property>
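As a sanity check on the numbers above, the node memory and the per-container maximum determine how many containers can run concurrently on one NodeManager. A minimal sketch using shell arithmetic (the 2048 MB figure is an assumed typical task-container size, not a value from this section):

```shell
# Values taken from the yarn-site.xml settings above
NODE_MB=61440        # yarn.nodemanager.resource.memory-mb
MAX_ALLOC_MB=49152   # yarn.scheduler.maximum-allocation-mb

# A single maximum-size container almost fills the node:
echo $(( NODE_MB / MAX_ALLOC_MB ))   # prints 1

# With an assumed typical 2048 MB container, far more tasks fit:
echo $(( NODE_MB / 2048 ))           # prints 30
```

So the 49152 MB ceiling is a safety limit for one very large task, not the expected per-task size; normal jobs should request much less so the node can run many containers in parallel.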
4. Running a task reports: "found interface org.apache.hadoop.mapreduce.Counter, but class was expected"
Modify the pom and reinstall:
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.mrunit</groupId>
    <artifactId>mrunit</artifactId>
    <version>1.0.0</version>
    <classifier>hadoop2</classifier>
    <scope>test</scope>
  </dependency>
Also replace the JDK with 1.7.
5. Running a task reports a shuffle memory overflow (Java heap space):
2014-05-14 16:44:22,010 FATAL [IPC Server handler 4 on 44508] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1400048775904_0006_r_000004_0 - exited: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
    at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
    at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
    at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
    at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:297)
    at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:287)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:411)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:341)
    at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)
Workaround: lower mapreduce.reduce.shuffle.memory.limit.percent; the default is 0.25, now set to 0.10.
Reference: http://www.sqlparty.com/yarn%E5%9C%A8shuffle%E9%98%B6%E6%AE%B5%E5%86%85%E5%AD%98%E4%B8%8D%E8%B6%B3%E9%97%AE%E9%A2%98error-in-shuffle-in-fetcher/
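To see why lowering the value helps: during shuffle, each in-memory map output may use up to heap size × mapreduce.reduce.shuffle.input.buffer.percent × mapreduce.reduce.shuffle.memory.limit.percent bytes. A rough sketch, assuming a 1024 MB reducer heap and the default input buffer percent of 0.70 (both figures are assumptions, not taken from the log above):

```shell
HEAP_MB=1024   # assumed reducer heap (-Xmx), not from this section

# Per-fetch in-memory limit at the default limit percent of 0.25:
awk -v h="$HEAP_MB" 'BEGIN { printf "%.0f\n", h * 0.70 * 0.25 }'   # prints 179

# After lowering mapreduce.reduce.shuffle.memory.limit.percent to 0.10:
awk -v h="$HEAP_MB" 'BEGIN { printf "%.0f\n", h * 0.70 * 0.10 }'   # prints 72
```

With the smaller per-fetch limit, large map outputs are spilled to disk instead of being buffered in memory, so several concurrent fetchers no longer exhaust the reducer heap.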
6. Found in the reduce task log:
2014-05-14 17:51:21,835 WARN [readahead thread #2] org.apache.hadoop.io.ReadaheadPool: Failed readahead on ifile EINVAL: Invalid argument
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:263)
    at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:142)
    at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
PS: the error did not reproduce, and there is no solution yet.
7. Hive task reports:
java.lang.InstantiationException: org.antlr.runtime.CommonToken
Continuing ...
java.lang.RuntimeException: failed to evaluate: <unbound>=Class.new();
Reference: https://issues.apache.org/jira/browse/HIVE-4222
8. Hive automatically converts a join into a mapjoin, causing a memory overflow
Solution: turn off the automatic conversion. Before version 0.11 the default value was false; afterwards it is true. Add in the task script: set hive.auto.convert.join=false; or set it to false in hive-site.xml.
Error log:
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2014-05-15 02:40:58 Starting to launch local task to process map join; maximum memory = 1011351552
2014-05-15 02:41:00 Processing rows: 200000 Hashtable size: 199999 Memory usage: 110092544 rate: 0.109
2014-05-15 02:41:01 Processing rows: 300000 Hashtable size: 299999 Memory usage: 229345424 rate: 0.227
2014-05-15 02:41:01 Processing rows: 400000 Hashtable size: 399999 Memory usage: 170296368 rate: 0.168
2014-05-15 02:41:01 Processing rows: 500000 Hashtable size: 499999 Memory usage: 285961568 rate: 0.283
2014-05-15 02:41:02 Processing rows: 600000 Hashtable size: 599999 Memory usage: 408727616 rate: 0.404
2014-05-15 02:41:02 Processing rows: 700000 Hashtable size: 699999 Memory usage: 333867920 rate: 0.
2014-05-15 02:41:02 Processing rows: 800000 Hashtable size: 799999 Memory usage: 459541208 rate: 0.454
2014-05-15 02:41:03 Processing rows: 900000 Hashtable size: 899999 Memory usage: 391524456 rate: 0.387
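For the cluster-wide form of the workaround above, a minimal hive-site.xml fragment in the same property style used throughout this article (a sketch; the per-script `set` command is usually enough):

```xml
<property>
  <name>hive.auto.convert.join</name>
  <value>false</value>
</property>
```

The per-script `set hive.auto.convert.join=false;` overrides only the current session, which is preferable when most jobs still benefit from mapjoin conversion.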