Hive query error in a Hadoop cluster


Today, while using Hive to query the maximum value of some analysis data, a problem came up. In Hive, the symptom looks like this:

Caused by: java.io.FileNotFoundException: http://slave1:50060/tasklog?attemptid=attempt_201501050454_0006_m_00001_1


Then take a look at the JobTracker log:

2015-01-05 21:43:23,724 INFO org.apache.hadoop.mapred.JobInProgress: job_201501052137_0004: nMaps=1 nReduces=1 max=-1
2015-01-05 21:43:23,724 INFO org.apache.hadoop.mapred.JobTracker: Job job_201501052137_0004 added successfully for user 'hadoop' to queue 'default'
2015-01-05 21:43:23,724 INFO org.apache.hadoop.mapred.AuditLogger: USER=hadoop IP=192.168.1.193 OPERATION=SUBMIT_JOB TARGET=job_201501052137_0004 RESULT=SUCCESS
2015-01-05 21:43:23,732 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201501052137_0004
2015-01-05 21:43:23,732 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201501052137_0004
2015-01-05 21:43:23,817 INFO org.apache.hadoop.mapred.JobInProgress: jobToken generated and stored with the users keys in /opt/hadoop-1.0.1/tmp/mapred/system/job_201501052137_0004/jobToken
2015-01-05 21:43:23,822 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201501052137_0004 = 41. Number of splits = 1
2015-01-05 21:43:23,822 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201501052137_0004_m_000000 has split on node:/default-rack/slave1
2015-01-05 21:43:23,822 INFO org.apache.hadoop.mapred.JobInProgress: job_201501052137_0004 LOCALITY_WAIT_FACTOR=0.5
2015-01-05 21:43:23,822 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201501052137_0004 initialized successfully with 1 map tasks and 1 reduce tasks.
2015-01-05 21:43:26,140 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201501052137_0004_m_000002_0' to tip task_201501052137_0004_m_000002, for tracker 'tracker_slave2:127.0.0.1/127.0.0.1:380'
2015-01-05 21:43:29,144 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201501052137_0004_m_000002_0: Error initializing attempt_201501052137_0004_m_000002_0: java.io.IOException: Exception reading file:/opt/hadoop-1.0.1/tmp/mapred/local/ttprivate/taskTracker/hadoop/jobcache/job_201501052137_0004/jobToken
        at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135)
        at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.FileNotFoundException: File file:/opt/hadoop-1.0.1/tmp/mapred/local/ttprivate/taskTracker/hadoop/jobcache/job_201501052137_0004/jobToken does not exist.
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
        at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
        at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:129)
        ... 5 more
2015-01-05 21:43:29,144 ERROR org.apache.hadoop.mapred.TaskStatus: Trying to set finish time for task attempt_201501052137_0004_m_000002_0 when no start time was set, stackTrace is: java.lang.Exception
        at org.apache.hadoop.mapred.TaskStatus.setFinishTime(TaskStatus.java:145)
        at org.apache.hadoop.mapred.TaskInProgress.incompleteSubTask(TaskInProgress.java:670)
        at org.apache.hadoop.mapred.JobInProgress.failedTask(JobInProgress.java:2942)
        at org.apache.hadoop.mapred.JobInProgress.updateTaskStatus(JobInProgress.java:1159)
        at org.apache.hadoop.mapred.JobTracker.updateTaskStatuses(JobTracker.java:4739)
        at org.apache.hadoop.mapred.JobTracker.processHeartbeat(JobTracker.java:3683)
        at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3378)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2015-01-05 21:43:29,146 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201501052137_0004_r_000002_0' to tip task_201501052137_0004_r_000002, for tracker 'tracker_slave2:127.0.0.1/127.0.0.1:380'
2015-01-05 21:43:29,146 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201501052137_0004_m_000002_0'
2015-01-05 21:43:32,154 INFO org.apache.hadoop.mapred.TaskInProgress: Error from attempt_201501052137_0004_r_000002_0: Error initializing attempt_201501052137_0004_r_000002_0: java.io.IOException: Exception reading file:/opt/hadoop-1.0.1/tmp/mapred/local/ttprivate/taskTracker/hadoop/jobcache/job_201501052137_0004/jobToken
        at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135)
        at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165)
        at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179)
        at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116)
        at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.FileNotFoundException: File file:/opt/hadoop-1.0.1/tmp/mapred/local/ttprivate/taskTracker/hadoop/jobcache/job_201501052137_0004/jobToken does not exist.
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
Judging from this log, it is very likely that MapReduce could not create the temporary jobToken file because of a permissions problem on the tmp directory.

Check the temporary file directory used earlier, and look at the hadoop.tmp.dir setting inside core-site.xml:
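The relevant entry presumably looked something like the following; the path is taken from the log above, and the exact surrounding file layout is an assumption:

```xml
<!-- core-site.xml (sketch): the Hadoop temp directory under which the
     TaskTracker localizes job files, including the jobToken it failed to read. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop-1.0.1/tmp</value>
</property>
```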

hadoop.tmp.dir points under /opt/hadoop-1.0.1/tmp/; the directory is owned by user hadoop with 750 permissions, but there still seem to be problems.
Make a change to the file:
Set hadoop.tmp.dir to /home/hadoop/temp.
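The changed property would then read roughly as follows (a sketch, using the standard core-site.xml property syntax):

```xml
<!-- core-site.xml (sketch): move the temp directory to a location
     that the hadoop user fully owns and can write to. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/temp</value>
</property>
```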
Reformat HDFS, restart the cluster, and run the Hive query again; the problem disappears.
