Problem
{:timestamp=> "2015-03-04t00:02:47.224000+0800",:message=> "retrying Webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers. ",: Level=>:warn}
{:timestamp=> "2015-03-04t00:02:47.751000+0800",:message=> "retrying Webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers. ",: Level=>:warn}
{:timestamp=> "2015-03-04t00:02:48.788000+0800",:message=> "retrying Webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers. ",: Level=>:warn}
{:timestamp=> "2015-03-04t00:02:50.325000+0800",:message=> "retrying Webhdfs write for multiple times. Maybe you should increase retry_interval or reduce number of workers. ",: Level=>:warn}
{:timestamp=> "2015-03-04t00:02:52.361000+0800",:message=> "Max write retries reached. Exception: {\ "remoteexception\": {\ "exception\": \ "alreadybeingcreatedexception\", \ "javaclassname\": \ " Org.apache.hadoop.hdfs.protocol.alreadybeingcreatedexception\ ", \" message\ ": \" Failed to create file [/user/yimr/ 2015-03-03/16.log] for [dfsclient_nonmapreduce_1517029404_16] for client [192.168.2.207], because this file is already be ing created by [dfsclient_nonmapreduce_-190688369_16] on [192.168.2.207]\\n\\tat Org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal (fsnamesystem.java:2636) \\n\\tat Org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal (fsnamesystem.java:2462) \\n\\tat Org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt (fsnamesystem.java:2700) \\n\\tat Org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile (fsnamesystem.java:2663) \\n\\tat Org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append (namenoderpcserver.java:559) \\n\\tat Org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append ( clientnamenodeprotocolserversidetranslatorpb.java:388) \\n\\tat Org.apache.hadoop.hdfs.protocol.proto.clientnamenodeprotocolprotos$clientnamenodeprotocol$2.callblockingmethod (Clientnamenodeprotocolprotos.java) \\n\\tat org.apache.hadoop.ipc.protobufrpcengine$server$ Protobufrpcinvoker.call (protobufrpcengine.java:585) \\n\\tat Org.apache.hadoop.ipc.rpc$server.call (RPC.java:928) \\n\\tat Org.apache.hadoop.ipc.server$handler$1.run (server.java:2013) \\n\\tat org.apache.hadoop.ipc.server$ Handler$1.run (server.java:2009) \\n\\tat java.security.AccessController.doPrivileged (Native Method) \\n\\tat Javax.security.auth.Subject.doAs (subject.java:415) \\n\\tat org.apache.hadoop.security.UserGroupInformation.doAs (usergroupinformation.java:1614) \\n\\tat Org.apache.hadoop.ipc.server$handler.run (server.java:2007) \\n\ "}}",: Level=>:error}
Reason:
The cluster has only 1 datanode, while the HDFS replication factor (dfs.replication) defaults to 3, so writes through WebHDFS fail and are retried; the failed append leaves the file's lease open, so the retry is rejected with AlreadyBeingCreatedException. Changing the replication factor to 1 makes writes work normally.
In short: WebHDFS append writes to HDFS throw an exception when the replication factor exceeds the number of datanodes.
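The fix can be applied cluster-wide in hdfs-site.xml (a minimal sketch; dfs.replication is the standard Hadoop property, and the value 1 matches the single-datanode setup described above):

```xml
<!-- hdfs-site.xml: match the replication factor to the single datanode -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

Note that this only affects newly created files; existing files keep their old replication factor, which can be lowered with `hdfs dfs -setrep -w 1 <path>` if needed.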