Workaround for the "dfs.support.append" parameter not being supported in Hadoop 1.2.1


I have recently been testing a Hadoop + Fluentd scenario: the Fluentd log collection system needs the HDFS append feature in order to write logs into HDFS over WebHDFS. The officially suggested solution is:

Modify the hdfs-site.xml file and add the following properties:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
<property>
  <name>dfs.support.broken.append</name>
  <value>true</value>
</property>
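For reference, on the Fluentd side the logs are usually shipped with the fluent-plugin-webhdfs output plugin. A minimal match section might look roughly like the sketch below; the tag pattern, target path and flush interval are illustrative assumptions, not values taken from this setup:

<match hdfs.access.**>
  type webhdfs                      # fluent-plugin-webhdfs output (assumed installed)
  host node1.test.com               # NameNode host serving the WebHDFS endpoint
  port 50070                        # default NameNode HTTP/WebHDFS port in Hadoop 1.x
  path /log/access/%Y%m%d_%H.log    # hypothetical target path pattern in HDFS
  flush_interval 10s
</match>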


However, when the NameNode is formatted with this configuration, it complains:

[email protected] ~]$ hadoop namenode -format
14/12/25 10:35:25 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG:   Starting NameNode
STARTUP_MSG:   host = node1.test.com/172.16.41.151
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_67
************************************************************/
14/12/25 10:35:25 INFO util.GSet: Computing capacity for map BlocksMap
14/12/25 10:35:25 INFO util.GSet: VM type       = 64-bit
14/12/25 10:35:25 INFO util.GSet: 2.0% max memory = 932184064
14/12/25 10:35:25 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/12/25 10:35:25 INFO util.GSet: recommended=2097152, actual=2097152
14/12/25 10:35:26 INFO namenode.FSNamesystem: fsOwner=hadoop
14/12/25 10:35:26 INFO namenode.FSNamesystem: supergroup=supergroup
14/12/25 10:35:26 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/12/25 10:35:26 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/12/25 10:35:26 WARN namenode.FSNamesystem: The dfs.support.append option is in your configuration, however append is not supported. This configuration option is no longer required to enable sync.
14/12/25 10:35:26 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/12/25 10:35:26 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/12/25 10:35:26 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/12/25 10:35:26 INFO common.Storage: Image file /usr/local/hadoop_tmp/dfs/name/current/fsimage of size bytes saved in 0 seconds.
14/12/25 10:35:26 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/local/hadoop_tmp/dfs/name/current/edits
14/12/25 10:35:26 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/local/hadoop_tmp/dfs/name/current/edits
14/12/25 10:35:26 INFO common.Storage: Storage directory /usr/local/hadoop_tmp/dfs/name has been successfully formatted.


After searching the internet for half a day, the best solution I found is this: as the warning itself says, in Hadoop 1.2 the dfs.support.append option is no longer used to enable sync, so drop it and rely on dfs.support.broken.append instead.


Remove the dfs.support.append property and keep only the following:

<property>
  <name>dfs.support.broken.append</name>
  <value>true</value>
</property>
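For the Fluentd/WebHDFS scenario, the relevant part of hdfs-site.xml then ends up with just the WebHDFS switch plus the broken-append switch. A sketch, assuming the dfs.webhdfs.enabled property from the original configuration is kept:

<!-- hdfs-site.xml: append support for WebHDFS clients on Hadoop 1.2.x -->
<!-- keep WebHDFS enabled so Fluentd can write over HTTP -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<!-- replaces the obsolete dfs.support.append switch -->
<property>
  <name>dfs.support.broken.append</name>
  <value>true</value>
</property>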




Then format the NameNode again; this time no warning appears:


[email protected] conf]$ hadoop namenode -format
14/12/25 10:47:18 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG:   Starting NameNode
STARTUP_MSG:   host = node1.test.com/172.16.41.151
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_67
************************************************************/
Re-format filesystem in /usr/local/hadoop_tmp/dfs/name ? (Y or N) Y
14/12/25 10:47:21 INFO util.GSet: Computing capacity for map BlocksMap
14/12/25 10:47:21 INFO util.GSet: VM type       = 64-bit
14/12/25 10:47:21 INFO util.GSet: 2.0% max memory = 932184064
14/12/25 10:47:21 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/12/25 10:47:21 INFO util.GSet: recommended=2097152, actual=2097152
14/12/25 10:47:21 INFO namenode.FSNamesystem: fsOwner=hadoop
14/12/25 10:47:21 INFO namenode.FSNamesystem: supergroup=supergroup
14/12/25 10:47:21 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/12/25 10:47:21 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/12/25 10:47:21 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/12/25 10:47:21 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/12/25 10:47:21 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/12/25 10:47:21 INFO common.Storage: Image file /usr/local/hadoop_tmp/dfs/name/current/fsimage of size bytes saved in 0 seconds.
14/12/25 10:47:21 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/local/hadoop_tmp/dfs/name/current/edits
14/12/25 10:47:21 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/local/hadoop_tmp/dfs/name/current/edits
14/12/25 10:47:21 INFO common.Storage: Storage directory /usr/local/hadoop_tmp/dfs/name has been successfully formatted.
14/12/25 10:47:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1.test.com/172.16.41.151
************************************************************/
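Once the DFS daemons are started again, it is worth double-checking that append actually works over WebHDFS before pointing Fluentd at it. A quick sketch using curl against the REST API; the /tmp/fluentd-test.log path and the local sample file are made-up names, and the URLs assume the default WebHDFS port 50070 on the NameNode:

# 1. Create an empty target file. The NameNode answers with a 307 redirect to a
#    DataNode; the second request against that Location URL writes the data.
curl -i -X PUT "http://node1.test.com:50070/webhdfs/v1/tmp/fluentd-test.log?op=CREATE&user.name=hadoop"
curl -i -X PUT -T /dev/null "<Location-URL-returned-above>"

# 2. Append to it. op=APPEND uses POST and follows the same two-step pattern.
echo "hello append" > /tmp/append-sample.txt
curl -i -X POST "http://node1.test.com:50070/webhdfs/v1/tmp/fluentd-test.log?op=APPEND&user.name=hadoop"
curl -i -X POST -T /tmp/append-sample.txt "<Location-URL-returned-above>"

# 3. Read the file back to confirm the appended content arrived.
curl -L "http://node1.test.com:50070/webhdfs/v1/tmp/fluentd-test.log?op=OPEN&user.name=hadoop"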


This article is from the "shine_forever" blog; please keep this source: http://shineforever.blog.51cto.com/1429204/1595776

