Hadoop Configuration Items Organized (core-site.xml)


These notes record Hadoop configuration items and their descriptions, organized by configuration file name; they will be supplemented and updated regularly as new configuration items appear.

The Hadoop 1.x configuration is used as the example.

core-site.xml

fs.default.name = hdfs://hadoopmaster:9000
    The URI and port of the NameNode (here the host hadoopmaster).

fs.checkpoint.dir = /opt/data/hadoop1/hdfs/namesecondary1
    Path where the namespace checkpoints (metadata backups) are stored; per the official documentation, checkpoints are read from this path and written to dfs.name.dir.

fs.checkpoint.period = 1800
    Interval between checkpoints, in seconds; only used by the SecondaryNameNode (SNN). The default is one hour.

fs.checkpoint.size = 33554432
    Edit-log size that triggers a checkpoint regardless of the interval; only used by the SNN. The default is 64 MB.
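
For orientation, every item below is declared the same way in core-site.xml: a <property> element holding a <name> and a <value>, inside the <configuration> root. A minimal sketch using the example values above (the hostname and numbers are this table's examples, not required values):

    <?xml version="1.0"?>
    <!-- core-site.xml: each item is a <property> with a <name> and a <value> -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoopmaster:9000</value> <!-- NameNode URI and port -->
      </property>
      <property>
        <name>fs.checkpoint.period</name>
        <value>1800</value> <!-- SNN checkpoint interval, seconds -->
      </property>
    </configuration>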
io.compression.codecs = org.apache.hadoop.io.compress.DefaultCodec, com.hadoop.compression.lzo.LzoCodec, com.hadoop.compression.lzo.LzopCodec, org.apache.hadoop.io.compress.GzipCodec, org.apache.hadoop.io.compress.BZip2Codec
    The compression codecs available to Hadoop, as one comma-separated value. Gzip and bzip2 support is built in; LZO requires installing hadoop-gpl-compression (or kevinweil's hadoop-lzo), and snappy must also be installed separately.

io.compression.codec.lzo.class = com.hadoop.compression.lzo.LzoCodec
    The codec class used for LZO compression.
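
In core-site.xml the codec list is a single comma-separated value on one line. A sketch listing only the built-in codecs, so it works without installing LZO or snappy:

    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
    </property>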
topology.script.file.name = /hadoop/bin/rackaware.py
    Location of the rack-awareness script.

topology.script.number.args = 1000
    Maximum number of arguments (host names or IP addresses) passed to the rack-awareness script per invocation.

fs.trash.interval = 10800
    HDFS trash setting, in minutes, which lets accidentally deleted files be recovered; 0 disables the trash. This item can be added without restarting Hadoop. See the sketch below.
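
Because the unit here is minutes (not seconds, unlike the checkpoint settings above), a sketch of the trash setting with the arithmetic spelled out:

    <property>
      <name>fs.trash.interval</name>
      <value>10800</value> <!-- minutes: 10800 min = 180 h = 7.5 days; 0 disables the trash -->
    </property>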
hadoop.http.filter.initializers = org.apache.hadoop.security.AuthenticationFilterInitializer
    Enables user authentication on the HTTP access ports of the JobTracker, TaskTracker, NameNode, DataNode, and so on; must be configured on all nodes.

hadoop.http.authentication.type = simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
    The authentication method. The default is simple; a custom handler class can also be specified. Configure on all nodes.
hadoop.http.authentication.token.validity = 36000
    How long, in seconds, an authentication token remains valid. Configure on all nodes.
hadoop.http.authentication.signature.secret
    The secret used to sign authentication tokens. Left unset by default, in which case Hadoop generates a signature secret automatically at startup. Configure on all nodes.
hadoop.http.authentication.cookie.domain = domain.tld
    The domain of the cookie used for HTTP authentication. Access by IP address will not work; a domain name must be configured, on all nodes.
hadoop.http.authentication.simple.anonymous.allowed = true | false
    Specific to simple authentication: whether anonymous access is allowed. The default is true.

hadoop.http.authentication.kerberos.principal = HTTP/_HOST@$LOCALHOST
    Specific to Kerberos authentication: the principal; the authenticated entity's service name must be HTTP.

hadoop.http.authentication.kerberos.keytab = /home/xianglei/hadoop.keytab
    Specific to Kerberos authentication: the location of the keytab file.
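
Putting the HTTP authentication items together, a sketch of a simple-auth setup with anonymous access turned off (domain.tld stands in for a real domain; the same values go on every node):

    <property>
      <name>hadoop.http.filter.initializers</name>
      <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
    </property>
    <property>
      <name>hadoop.http.authentication.type</name>
      <value>simple</value>
    </property>
    <property>
      <name>hadoop.http.authentication.token.validity</name>
      <value>36000</value> <!-- seconds -->
    </property>
    <property>
      <name>hadoop.http.authentication.simple.anonymous.allowed</name>
      <value>false</value>
    </property>
    <property>
      <name>hadoop.http.authentication.cookie.domain</name>
      <value>domain.tld</value> <!-- must be a domain name, not an IP -->
    </property>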
hadoop.security.authorization = true | false
    Enables service-level authorization for Hadoop, used together with hadoop-policy.xml. After changing the policy, run hadoop dfsadmin -refreshServiceAcl and hadoop mradmin -refreshServiceAcl for it to take effect.
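
A sketch of switching on service-level authorization; the ACLs themselves live in hadoop-policy.xml (not shown), and the refresh commands apply them without a restart:

    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
      <!-- after editing hadoop-policy.xml, run:
           hadoop dfsadmin -refreshServiceAcl
           hadoop mradmin -refreshServiceAcl -->
    </property>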
io.file.buffer.size = 131072
    Size, in bytes, of the read/write buffer used when processing sequence files.

hadoop.security.authentication = simple | kerberos
    Authentication for Hadoop itself (non-HTTP access): simple or kerberos.

hadoop.logfile.size = 1000000000
    Maximum log file size, in bytes; when exceeded, the log rolls over to a new file.

hadoop.logfile.count = 20
    Maximum number of log files kept.

io.bytes.per.checksum = 1024
    Number of bytes covered by each checksum; must not be greater than io.file.buffer.size.

io.skip.checksum.errors = true | false
    Whether to skip checksum errors while processing sequence files instead of throwing an exception. The default is false.
io.serializations = org.apache.hadoop.io.serializer.WritableSerialization
    The serialization codec classes.

io.seqfile.compress.blocksize = 1024000
    Minimum block size, in bytes, for block compression of sequence files.

webinterface.private.actions = true | false
    When true, the web UIs of the JobTracker and NameNode show operation links such as killing tasks and deleting files. The default is false.

These notes combine the Apache manual with the configuration actually used at my company; the parameter sizes should be adjusted to your own hardware. The values above assume a NameNode with 96 GB of memory and DataNodes with 32 GB. Filesystem implementations such as har, s3, and local are omitted because they are unlikely to be used.

My expertise is limited, so please forgive any errors of understanding or translation in the parameter descriptions.
