[Copyright: this article is original; please indicate the source when reproducing it.]
Article source: http://www.cnblogs.com/sdksdk0/p/5585355.html (author id: sdksdk0)
--------------------------------------------------
In one of my previous blogs I shared the basic configuration of Hadoop (http://blog.csdn.net/sdksdk0/article/details/51498775), but that setup is meant for beginners to learn and test with. Tod
1. Download hadoop-2.4.1.tar.gz from the official website; my version is hadoop-2.4.1, and it can also be downloaded from http://pan.baidu.com/s/1cLAKCQ.
2. Extract hadoop-2.4.1.tar.gz with tar -zxvf hadoop-2.4.1.tar.gz -C app/ (app is a directory I created).
3. After extraction, enter the directory: cd /app/
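A minimal sketch of these steps as shell commands, assuming the tarball sits in the current directory and app/ is the directory created above:

    # unpack the downloaded tarball into the app/ directory created beforehand
    mkdir -p app
    tar -zxvf hadoop-2.4.1.tar.gz -C app/
    # enter the unpacked distribution
    cd app/hadoop-2.4.1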
Installing Eclipse and configuring the Hadoop plugin under Ubuntu
I. Installing Eclipse
In Ubuntu desktop mode, click Ubuntu Software Center in the taskbar and search for Eclipse in the search bar.
Note: the installation process requires the user password to be entered.
II. Configuring Eclipse
After Eclipse is installed, enter whereis
On 64-bit Ubuntu, Hadoop needs to be compiled from source. Although Ubuntu is installed in a virtual machine and a 32-bit image could have been used, taking a little extra trouble now accumulates experience and makes things easier later. That said, as I write this sentence the compilation is not finished yet. In any case, following my usual habit, I will write as I go along:
1.
I. Install the JDK
1. Download path: http://www.oracle.com/technetwork/java/javase/downloads/index.html
2. Install to C:\Java\jdk1.8.0_121 (do not install into a directory path containing spaces, otherwise Hadoop will fail when looking for JAVA_HOME).
II. Configure the Java environment variables
1. JAVA_HOME: C:\Java\jdk1.8.0_121
2. CLASSPATH: .;%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar;
3. PATH: add %JAVA_HOME%\bin;%JAVA_HOME%\jre\bin; to the front
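A quick way to verify these settings from a new command prompt (a sketch, assuming the JDK path above):

    REM both commands should succeed if the variables are set correctly
    echo %JAVA_HOME%
    java -version
    javac -version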
function returns null when an error occurs while parsing the host. This causes a NullPointerException to be thrown when the host string is used later. The code that assigns the host is java.net.URI.Parser.parseHostname(int, int); take a look if you are interested.
The relevant comments are reproduced here for reference:
// hostname    = domainlabel [ "." ] | 1*( domainlabel "." ) toplabel [ "." ]
// domainlabel = alphanum | alphanum *( alphanum | "-" ) alphanum
// toplabel    = alpha | alpha *( alphanum | "-" ) alphanum
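As a small illustration of this failure mode: the underscore below is not allowed by the hostname grammar above, so host parsing fails and getHost() returns null rather than throwing (a sketch; the URL is made up):

    import java.net.URI;

    public class UriHostDemo {
        public static void main(String[] args) {
            // "_" is not legal in a hostname, so parseHostname()
            // fails and getHost() silently returns null
            URI uri = URI.create("http://bad_host:8080/path");
            String host = uri.getHost();   // null, no exception yet
            if (host == null) {
                // guard here; using host unchecked is what triggers the NPE later
                System.out.println("host could not be parsed: " + uri);
            }
        }
    }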
value of an environment variable; values nested beyond this maximum number of levels cannot be parsed. Then call the Configuration.main(null) method, which executes the procedure as follows: the static {...} block runs first, main loads the configuration files, and the addResource() method adds resources to the configuration. If it is an older version of Hadoop, it loads the configuration file
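A minimal sketch of that loading sequence with org.apache.hadoop.conf.Configuration (the extra file path is a placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class ConfDemo {
        public static void main(String[] args) {
            // the static block registers core-default.xml and core-site.xml
            Configuration conf = new Configuration();
            // addResource() appends one more configuration file (placeholder path)
            conf.addResource(new Path("/etc/hadoop/conf/my-site.xml"));
            System.out.println(conf.get("fs.defaultFS"));
        }
    }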
While installing the Hadoop cluster today, with all nodes configured, the following command was executed:
hadoop@name-node:~/hadoop$ bin/hadoop fs -ls
The NameNode reports the following error:
11/04/02 17:16:12 INFO security.Groups: Group mapping impl=org.apache.ha
--min-uid: the minimum Linux user ID that will be imported (inclusive). The default value is 500.
--max-uid: the maximum Linux user ID that will be imported (exclusive). The default value is 65334.
--min-gid: the minimum Linux group ID that will be imported (inclusive). The default value is 500.
--max-gid: the maximum Linux group ID that will be imported (exclusive). The default value is 65334.
--check-shell: a Boolean flag that checks whether the user's shell is set to /bin/false.
An example invocation with these flags is sketched below.
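These flags match Hue's useradmin_sync_with_unix management command, so I am assuming that is the tool being described; if so, a typical invocation looks roughly like this (the build/env path is the usual Hue layout, adjust as needed):

    # sync Linux users and groups in the given ID ranges into Hue (assumed command)
    build/env/bin/hue useradmin_sync_with_unix \
        --min-uid=500 --max-uid=65334 \
        --min-gid=500 --max-gid=65334 \
        --check-shell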
6. Ensure that the Hadoop user gro
1. Getting the default configuration
Configuring Hadoop mainly means configuring three files: core-site.xml, hdfs-site.xml, and mapred-site.xml. By default these files are empty, so it is difficult to know which of their settings can take effect, and the configurations found on the Internet may not be
Configuring hadoop-2.7.1 in pseudo-distributed mode requires configuring five files (minimal sketches of the XML files follow this list):
The first: vim hadoop-env.sh and set export JAVA_HOME=/usr/java/jdk1.7.0_80
The second: vim core-site.xml
The third: vim hdfs-site.xml
The fourth: vim mapred-site.xml (mv mapred-site.xml.template mapred-site.xml)
The fifth: vim yarn-site.xml
Then add Hadoop to an environment variable: vim /etc/prop
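Minimal sketches of the two most important XML files for a pseudo-distributed setup (the hostname, port, and temp directory are assumptions; adjust them to your machine):

    <!-- core-site.xml -->
    <configuration>
      <!-- where clients find the NameNode (placeholder host:port) -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <!-- base directory for Hadoop working files (placeholder path) -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
      </property>
    </configuration>

    <!-- hdfs-site.xml -->
    <configuration>
      <!-- a single-node setup can only hold one replica -->
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>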
Hadoop YARN supports scheduling two kinds of resources, memory and CPU (only memory is supported by default; if you also want to schedule CPU, you need to do some configuration yourself). This article describes how YARN schedules and isolates these resources. In YARN, resource management is done jointly by the ResourceManager and the NodeManagers, where the scheduler in the ResourceManager is responsible for
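As one example of that extra configuration, the per-node resources are declared in yarn-site.xml; the values below are arbitrary placeholders:

    <!-- yarn-site.xml: resources this NodeManager offers to the cluster -->
    <property>
      <!-- memory available for containers on this node, in MB -->
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value>
    </property>
    <property>
      <!-- virtual cores available for containers on this node -->
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>4</value>
    </property>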
Because the Hadoop cluster needed a graphical way to manage its data, I eventually found Hue, and while configuring Hue I discovered that HttpFS had to be configured as well, because Hue operates on the data in HDFS through HttpFS.
What does HttpFS do? It lets you manage files on HDFS in a browser, for example in Hue; it also provides a RESTful API for managing HDFS.
1. Cluster environment
Ubuntu-14.10
OpenJDK-7
hadoop-2.6.0 HA (dual NN)
hu
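The RESTful API is WebHDFS-compatible, so a quick smoke test against HttpFS' default port 14000 might look like this (hostname and user name are placeholders):

    # list the HDFS root directory through HttpFS
    curl "http://httpfs-host:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"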
5. Check the error
========================
1. Permission issues:
At startup, you may see a series of permission-denied errors.
Open hadoop-env.sh
Note the following:
These must be readable: HADOOP_CONF_DIR
These must be writable: HADOOP_LOG_DIR, HADOOP_SECURE_DN_LOG_DIR, HADOOP_PID_DIR, HADOOP_SECURE_DN_PID_DIR
If you do not have read permission on HADOOP_CONF_DIR, you cannot read the configuration
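A sketch of fixing those permissions, assuming common default locations (replace them with whatever your hadoop-env.sh actually points at):

    # HADOOP_CONF_DIR only needs to be readable by the hadoop user
    sudo chmod -R a+rX /etc/hadoop/conf
    # log and pid directories must be writable by the user running the daemons
    sudo chown -R hadoop:hadoop /var/log/hadoop /var/run/hadoop
    sudo chmod -R u+rwX /var/log/hadoop /var/run/hadoop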
About Ganglia configuration in Hadoop 2.0.0-cdh4.3.0:
Modify the configuration file $HADOOP_HOME/etc/hadoop/hadoop-metrics.properties and add the following content:
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
# Defa
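The truncated comment presumably continues into lines that point each sink at a gmond collector; a hedged sketch of those companion lines (gmond-host is a placeholder, 8649 is Ganglia's default port):

    namenode.sink.ganglia.servers=gmond-host:8649
    datanode.sink.ganglia.servers=gmond-host:8649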
Installation and configuration of a Hadoop cluster in a production environment + DNS + NFS. Environment:
Linux ISO: CentOS-6.0-i386-bin-DVD.iso (32-bit)
JDK version: 1.6.0_25-ea for Linux
Had..
The HDFS super-permission group is supergroup, and the user who starts Hadoop is usually the superuser.
dfs.data.dir
/opt/data1/hdfs/data,/opt/data2/hdfs/data,/opt/data3/hdfs/data,...
The actual DataNode data storage paths. Multiple hard disks can be listed, separated by commas (,).

dfs.datanode.data.dir.perm
755
The permissions of the local folders used by the DataNode. The default value is 755.

dfs.replication
3
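Expressed as hdfs-site.xml entries, the same three settings from the table above would look roughly like this (the mount points are the placeholder paths from the table):

    <property>
      <name>dfs.data.dir</name>
      <value>/opt/data1/hdfs/data,/opt/data2/hdfs/data,/opt/data3/hdfs/data</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir.perm</name>
      <value>755</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>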
Hadoop pseudo-distributed mode is configured as follows:
Go to /home/tom/hadoop/conf and edit the Hadoop configuration files.
Configure the hadoop-env.sh file:
export JAVA_HOME=/home/tom/jdk1.7.0_05
export PATH=$PATH:/home/tom/hadoop
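A quick sanity check after editing, using the paths assumed above:

    # JAVA_HOME should echo back, and the launcher script should print a version
    source /home/tom/hadoop/conf/hadoop-env.sh
    echo $JAVA_HOME
    /home/tom/hadoop/bin/hadoop version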