scp ~/.ssh/id_rsa.pub hadoop@*.*.*.*:/home/hadoop/id_rsa.pub
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
Test login:
ssh localhost or ssh *.*.*.*
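If the key-based login still prompts for a password, the usual culprit is permissions on the .ssh directory. A minimal check, assuming the hadoop user's home layout above:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# verify non-interactively; BatchMode fails instead of prompting if keys are not set up
ssh -o BatchMode=yes localhost true && echo "key login OK"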
K) Compiling
i. Download from the official website (I won't detail this step here).
ii. We install Hadoop under /usr/local/.
tar zxvf hadoop-0.20.2.tar.gz
ln -s
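The symlink target was cut off above; a common convention (the exact paths are an assumption, not from the text) is to point a stable name at the versioned directory so scripts never hard-code the version:

cd /usr/local
# let /usr/local/hadoop always refer to the currently installed release
ln -s hadoop-0.20.2 hadoop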
namenode -format -clusterid myhadoopcluster
(myhadoopcluster is just an arbitrary string.)
3. Delete the cached data before each NameNode format:
rm -rf /home/hadoop/dfs/data/*
rm -rf /home/hadoop/dfs/name/*
4. Start with start-all.sh; shut down with stop-all.sh.
Access methods:
http://hadoop1.localdomain:50070/dfsclusterhealth.jsp
http://hadoop1.localdomain:50070
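Taken together, a minimal reset-and-restart sequence would look like the sketch below; the paths and cluster id string come from the text above, while the hadoop command prefix is an assumption for this version:

stop-all.sh
# clear stale NameNode/DataNode state, otherwise the re-format can fail
rm -rf /home/hadoop/dfs/data/*
rm -rf /home/hadoop/dfs/name/*
hadoop namenode -format -clusterid myhadoopcluster
start-all.sh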
Property name        Property value               Scope of involvement
…                    /hadoop/tmp                  All nodes
fs.default.name      hdfs://192.168.1.20:9000     …
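In XML form, a sketch of how these rows would appear in core-site.xml; the property name for /hadoop/tmp was cut off in the source, so hadoop.tmp.dir is an assumption:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.20:9000</value>
  </property>
  <property>
    <!-- assumed property name; the original table row was truncated -->
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>
</configuration>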
5.3.2. hdfs-site.xml
# vi hdfs-site.xml
Property name                   Property value          Scope of involvement
dfs.namenode.http-address       192.168.1.20:50070      All nodes
dfs.namenode.http-bind-host     192.168.1.20            All nodes
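As XML, the two hdfs-site.xml properties from the table above:

<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>192.168.1.20:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-bind-host</name>
    <value>192.168.1.20</value>
  </property>
</configuration>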
1. With Hadoop installed in a virtual machine, Windows on the host cannot reach the Hadoop web page http://master:50070/ by hostname, and pinging master from Windows fails as well. The fix: add the hostname and IP address of the Hadoop machine to the Windows hosts file at C:\Windows\System32\drivers\etc\hosts. Note that special symbols in the entry will cause startup problems. Likewise, modify /etc/hosts on the Linux machine and add the mapping between IP address and hostname.
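A sketch of the entry to add on both sides (the IP and hostname are placeholders for your own values):

# C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux
192.168.1.20    master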
2). Download and unpack the stable Hadoop release and configure the Java environment (usually in ~/.bash_profile, for machine-security reasons);
3). Passwordless SSH. Here is a small trick: on hadoopserver1 run
ssh-keygen -t rsa -P ''; press ENTER
ssh-copy-id user@host;
then copy ~/.ssh/id_rsa
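Spelled out, the trick is just these commands (user and host are placeholders):

ssh-keygen -t rsa -P ''      # empty passphrase; accept the default key file with ENTER
ssh-copy-id user@host        # appends the public key to the remote authorized_keys
ssh user@host                # should now log in without a password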
First, download Hadoop from the official site (http://hadoop.apache.org) or the archive (https://archive.apache.org/dist/hadoop/common/hadoop-2.6.0), and unpack it as Administrator to D:\Hadoop\hadoop-2.6.0.
Second, download winutils: you also need winutils.exe, and it must match your Hadoop version.
http://localhost:50030 (MapReduce web page)
http://localhost:50070 (HDFS web page)
Validation examples: the MapReduce web page and the HDFS web page.
Problems encountered:
1. When starting Hadoop, it keeps saying JAVA_HOME is not configured. When I follow the tutorial and run bin/start-all.sh in the Hadoop folder from the shell, it always reports that JAVA_HOME is not set. But I had already set it.
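The usual fix for this error is to set JAVA_HOME explicitly in Hadoop's own environment file instead of relying on the shell profile, because the start scripts do not always source your profile. The JDK path below is a placeholder for your own installation:

# conf/hadoop-env.sh (etc/hadoop/hadoop-env.sh on Hadoop 2.x)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # placeholder path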
(1) Unzip hadoop-x.x.x to the chosen folder, e.g. /home/u.
(2) Change the settings in the configuration files:
# vim ~/hadoop-1.2.1/conf/core-site.xml
# vim ~/hadoop-1.2.1/conf/hdfs-site.xml
# vim ~/hadoop-1.2.1/conf/mapred-site.xml
(3) # ~/hadoop-1.2.1/bin/
the jps gadget. Note: the two screenshots above indicate success!
View the cluster status with "hadoop dfsadmin -report".
Viewing a cluster from a Web page
Visit the JobTracker: http://192.168.1.127:50030
Visit the NameNode: http://192.168.1.127:50070
Problems encountered and how to solve them
About "Warning: $HADOOP_HOME is deprecated"
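In Hadoop 1.x this warning is harmless. If you want to silence it, the commonly cited setting (a known hadoop-env.sh variable in that release line) is:

# conf/hadoop-env.sh
export HADOOP_HOME_WARN_SUPPRESS=1   # suppresses the "$HADOOP_HOME is deprecated" warning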
Verify that Hadoop is successfully installed. Open your browser and enter the URLs:
http://localhost:50070/ (HDFS web page)
http://localhost:50030/ (MapReduce web page)
If both pages load, Hadoop has been installed successfully. For Hadoop, the installation of both MapReduce and HDFS is required. However
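You can also check from the shell instead of a browser; a quick sketch, assuming the classic ports above:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/   # HDFS UI
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50030/   # MapReduce UI
# a 200 means the corresponding daemon's web interface is up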
(the process IDs will change!):
10969 DataNode
11745 NodeManager
11292 SecondaryNameNode
10708 NameNode
11483 ResourceManager
13096 Jps
N.B. the old JobTracker has been replaced by the ResourceManager.
Access the web interfaces:
Cluster status: http://localhost:8088
HDFS status: http://localhost:50070
Secondary NameNode status: http://localhost:50090
Test Hadoop: hadoop jar ~/hadoop/share/
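The jar path above is truncated; a common smoke test with the bundled examples jar in a Hadoop 2.x layout (the exact path under ~/hadoop/share is an assumption) is the pi estimator:

# run the pi example with 2 map tasks and 5 samples per map
hadoop jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 5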
VII. Configuring the masters and slaves files
Configure the hostname of the master according to your actual setup; in this experiment the hostname of the master node is master, so fill that into the masters file. In the same vein, fill in the slaves file.
VIII. Replicating Hadoop to each node
Replicate Hadoop to the Node1 node, and then to the Node2 node. In this way, the nodes Node1 and Node2 a
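A sketch of the steps in this section; the hostnames come from the text, while the install path under ~/hadoop is an assumption:

echo master > ~/hadoop/conf/masters          # masters file: the master node's hostname
printf 'node1\nnode2\n' > ~/hadoop/conf/slaves   # slaves file: one worker hostname per line
scp -r ~/hadoop node1:~/                     # replicate Hadoop to Node1
scp -r ~/hadoop node2:~/                     # replicate Hadoop to Node2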
5. conf/hdfs-site.xml:
This is the HDFS configuration in Hadoop. The default replication factor is 3; in a standalone (single-machine) Hadoop setup, you need to change it to 1.
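The corresponding hdfs-site.xml entry for a single-node setup:

<configuration>
  <property>
    <!-- default is 3; a single machine cannot hold 3 replicas -->
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>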
6. conf/mapred-site.xml:
This is the MapReduce configuration file in Hadoop, which configures the address and port of the JobTracker.
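A classic (pre-YARN) mapred-site.xml sketch; localhost:9001 is the conventional single-node value, not taken from the text:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>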
Note that if the installed version is earlier than 0.20,
This article mainly analyzes important hadoop configuration files.
Wang Jialin's complete release directory of "cloud computing distributed Big Data hadoop hands-on path"
Hadoop exchange group for hands-on cloud computing and distributed big data techniques: 312494188. Cloud computing practice material is released in the group every day. Welcome to join us!
Wh