1. Start Hadoop, then run netstat -nltp | grep 50070. If the process is not found, the web interface port has not been configured; modify it by adding the configuration to hdfs-site.xml (a sketch follows below). If you use hostname:port, first check that the hostname's IP in /etc/hosts matches your current IP, then restart Hadoop.
2. Now, from inside the virtual machine, try to access hadoop002:
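A minimal sketch of that hdfs-site.xml entry, assuming Hadoop 2.x (where the property is dfs.namenode.http-address; hadoop002 and port 50070 are taken from this article's setup):
<property>
  <name>dfs.namenode.http-address</name>
  <value>hadoop002:50070</value>
</property>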
Build error: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-hdfs: An Ant BuildException has occured: input file /usr/local/hadoop-2.6.0-stable/hadoop-2.6.0-src/hadoop-hdfs-project/hadoop-hdfs/target/findbugsXml.xml
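A commonly reported workaround, assuming the failure comes from the FindBugs step of the docs/site profile: install FindBugs and point FINDBUGS_HOME at it before building, or build the distribution without the docs profile (the mvn command below is the standard one from Hadoop's BUILDING.txt; the FindBugs path is an assumption):
$ export FINDBUGS_HOME=/usr/local/findbugs   # assumption: adjust to your FindBugs install
$ mvn package -Pdist,native -DskipTests -Dtar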
slaves (configure the DataNode hostnames; remove localhost, otherwise the master itself will also act as a DataNode)
sudo vi /etc/profile to configure HADOOP_HOME (a sketch follows below)
hadoop namenode -format
Start Hadoop: sbin/start-dfs.sh (you may need to type "yes" to continue; wait for the $ prompt to return first), then sbin/start-yarn.sh
Verify startup: /usr/jdkxxx/bin/jps (lists Java processes and their state)
Access the web interface at http://10.0.0.11:
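A minimal /etc/profile sketch for the HADOOP_HOME step (the install path /usr/local/hadoop is an assumption; adjust it to your layout):
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Reload it afterwards with: source /etc/profile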
[hadoop@linux-node1 .ssh]$ /home/hadoop/hadoop/sbin/start-yarn.sh
starting yarn daemons
# View processes on the NameNode node
ps aux | grep --color resourcemanager
# View processes on DataNode nodes
ps aux | grep --color nodemanager
Note: start-dfs.sh and start-yarn.sh can be replaced by start-all.sh
/home/hadoop/
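For the jps check, a healthy master on a small cluster typically lists daemons like the following (an illustrative sketch, not captured output; PIDs will differ):
$ jps
2785 NameNode
2994 SecondaryNameNode
3151 ResourceManager
3358 Jps
On DataNode machines you would expect DataNode and NodeManager instead.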
is required. The dfs.replication value is set to 1; no other operations are required.
Test:
Go to the $HADOOP_HOME directory and run the following commands to test whether the installation succeeded.
$ mkdir input
$ cp conf/*.xml input
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
$ cat output/*
Output:
1 dfsadmin
After the above steps, if there is no error, do the same on each (node1, node2) machine.
After the node is started, the DataNode may fail to connect: http://node1:50070/dfshealth.jsp shows DFS Used as 100% and the number of live nodes as zero. In that case, check whether the /etc/hosts file on the master and slaves contains entries mapping localhost or the hostname to 127.0.0.1. If so, delete them and add your actual IP address and hostname pair (do not use localhost).
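For illustration, a corrected /etc/hosts might look like this (the addresses are hypothetical; use your machines' real IPs):
192.168.1.11 node1
192.168.1.12 node2
Make sure no remaining line maps node1 or the local hostname to 127.0.0.1.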
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
</configuration>
NameNode and JobTracker status can be viewed via web pages after launch:
NameNode - http://localhost:50070/
JobTracker - http://localhost:50030/
Test: copying files to the distributed file system
$ bin/hadoop
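The command above is truncated; at this point older Hadoop tutorials usually copy the conf directory into HDFS, along the lines of this sketch:
$ bin/hadoop fs -put conf input
$ bin/hadoop fs -ls input   # verify the files arrived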
# After execution, sometimes the TaskTracker and DataNode are still running, so stop them:
bin/hadoop-daemon.sh stop tasktracker
bin/hadoop-daemon.sh stop datanode
# Delete the files under /tmp as the hadoop user, otherwise the saved files will lack the right permissions:
su - hadoop
bin/hadoop namenode -format
bin/start-dfs.sh
bin/start-mapred.sh
bin/
AvatarDataNode data nodes.
2. Start the AvatarNode (Primary) from the Hadoop root directory on the primary node:
bin/hadoop org.apache.hadoop.hdfs.server.namenode.AvatarNode -zero
3. Start the AvatarNode (Standby) from the Hadoop root directory on the standby node:
bin/hadoop org.apache.hadoop.hdfs.server.namenode.AvatarNode -one -standby
4. Start the AvatarDataNode in the Hadoop root directory
On gdy195
[root@gdy195 /]# chown hduser.hduser /usr/gd/ -R
[root@gdy195 /]# ll /usr/gd/
The pseudo-distributed mode of Hadoop has now been fully configured.
Start the Hadoop pseudo-distributed mode
On the gdy192 host, log in as root again.
Switch to hduser
Format Hadoop's file system, HDFS:
[hduser@gdy192 ~]$ hadoop namenode -format
Start
-2.6.0/logs/hadoop-hadoop-datanode-ocean-lab.ocean.org.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is a5:26:42:a0:5f:da:a2:88:52:04:9c:7f:8d:6a:98:9b.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting seco
mapred-site.xml
Create the file in that directory and fill in the content above; then configure yarn-site.xml (a sketch of the usual minimal contents follows).
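The exact file contents are elided here; as a sketch, the usual minimal pseudo-distributed settings for Hadoop 2.x are:
mapred-site.xml:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>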
Start Hadoop
First execute: hadoop namenode -format
Then start HDFS with start-dfs.sh. If a Mac shows 'localhost port 22: connect refused', open Settings > Sharing, tick Remote Login, and allow access for the current user.
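The same Mac setting can be toggled from a terminal; a sketch (requires sudo):
$ sudo systemsetup -setremotelogin on
$ ssh localhost   # should now connect instead of refusing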
You will be asked to enter the password 3 times after executing start-dfs.sh.
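To avoid the repeated password prompts, the standard fix is key-based SSH for the current user; a sketch:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys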
Then: start-
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
</configuration>
If you need to switch back to non-distributed mode, delete the added configuration.
Execute the following command to format the NameNode (run it under the hadoop2 directory):
./bin/hdfs namenode -format
Seeing "successfully formatted" means it succeeded.
Execute the following command to start the daem
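The last command is cut off; in Hadoop 2.x the step that normally follows formatting is starting the HDFS daemons, as in this sketch:
$ ./sbin/start-dfs.sh
$ jps   # NameNode, DataNode and SecondaryNameNode should appear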
Hadoop Foundation -- Hadoop in Action (VI) -- Hadoop Management Tools -- Cloudera Manager -- CDH Introduction
We already covered CDH in the previous article; next we will install CDH 5.8 for further study. CDH 5.8 is a relatively new Hadoop release based on Hadoop 2.x, and it already contains a number of
temporarily ignores the RPC server. The following attributes define each HTTP server:
mapred.job.tracker.http.address: the HTTP server address and port of the JobTracker; default 0.0.0.0:50030.
mapred.task.tracker.http.address: the HTTP server address and port of the TaskTracker; default 0.0.0.0:50060.
dfs.http.address: the HTTP server address and port of the NameNode; default 0.0.0.0:50070.
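These defaults are easy to check from a shell; a sketch using the ports listed above:
$ netstat -nltp | grep -E '50030|50060|50070'   # JobTracker, TaskTracker, NameNode web ports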