5. Format HDFS. If this error occurs: ERROR namenode.NameNode: java.io.IOException: Cannot create directory /home/xxx0624/hadoop/hdfs/name/current, then set the Hadoop directory permissions so the current user can write to it: sudo chmod -R a+w /home/xxx0624/hadoop, granting write access to the Hadoop directory.
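A minimal sketch of that fix, assuming the directory from the error message above (the exact path depends on your hdfs-site.xml settings):
# grant the current user write access to the Hadoop directory (path is an assumption)
sudo chmod -R a+w /home/xxx0624/hadoop
# then re-run the format
bin/hadoop namenode -format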
Preface: If you only want to try Hadoop the way you would off-the-shelf software, it is recommended to use Quickhadoop; setting that up from the official documentation is essentially foolproof, so it is not introduced here. This article focuses on deploying distributed Hadoop yourself.
1. Modify the machine name
# vi /etc/sysconfig/network
Change the HOSTNAME=*** line to an appropriate name; the author's two machines use HOSTNAME=HADOOP0
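For example, /etc/sysconfig/network on the first machine might end up looking like this (HADOOP0 is the name used above; the NETWORKING line is the file's standard default):
NETWORKING=yes
HOSTNAME=HADOOP0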
sudo hdfs (Enter), su - hdfs (Enter), /etc/init.d/ha (Enter), and /etc/init.d/hadoop-0.20-namenode start (Enter).
The full name of fsck.
The full name is: File System Check.
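As a quick illustration of how fsck is typically invoked (the target path / is just an example):
# check the health of the whole HDFS namespace
hadoop fsck /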
How to check if the Namenode is running properly.
If you want to check whether the NameNode is working correctly, use the command /etc/init.d/hadoop-0.20-namenode status, or simply jps.
The role of the mapred.job.tracker parameter. It lets you tell MapReduce clients and TaskTrackers where the JobTracker runs, specified as host:port, as follows:
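A minimal mapred-site.xml entry for this parameter, reusing the localhost:9001 value that appears in the pseudo-distributed configuration later in this article (any other value would be an assumption):
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>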
When starting Hadoop, a warning like the following may appear: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
This is because the native Hadoop library shipped in the installation package, $HADOOP_HOME/lib/native/libhadoop.so.1.0.0, was compiled on a 32-bit machine. You can handle it in one of the following ways: 1. Ignore it, because it is only a WARN and does not affect Hadoop's functionality. 2. If you worry that it will cause instability, download the
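To confirm whether the bundled library really is 32-bit before deciding between the options above, a quick check with the standard file utility can help (the path matches the one mentioned above):
# inspect the architecture of the bundled native library
file $HADOOP_HOME/lib/native/libhadoop.so.1.0.0
# 'ELF 32-bit' in the output confirms the 32-bit build described above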
If messages such as "successfully formatted" appear, the format succeeded. Note: each format generates a new ID (namespaceID) for the NameNode. After formatting multiple times, if the DataNode's corresponding ID is not updated to match, running WordCount will fail when uploading files to input.
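A common way out of that mismatch, sketched here under the assumption that the DataNode data directory sits next to the name directory used earlier (check your dfs.data.dir setting before deleting anything):
# stop the cluster, remove the stale DataNode data directory, then reformat and restart
bin/stop-all.sh
rm -rf /home/xxx0624/hadoop/hdfs/data   # path is an assumption; use your actual dfs.data.dir
bin/hadoop namenode -format
bin/start-all.sh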
Start HDFS
start-all.sh
Show processes
jps
Enter http://localhost:50070/ in the browser; the following page appears.
Enter http://localhost:8088/; the following page appears.
This indicates that Hadoop has started successfully.
ssh localhost. Keep pressing Enter through the prompts, and SSH will be able to log in automatically afterwards.
- Configure Hadoop. Go to ~/usr/hadoop-1.2.1/conf:
core-site.xml: localhost:9000
mapred-site.xml: localhost:9001
hdfs-site.xml: 1
hadoop-env.sh: export JAVA_HOME=/usr/local/lib/jdk1.7.0_71
When configured as single-machine pseudo-distributed, the host name must be localhost and cannot be masterpc, because there is no MASTERPC loopback address; ifconfig only shows 127.0
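Spelled out, the pseudo-distributed configuration summarized above corresponds roughly to the following standard Hadoop 1.x property entries; only the values shown above (localhost:9000, localhost:9001, replication 1, and the JDK path) come from the article, the property names are the usual ones for this version:
core-site.xml:
  <property><name>fs.default.name</name><value>hdfs://localhost:9000</value></property>
mapred-site.xml:
  <property><name>mapred.job.tracker</name><value>localhost:9001</value></property>
hdfs-site.xml:
  <property><name>dfs.replication</name><value>1</value></property>
hadoop-env.sh:
  export JAVA_HOME=/usr/local/lib/jdk1.7.0_71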
Chapter 1 Meet Hadoop
Data is getting large, but transfer speed has not improved much; it takes a long time to read all the data from a single disk, and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. The first problem to solve is hardware failure. The second problem is that most analysis tasks need to be able to combine data stored on different hardware.
Chapter 3 The Hadoop Distributed Filesystem
Filesystems that manage storage
Objective
At the beginning of 2014, we switched the online Hadoop 1 cluster to the Hadoop 2.2.0 stable release and deployed security authentication for Hadoop at the same time. This article mainly introduces the implementation of the security authentication scheme on Hadoop 2.2.0 and the corresponding solutions.
Background: cluster securi
libexec/hadoop-config.sh". The Hadoop daemon logs are recorded in the $HADOOP_LOG_DIR folder; the default is $HADOOP_HOME/logs.
3) View the NameNode web interface. The default is:
- NameNode - http://localhost:50070/
4) Specify the HDFS folder used to execute MapReduce tasks
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/
5) Copy the input files into the distributed filesystem
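Following the same pattern as the surrounding steps, the copy usually looks something like this (the etc/hadoop source directory and the input target here are assumptions, not taken from the article):
$ bin/hdfs dfs -put etc/hadoop input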
Fill in the paths of the decompressed JDK folder and Hadoop folder respectively.
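For instance, the corresponding lines in /etc/profile might look like the following; the install paths are assumptions (the JDK path matches the one used elsewhere in this article) and should match where you actually unpacked the JDK and Hadoop:
export JAVA_HOME=/usr/local/lib/jdk1.7.0_71     # assumed JDK path
export HADOOP_HOME=/usr/hadoop-1.2.1            # assumed Hadoop path
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin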
After saving, enter
source /etc/profile
Make the profile take effect immediately.
Next, type hadoop and press Enter. The corresponding command usage information is displayed.
Note that the hadoop startup script is located in Hadoop's bin folder.
Starting the cluster
sbin/start-all.sh
4 Viewing the cluster processes
jps
5 Run Notepad as Administrator
6 Edit the local hosts file, then save and close it.
7 Finally, it is time to verify that Hadoop is installed successfully. On Windows, you can access the WebUI through http://djt002:50070 to view the status of the NameNode, the cluster, and the file system. This is the web page for HDFS.
http://djt002:50070
8 Create a new Djt.txt, used fo
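Steps 5 and 6 amount to adding a name-to-IP mapping for the master so that http://djt002:50070 resolves from Windows; a sketch of the hosts entry, where the IP address is an assumption and must be replaced with the real address of the djt002 node:
# C:\Windows\System32\drivers\etc\hosts  (edit with Notepad run as Administrator)
192.168.1.100   djt002     # assumed IP; use the actual address of djt002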
Wang Jialin's in-depth, case-driven practice of cloud computing distributed Big Data Hadoop, July 6-7 in Shanghai
Wang Jialin Lecture 4, Hadoop graphic and text training course: Building a real hands-on Hadoop distributed cluster environment. The specific solution steps are as follows:
Step 1: Check the Hadoop logs to see the cause of the error;
Step 2: Stop the cluster;
Step 3: Solve the problem based on the reasons indicated in the log. We need to clear th
the NameNode, SecondaryNameNode, and JobTracker processes. The slave machines must include the DataNode and TaskTracker processes to ensure a successful startup.
To stop, run $HADOOP_HOME/bin/stop-all.sh
Hadoop query interfaces
http://<master machine IP address>:50070/dfshealth.jsp
http://<master machine IP address>:50030/jobtracker.jsp
Hadoop common commands
hadoop dfs -ls views the contents under /usr/
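A few other everyday commands in the same family, shown as a sketch (the file names and paths are placeholders, not from the article):
hadoop dfs -put localfile.txt /usr/      # upload a local file to HDFS
hadoop dfs -get /usr/localfile.txt .     # download a file from HDFS
hadoop dfs -cat /usr/localfile.txt       # print a file's contents
hadoop dfs -rm /usr/localfile.txt        # delete a file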
three configuration files are deployed on each slave node respectively;
7. Format a new Distributed File system:
$ bin/hadoop namenode -format
8. Run Hadoop
8.1 Start the Hadoop background daemons
$ bin/start-all.sh
After startup, you can view the NameNode and JobTracker status from the following web pages, at which point you can see the number of "Live Nodes" from
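The same live-node count can also be checked from the command line; a sketch using the standard dfsadmin report:
$ bin/hadoop dfsadmin -report    # prints capacity and the list of live/dead datanodes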
correct.
Web mode: http://hd203:50070/dfshealth.jsp
Note that the local PC's hosts file also needs to be configured:
192.168.0.203 hd203
192.168.0.204 hd204
192.168.0.205 hd205
192.168.0.206 hd206
192.168.0.202 hd202
From the web interface you can view the cluster status, job status, and so on. With that, the Hadoop installation is complete.
6 Install ZooKeeper (hd203)
tar zxvf zookeeper-3.3.3-cdh3u0.tar.gz -C /home/cbcloud
On hd204-hd206:
mkdir
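The article stops just as the ZooKeeper setup begins, so as a hedged sketch only: a typical three-node zoo.cfg for hd204-hd206 might look like the following; every value here (ports, dataDir, and which hosts join the ensemble) is an assumption rather than something stated above:
# conf/zoo.cfg (illustrative values only)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/cbcloud/zookeeperdata       # assumed data directory
clientPort=2181
server.1=hd204:2888:3888
server.2=hd205:2888:3888
server.3=hd206:2888:3888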
properly:
21472
30256 Jps
29793 DataNode
29970 SecondaryNameNode
29638 NameNode
30070 ResourceManager
30231 NodeManager
8. Open the http://localhost:50070/explorer.html web page to view the Hadoop directory structure, indicating a successful installation.
IV. Installation of Spark
1. Unzip the Spark compressed package
tar xvzf spark.1.6.tar.gz
2. Add environment variables
vi ~/.bashrc
SCALA_HOME=/users/ysisl/app/spark/scal
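Continuing the .bashrc fragment above as a sketch: the article only shows the start of SCALA_HOME, so the completed path and the additional variables below are assumptions that would need to match your actual Scala and Spark install locations:
# ~/.bashrc (illustrative; adjust paths to your installation)
export SCALA_HOME=/users/ysisl/app/spark/scala-2.10.4     # assumed completion of the truncated path above
export SPARK_HOME=/users/ysisl/app/spark/spark-1.6.0      # assumed Spark install directory
export PATH=$PATH:$SCALA_HOME/bin:$SPARK_HOME/bin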