Copy ./etc/hadoop/*.xml into the HDFS input directory (bin/hdfs dfs -put ./etc/hadoop/*.xml input). Run this step on its own; if it errors, pay attention to the logs. A common cause is a clusterID incompatibility between the NameNode and the DataNodes. In that case, shut the cluster down again, remove the old data files, and re-format the NameNode.
Note the changes needed in .bashrc, otherwise you will get errors. The same settings are also best placed in hadoop-env.sh (I have not yet added the snippet below to hadoop-env.sh).
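As a minimal sketch of what these environment settings typically look like in .bashrc (the install paths are assumptions; adjust them to your machine):

export JAVA_HOME=/usr/local/jdk        # assumed JDK location
export HADOOP_HOME=/usr/local/hadoop   # assumed Hadoop location
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

Run source ~/.bashrc afterwards so the current shell picks up the variables.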
Start all the daemons:
[Dbrg@dbrg-1:hadoop]$ bin/start-all.sh
Similarly, if you want to stop Hadoop:
[Dbrg@dbrg-1:hadoop]$ bin/stop-all.sh
HDFS operations: run the hadoop command in the bin/ directory with no arguments to see all the operations Hadoop supports and their usage. Here are a few simple examples.
Create a directory (the directory name here is an example):
[Dbrg@dbrg-1:hadoop]$ bin/hadoop dfs -mkdir testdir
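To confirm the directory exists, a listing in the same style can be used (a sketch; the output will vary with your cluster):

[Dbrg@dbrg-1:hadoop]$ bin/hadoop dfs -ls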
* start-all.sh starts all Hadoop daemons
* stop-all.sh stops all Hadoop daemons
* start-mapred.sh starts the Map/Reduce daemons: JobTracker and TaskTracker
* stop-mapred.sh stops the Map/Reduce daemons
* start-dfs.sh starts the Hadoop DFS daemons: NameNode and DataNode
* stop-dfs.sh stops the DFS daemons
This article mainly analyzes the important Hadoop configuration files.
How to Use HDFS?
HDFS can be used directly after Hadoop is installed. There are two methods.
One is the command line:
There is a hadoop command in Hadoop's bin directory. It is actually Hadoop's management command, and we can use it to operate on HDFS.
hadoop fs
hadoop
Running hadoop on its own prints usage information for all of its subcommands, and hadoop fs is the entry point for file system operations.
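For example, a minimal round trip of copying a local file into HDFS and reading it back (the file and path names are illustrative):

hadoop fs -put /tmp/hello.txt /user/hadoop/hello.txt   # local -> HDFS
hadoop fs -cat /user/hadoop/hello.txt                  # print the HDFS copy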
D) Disable the firewall. First check its status: service iptables status
service iptables stop
E) Check whether the firewall starts automatically: chkconfig --list | grep iptables
Disable automatic start of the firewall: chkconfig iptables off
Verification: chkconfig --list | grep iptables
F) SSH (Secure Shell) password-free login
Verification: ssh localhost
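One common way to set this up, sketched here under the assumption that an RSA key with an empty passphrase is acceptable:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # generate a key pair without a passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys # authorize the key for this account
chmod 600 ~/.ssh/authorized_keys                # sshd requires strict permissions

After this, ssh localhost should log in without prompting for a password.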
G) Install the JDK
* Open the directory: cd /usr/local
* Then delete all existing files there: rm -rf * (be careful: this removes everything in the current directory)
* Copy the JDK installation files into this directory
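A sketch of the remaining JDK steps (the archive name and extracted directory are assumptions; substitute the file you actually downloaded):

tar -zxvf jdk-7u45-linux-x64.tar.gz          # hypothetical archive name
export JAVA_HOME=/usr/local/jdk1.7.0_45      # assumed extracted directory
export PATH=$JAVA_HOME/bin:$PATH
java -version                                # verify the installation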
Preface: if you would rather use off-the-shelf software, QuickHadoop is recommended; following its official documentation is nearly foolproof, so it is not covered here. This article focuses on deploying distributed Hadoop yourself.
1. Modify the machine name
# vi /etc/sysconfig/network
Change the HOSTNAME= line to an appropriate name; for the author's two machines, HOSTNAME=HADOOP0 was used.
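Note that editing /etc/sysconfig/network only takes effect on the next boot; to apply the name immediately you can also run (using the name chosen above):

hostname HADOOP0   # set the name for the running system
hostname           # verify it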
The full name matters: ls is short for "list". When we learn something, we should first know what it is; once you know what it is, your mind connects it to many other things and you learn faster.
1. ls -a lists all files in the directory, including hidden files whose names begin with "." (in Linux, a file whose name starts with a dot is hidden; "." refers to the current directory and ".." to the parent directory).
2. ls -l lists files in long (detailed) format.
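As an illustration, ls -l output looks roughly like this (the names, sizes, and dates below are made up):

$ ls -l
-rw-r--r-- 1 hadoop hadoop 1024 Feb 20 14:09 notes.txt
drwxr-xr-x 2 hadoop hadoop 4096 Feb 20 14:10 conf

The columns are: permissions, link count, owner, group, size, modification time, and name.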
Its function is to periodically merge the NameNode's namespace image file with the edit log, to prevent the log file from becoming too large. This is described further in the narrative below. The merged namespace image is also saved on the secondary metadata node, so it can be used for recovery when the metadata node fails.
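In Hadoop 1.x this checkpoint behaviour is tunable in the XML configuration; a sketch of the relevant properties (shown with what I believe are the default values):

<property>
  <name>fs.checkpoint.period</name>
  <value>3600</value>   <!-- seconds between checkpoints -->
</property>
<property>
  <name>fs.checkpoint.size</name>
  <value>67108864</value>   <!-- also checkpoint once the edit log reaches 64 MB -->
</property>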
Basic file commands
HDFS file system commands take the form:
hadoop fs -cmd <args>
where cmd is a specific file command and <args> is a variable set of arguments.
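For instance, substituting -ls for -cmd (a sketch; the path is illustrative):

hadoop fs -ls /   # list the HDFS root directory

Here -ls is the cmd and / is its single argument.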
The ls command is one of the most common commands in Linux. Like the dir command in DOS, ls is used to list the files in a directory. Let's take a look at how ls is used.
12/02/20 14:09:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/02/20 14:09:57 INFO namenode.NameNode: Caching file names occurring more than 10 times
12/02/20 14:09:57 INFO common.Storage: Image file of size 116 saved in 0 seconds.
12/02/20 14:09:57 INFO common.Storage: Storage directory /data0/hadoop-name/namenode has been successfully formatted.
Test submitting a job; following the documentation, the user is called Joe.
[root@com2 mr]# useradd Joe
[root@com2 mr]# passwd Joe
[root@com2 mr]# su Joe
[joe@com2 mr]$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
[joe@com2 mr]$ sudo -u hdfs hadoop fs -mkdir /user/joe
[joe@com2 mr]$ sudo -u hdfs hadoop fs -chown Joe /user/joe
[joe@com2 mr]$ h
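The natural next step is submitting an example job as Joe; a hedged sketch (the examples jar path follows common CDH packaging but is an assumption here, and input must already exist in HDFS):

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount input output
hadoop fs -cat output/part-r-00000   # inspect the word counts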
If messages such as "successfully formatted" appear, the format succeeded. Note: each format generates a new ID for the NameNode. After formatting several times, if the corresponding ID on the DataNode has not been updated to match, running WordCount will fail when uploading files to input.
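A common recovery sequence for this ID mismatch, sketched here (the data directory is an assumption based on the default hadoop.tmp.dir; check dfs.data.dir in your configuration for the real path):

stop-all.sh
rm -rf /tmp/hadoop-*/dfs/data   # assumed DataNode storage location
hadoop namenode -format
start-all.sh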
Start HDFS:
start-all.sh
Show the running Java processes:
jps
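If everything started, jps should print something like the following (process IDs will differ, and the daemon list depends on the Hadoop version and mode):

$ jps
2387 NameNode
2502 DataNode
2690 SecondaryNameNode
2855 ResourceManager
2961 NodeManager
3050 Jps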
Enter http://localhost:50070/ in the browser; the NameNode status page should appear.
Enter http://localhost:8088/; the ResourceManager page should appear.
This indicates that the pseudo-distributed installation succeeded.
3. Start the cluster: sbin/start-all.sh
4. View the cluster processes: jps
5. Run Notepad as administrator.
6. Edit the local hosts file (see the sketch after this list), then save and close.
7. Finally, it is time to verify that Hadoop is installed successfully. On Windows, you can access the WebUI through http://djt002:50070 to view the status of the NameNode, the cluster, and the file system. This is the web page for HDFS: http://djt002:50070
8. Create a new djt.txt for testing, and test it with the WordCount program.
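Step 6 implies mapping the name djt002 in the Windows hosts file (C:\Windows\System32\drivers\etc\hosts); a sketch of the entry, where the IP address is an assumption to replace with your node's actual address:

192.168.1.101 djt002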
Chapter 1 Meet Hadoop
Data is large, but transfer speeds have not improved proportionally: it takes a long time to read all the data from a single disk, and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once. The first problem to solve is hardware failure. The second problem is that most analysis tasks need to be able to combine data stored on different hardware.
Chapter 3 The Hadoop Distributed Filesystem
Filesystems that manage storage across a network of machines are called distributed filesystems.
Put it here: the directory I created is /usr/local/hadoop. Copy the entire hadoop directory into it, so that it looks like this:
[root@hadoopName hadoop]# cd /usr/local/hadoop
[root@hadoopName hadoop]# ls
Linux users should be familiar with these seven quirky tips for the 'ls' command.
In the previous two articles in our series we covered the vast majority of the 'ls' command's functionality; this article is the last part of the 'ls command' series. If you have not read the other two articles in this series, you can visit the following links.
15 Basic 'ls' Command Examples in Linux