EOF

# use the following hdfs-site.xml
cat > hdfs-site.xml <<EOF
<configuration>
  <property><name>dfs.replication</name><value>1</value></property>
</configuration>
EOF

# use the following mapred-site.xml
cat > mapred-site.xml <<EOF
<configuration>
  <property><name>mapred.job.tracker</name><value>$ip:9001</value></property>
</configuration>
EOF
}

# configure ssh password-free login
function PassphraselessSSH() {
  # generate a private key, skipping this step if one already exists
  [ ! -f ~/.ssh/id_dsa ] && ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
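The function body is cut off at this point. As a hedged sketch, the usual remaining steps of a passphrase-free SSH setup would look like the following (assuming the standard authorized_keys approach; the original lines are not recoverable):

  # authorize the generated key for password-free login and restrict its permissions
  cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
  chmod 0600 ~/.ssh/authorized_keys
}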
follows:
A. Enter the conf folder and modify the following files.
Add the following to hadoop-env.sh:
export JAVA_HOME=(Java installation directory)
Modify the contents of core-site.xml as follows.
Modify the contents of hdfs-site.xml as follows (dfs.replication defaults to 3; if it is left unchanged and there are fewer than three DataNodes, an error is reported).
Modify the contents of mapred-site.xml as follows.
B. Format the HDFS file system.
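The core-site.xml contents referenced above did not survive extraction. As a hedged sketch, a typical Hadoop 1.x pseudo-distributed core-site.xml looks like this (the host, port, and tmp directory are illustrative assumptions, not the article's original values):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>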
Running hdfs dfsadmin -report and seeing "Live datanodes (2):" indicates that the cluster was established successfully. After a successful start you can open the web interface http://192.168.1.151:50070 to view NameNode and DataNode information, and you can browse the files in HDFS online. Start YARN to see how tasks run through the web interface: http://192.168.1.151:8088/cluster
Commands to manipulate HDFS:
hadoop fs
This command lists the help for all of the HDFS shell subcommands.
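A few representative subcommands from that help listing, for reference (standard HDFS shell usage rather than commands taken from this article):

hadoop fs -ls /                    # list the HDFS root directory
hadoop fs -mkdir /input            # create a directory in HDFS
hadoop fs -put local.txt /input    # copy a local file into HDFS
hadoop fs -cat /input/local.txt    # print an HDFS file's contents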
jps is located at /opt/jdk1.8.0_91/bin:
$ cd /opt/jdk1.8.0_91/bin
$ ./jps
A successful startup will list the following processes: "NameNode", "DataNode", and "SecondaryNameNode".
5. View HDFS information through the web interface
Go to http://localhost:50070/ to view it.
If http://localhost:50070/ cannot be loaded, it may be resolved in the following way:
First perform the NameNode formatting:
$ ./bin/hdfs namenode -format
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z.]+'
$ cd output
$ cat *
Count the words in a file:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input test
$ cd test/
$ cat *
Chapter 2: MapReduce Introduction
An ideal split size is usually the size of one HDFS block. Hadoop performance is optimal when the node that executes a map task is the same node that stores its input data (the data locality optimization, which avoids transferring data over the network).
MapReduce process summary: read a line of data from a file, process it with the map function, and return key-value pairs; the system then sorts the map results. If there are multiple reducers, the sorted map output is partitioned among them.
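The read, map, sort, reduce flow can be sketched with ordinary shell tools. This is only an analogy to make the data flow concrete, not Hadoop code; input.txt is a hypothetical local file:

# map: emit one "<word> 1" pair per word
# sort: bring identical keys together (mirrors the shuffle phase)
# reduce: sum the counts for each key
tr -s ' ' '\n' < input.txt \
  | awk 'NF {print $1 "\t1"}' \
  | sort \
  | awk -F'\t' '{count[$1] += $2} END {for (w in count) print w, count[w]}'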
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Start all Hadoop daemons. Run this on master node.
echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
# The echo above states that this script has been deprecated; we need to start with start-dfs.sh and start-yarn.sh instead.
bin='
# What is really executed are the following two, that is, the execution of the start-dfs.sh and start-yarn.sh scripts.
hadoop directory and format the HDFS file system. This operation is required when you run hadoop for the first time:
$ cd /usr/local/hadoop/
$ bin/hadoop namenode -format
2. Start bin/start-all.sh
Go to the bin directory:
$ ./start-all.sh
To close, in the same directory:
$ ./stop-all.sh
3. Check whether hadoop
-site.xml
Add the following content to the file:
③
vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
Add the following content to the file:
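The content itself was lost from this page. A hedged sketch of what an hdfs-site.xml at this step commonly contains for a small cluster (the directory paths are illustrative assumptions):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
</configuration>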
④
vim /usr/local/hadoop/etc/hadoop/mapred-site.xml.template
Add the following content to the file:
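The added content is missing here as well. Since this path is under etc/hadoop, this is a Hadoop 2.x setup, where mapred-site.xml is normally created from the template and pointed at YARN; a hedged sketch, not the article's original text:

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>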
⑤
vim /usr/local/hadoop/etc/hadoop/slaves
Replace the default localhost with the hostnames of the slave nodes.
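For example, using node names like those mentioned elsewhere on this page (which hosts actually belong here is an assumption), the slaves file simply lists one worker per line:

slave1
slave2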
we have seen the program output, and it is correct; this proves that the MapReduce function works normally.
The above shows how to view file data through Hadoop's HDFS file system, which is the natural way. But what does the file data on HDFS look like from the perspective of the Linux file system? For example:
Because data in the HDFS file system is stored on the DataNodes
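From the Linux shell on a DataNode you can look for the raw block files directly; a sketch assuming the data directory used in the earlier config examples (the actual location depends on dfs.datanode.data.dir):

find /usr/local/hadoop/tmp/dfs/data -name 'blk_*' | head   # raw HDFS block files as Linux sees them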
master, slave1, and the other IP-to-hostname mappings go in the hosts file under C:\Windows.
1) Browse the web interfaces of the NameNode and the JobTracker; their default addresses are:
NameNode - http://node1:50070/
JobTracker - http://node2:50030/
3) Use netstat -nat to see whether ports 49000 and 49001 are in use.
4) Use jps to view processes.
To check whether the daemons are running, you can use the jps command (which is the ps utility for JVM processes). This command lists the 5 daemons and their process identifiers.
on the master machine.
2. Start the distributed file services:
sbin/start-all.sh
or
sbin/start-dfs.sh
sbin/start-yarn.sh
Use your browser to visit the master node at http://192.168.23.111:50070 to view the NameNode status and browse the DataNodes.
Use your browser to visit the master node at http://192.168.23.111:8088 to see all applications.
3. Close the distributed file services:
sbin/stop-all.sh
4. File management
To create the SWVTC directory in HDFS
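The creation command is cut off above; the usual form, assuming /swvtc as the target path in HDFS, would be:

hdfs dfs -mkdir /swvtc   # create the directory in HDFS
hdfs dfs -ls /           # confirm it exists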
is a standalone version, so you need to change it to 1.
(4) Configure mapred-site.xml
Modify Hadoop's configuration file for MapReduce, setting the address and port of the JobTracker.
4. Initialize HDFS
Before executing the following command, be sure that the contents of the extracted hadoop-1.0.4 folder are placed directly under /home:
bin/hadoop namenode -format
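For Hadoop 1.0.4 the JobTracker address and port live in mapred-site.xml; a hedged sketch (the hostname is an assumption; port 9001 matches the value used earlier on this page):

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>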
the front 4 plus DataNode and JournalNode, 6 in total. The Slave2 node should have four services: Jps, QuorumPeerMain, DataNode, and JournalNode; the Slave3 node should have three services: Jps, DataNode, and JournalNode.
"If all the Datanode nodes do not start, the other normal startup situation, the/opt/hadoop2/dfs/directory of each of your slave nodes to delete the data file, and then open the test. " 6, upload files
hdfs dfs -mkdir -p /usr/file   # create a new directory in HDFS
hdfs dfs -put /home/
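The source path above is truncated; a complete form of the command, with a hypothetical file name, would be:

hdfs dfs -put /home/test.txt /usr/file   # upload a local file into the HDFS directory created above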
seen when installing the cluster environment. This is also our verification method: check the number of started processes through jps. We can also use the browser URL hostname:50070 to view the NameNode; you will find that it also runs a web server, while 50030 is the map/reduce processing node.
To resolve the warning "Warning: $HADOOP_HOME is deprecated.", add HADOOP_HOME_WARN_SUPPRESS=1 to /etc/profile; this line of reco
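Concretely, the line appended to /etc/profile looks like this (HADOOP_HOME_WARN_SUPPRESS is the actual Hadoop 1.x switch for this warning):

export HADOOP_HOME_WARN_SUPPRESS=1
source /etc/profile   # reload so the change takes effect in the current shell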