Build and test the Hadoop environment

Source: Internet
Author: User
See http://blog.csdn.net/w13770269691/article/details/16883663/ for the original. Check the cluster status:
Run hdfs dfsadmin -report from Hadoop's bin directory:

Configured Capacity: 36729053184 (34.21 GB)
Present Capacity: 13322559491 (12.41 GB)
DFS Remaining: 13322240000 (12.41 GB)
DFS Used: 319491 (312.00 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

Datanodes available: 2 (2 total, 0 dead)

Live datanodes:
Name: 192.168.137.103:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 18364526592 (17.10 GB)
DFS Used: 45056 (44 KB)
Non DFS Used: 11702558720 (10.90 GB)
DFS Remaining: 6661922816 (6.20 GB)
DFS Used%: 0.00%
DFS Remaining%: 36.28%
Last contact: Thu Nov 06 21:26:34 CST 2014

Name: 192.168.137.102:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 18364526592 (17.10 GB)
DFS Used: 274435 (268.00 KB)
Non DFS Used: 11703934973 (10.90 GB)
DFS Remaining: 6660317184 (6.20 GB)
DFS Used%: 0.00%
DFS Remaining%: 36.27%
Last contact: Thu Nov 06 21:26:31 CST 2014
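The liveness figures in that report can also be checked from a script. A minimal sketch, where a pasted sample summary line stands in for capturing a live hdfs dfsadmin -report call (the sed patterns are one possible way to pull the counts out):

```shell
# Sample text standing in for: report=$(hdfs dfsadmin -report)
report='Datanodes available: 2 (2 total, 0 dead)'

# Extract the total and dead datanode counts from the summary line.
total=$(printf '%s\n' "$report" | sed -n 's/.*(\([0-9]*\) total.*/\1/p')
dead=$(printf '%s\n' "$report" | sed -n 's/.*, \([0-9]*\) dead.*/\1/p')

echo "total=$total dead=$dead"
if [ "$dead" -gt 0 ]; then
  echo "WARNING: $dead datanode(s) down" >&2
fi
```

With the sample line above this prints total=2 dead=0 and stays silent on stderr.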
View the composition of file blocks:
Run hdfs fsck / -files -blocks from Hadoop's bin directory:

Status: HEALTHY
 Total size: 219351 B
 Total dirs: 11
 Total files: 12
 Total symlinks: 0
 Total blocks (validated): 10 (avg. block size 21935 B)
 Minimally replicated blocks: 10 (100.0%)
 Over-replicated blocks: 0 (0.0%)
 Under-replicated blocks: 0 (0.0%)
 Mis-replicated blocks: 0 (0.0%)
 Default replication factor: 1
 Average block replication: 1.0
 Corrupt blocks: 0
 Missing replicas: 0 (0.0%)
 Number of data-nodes: 2
 Number of racks: 1
FSCK ended at Thu Nov 06 21:27:34 CST 2014 in 29 milliseconds

The filesystem under path '/' is HEALTHY
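Scripted health checks commonly key off the final verdict line of fsck. A minimal sketch, where a pasted sample stands in for capturing a live hdfs fsck / call:

```shell
# Sample text standing in for: fsck_out=$(hdfs fsck /)
fsck_out="The filesystem under path '/' is HEALTHY"

# fsck reports HEALTHY or CORRUPT in its verdict line.
case "$fsck_out" in
  *"is HEALTHY"*) status=ok ;;
  *)              status=corrupt ;;
esac
echo "fsck status: $status"
```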
View the NameNode status at http://192.168.56.101:50070, and view the cluster's running status on the ResourceManager at http://192.168.56.101:8088.
If any problem occurs during environment setup, check the log path: /home/hadoop/hadoop2.2/logs.
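A quick way to surface problems is to grep those logs for errors and exceptions. A sketch, where a temporary directory with a fabricated log line stands in for the article's /home/hadoop/hadoop2.2/logs:

```shell
# In a real setup: logdir=/home/hadoop/hadoop2.2/logs
logdir=$(mktemp -d)
printf 'INFO starting namenode\nERROR java.net.BindException: Address already in use\n' \
  > "$logdir/hadoop-root-namenode-master.log"

# Show any ERROR/FATAL lines or Java exceptions, with file name and line number.
grep -nE 'ERROR|FATAL|Exception' "$logdir"/*.log
```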
After configuring HADOOP_HOME and making it take effect, run the test: start Hadoop, create an input file, and upload it to the / directory of HDFS.
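For reference, "configuring HADOOP_HOME and making it take effect" usually means exporting it in the shell profile. A sketch, assuming the install path used elsewhere in this article:

```shell
# Append to ~/.bashrc (or /etc/profile), then run: source ~/.bashrc
export HADOOP_HOME=/home/hadoop/hadoop2.2
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"

# Verify it took effect:
echo "$HADOOP_HOME"
```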
Create the input file:

vim input

Enter the following content in the file:

I am a very good person!
I love you America!

Upload it to HDFS:

hadoop fs -put input /

Then, in the bin directory of Hadoop, run the wordcount example:

./yarn jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output

After this execution, the output folder contains two files:

hadoop fs -ls /output
Found 2 items
-rw-r--r--   1 root supergroup          0 2014-11-06 21:21 /output/_SUCCESS
-rw-r--r--   1 root supergroup         64 2014-11-06 /output/part-r-00000

Then you can view the wordcount statistics:

hadoop fs -cat /output/part-r-00000
America!	1
I	2
a	1
am	1
good	1
love	1
person!	1
very	1
you	1
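The expected counts can be sanity-checked without the cluster by reproducing wordcount's behavior with standard tools: split on whitespace (punctuation stays attached to words), count, and sort byte-wise, which matches how the example orders its text keys. A sketch:

```shell
# Same text as the input file above; tokenize, count, and print word<TAB>count.
counts=$(printf 'I am a very good person!\nI love you America!\n' |
  tr -s ' ' '\n' |
  LC_ALL=C sort | uniq -c |
  awk '{print $2 "\t" $1}')
printf '%s\n' "$counts"
```

"I" should appear with count 2 and every other word with count 1, nine distinct words in all.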
