Starting Hadoop
Change into the Hadoop-1.2.1/bin directory.
First format the file system: $ ./hadoop namenode -format
Then run $ ./start-all.sh to start Hadoop.
Check the running daemons with the jps command.
View the cluster status: $ hadoop dfsadmin -report
Once Hadoop has started, you can run ./hadoop fs -ls / to list the root directory of HDFS.
MapReduce Application Case
WordCount
(1) First create two input files, file01 and file02, on the local disk:
$ echo "Hello World Bye" > file01
$ echo "Hello Hadoop Goodbye Hadoop" > file02
(2) Create an input directory in HDFS:
$ ./hadoop fs -mkdir input
(3) Copy file01 and file02 to HDFS:
$ ./hadoop fs -copyFromLocal ~/hadoop-1.2.1/bin/file0* input
(4) Run WordCount:
$ ./hadoop jar ~/hadoop-1.2.1/hadoop-examples-1.2.1.jar wordcount input output
(5) View the output:
$ ./hadoop fs -cat output/part-r-00000
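The wordcount job counts how many times each word appears across the input files: the map phase emits a (word, 1) pair per word, the framework groups the pairs by word, and the reduce phase sums each group. A minimal Python sketch of that logic (an illustration only, not the Hadoop Java implementation), using the two input lines from step (1):

```python
from collections import defaultdict

# Contents of file01 and file02 from step (1)
documents = [
    "Hello World Bye",
    "Hello Hadoop Goodbye Hadoop",
]

# Map phase: emit a (word, 1) pair for every word
pairs = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the emitted values by key (word)
grouped = defaultdict(list)
for word, count in pairs:
    grouped[word].append(count)

# Reduce phase: sum the counts for each word
counts = {word: sum(values) for word, values in grouped.items()}

for word in sorted(counts):
    print(word, counts[word])
```

Sorted by word, this prints the same tallies that part-r-00000 would contain for these inputs: Bye 1, Goodbye 1, Hadoop 2, Hello 2, World 1.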
Testing Mahout
1. Start Hadoop.
2. Create a test directory, testdata, on HDFS and upload the data:
$ $HADOOP_HOME/bin/hadoop fs -mkdir testdata
$ $HADOOP_HOME/bin/hadoop fs -put $MAHOUT_HOME/synthetic_control.data testdata
3. Run the k-means algorithm:
$ bin/hadoop jar /usr/mahout-distribution-0.8/mahout-examples-0.8-job.jar org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
4. Wait for the job to finish, then view the results:
$ $HADOOP_HOME/bin/hadoop fs -ls /user/wangnan/output
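The Mahout job above clusters the synthetic control data with k-means. The core algorithm (Lloyd's iteration) alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. A minimal one-dimensional Python sketch, with made-up sample points and k = 2 (illustration only, not Mahout's distributed implementation):

```python
def kmeans(points, centroids, iterations=10):
    """Lloyd's algorithm: assign each point to the nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    for _ in range(iterations):
        # Assignment step: bucket each point under its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster mean
        # (an empty cluster keeps its previous centroid)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical data: two obvious groups, near 1 and near 9
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centroids, clusters = kmeans(points, centroids=[0.0, 10.0])
print(centroids)  # converges to centroids near 1.0 and 9.07
```

Mahout runs the same iteration in parallel as MapReduce jobs: the assignment step is the map phase and the centroid update is the reduce phase, with one pass over HDFS per iteration.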