By default, both debugging in Eclipse and "run on Hadoop" execute on a single machine only, because running a program distributed across the cluster requires the extra steps of uploading the class files and distributing them to every node.
A plain "run on Hadoop" simply launches the local Hadoop class library to run your program,
so no job information is visible on the Hadoop cluster web management page (http://192.168.2.2:8088/cluster/apps): your job is not running on the cluster at all.
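A quick way to see why (a minimal sketch, not from the original post): with no cluster configuration files on the classpath, Hadoop's Configuration falls back to the default execution framework, which in Hadoop 2.x is "local", meaning the job runs in-process via the LocalJobRunner.

    import org.apache.hadoop.conf.Configuration;

    public class ShowFramework {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // With no *-site.xml on the classpath this prints "local":
            // the job runs in-process via LocalJobRunner, which is why
            // nothing appears on the cluster's web UI.
            System.out.println(conf.get("mapreduce.framework.name", "local"));
        }
    }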
How to package as a jar:
rm tmp/*                                 # clear the staging directory
cp xmlparser_hadoop* tmp/                # copy the compiled class files into it
jar -cvf xmlparser_hadoop.jar -C tmp/ .  # pack everything under tmp/ into the jar
How to run:
hadoop:/usr/local/hadoop-2.6.0$ bin/hadoop jar xmlparser_hadoop.jar xmlparser_hadoop hdfs://192.168.2.2:9000/user/input hdfs://192.168.2.2:9000/user/output/xmlparser
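For reference, the xmlparser_hadoop class named on the command line is an ordinary MapReduce driver. A minimal sketch of its entry point, assuming the standard Job API (the post's actual mapper/reducer classes are omitted here):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class xmlparser_hadoop {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "xmlparser_hadoop");
            // Tells the framework which jar to ship to the cluster nodes.
            // Launched via "bin/hadoop jar", the cluster configuration is
            // already on the classpath, so the job is submitted to YARN
            // instead of being run locally.
            job.setJarByClass(xmlparser_hadoop.class);
            // setMapperClass / setReducerClass / output types would go here.
            FileInputFormat.addInputPath(job, new Path(args[0]));    // .../user/input
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // .../user/output/xmlparser
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Submitted this way, the job should show up on the cluster management page at http://192.168.2.2:8088/cluster/apps.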
Appendix:
How to make your job truly distributed in a Hadoop cluster
http://www.cnblogs.com/beanmoon/archive/2013/05/09/3068729.html
The difference between "run on Hadoop" in Hadoop's Eclipse plugin and submitting a job packaged as a jar