Hadoop shell command remote submission

Source: Internet
Author: User
Tags: hadoop, fs
I. Principle of Hadoop shell command remote submission

Hadoop shell commands are usually executed from a Linux shell, which is painful for developers who are used to remote operation or to Windows/web front ends. The implementation of CommandExecutor.java, found under src/test/org/apache/hadoop/cli/util in the Hadoop source package, may be helpful here. The following is the execution flow of a hadoop dfsadmin command:

public static int executeDFSAdminCommand(final String cmd, final String namenode) {
    // exitCode, cmdExecuted, lastException and commandOutput are fields
    // of the enclosing CommandExecutor class.
    exitCode = 0;

    ByteArrayOutputStream bao = new ByteArrayOutputStream();
    PrintStream origOut = System.out;
    PrintStream origErr = System.err;

    // Capture everything the command prints to stdout/stderr.
    System.setOut(new PrintStream(bao));
    System.setErr(new PrintStream(bao));

    DFSAdmin shell = new DFSAdmin();
    String[] args = getCommandAsArgs(cmd, "NAMENODE", namenode);
    cmdExecuted = cmd;

    try {
        ToolRunner.run(shell, args);
    } catch (Exception e) {
        e.printStackTrace();
        lastException = e;
        exitCode = -1;
    } finally {
        // Restore the original streams.
        System.setOut(origOut);
        System.setErr(origErr);
    }

    commandOutput = bao.toString();

    return exitCode;
}
At the start, the application's standard output and standard error streams are redirected into a ByteArrayOutputStream with System.setOut and System.setErr. After the DFSAdmin shell is constructed, ToolRunner.run is called with the args command parameters, and once the call completes the standard output and error streams are restored to their defaults. The DFSAdmin object corresponds to the shell command "hadoop dfsadmin", the MRAdmin object corresponds to "hadoop mradmin", and the FsShell object corresponds to "hadoop fs".
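The getCommandAsArgs helper lives in the same CommandExecutor.java and is not shown above; a minimal sketch, assuming its only job is to tokenize the command line and substitute the NAMENODE placeholder with the actual namenode address, might look like this:

private static String[] getCommandAsArgs(final String cmd,
                                         final String masterKey,
                                         final String master) {
    // Tokenize the command line on whitespace.
    String[] args = cmd.trim().split("\\s+");
    // Replace the placeholder (e.g. "NAMENODE") with the real address.
    for (int i = 0; i < args.length; i++) {
        args[i] = args[i].replaceAll(masterKey, master);
    }
    return args;
}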
II. Use embedded Jetty and a servlet to implement basic remote submission of Hadoop shell commands on the web

1. Design an HTML page that submits the command parameters to the servlet, such as the form sketched below.
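The following is a minimal sketch, assuming the servlet is mapped at /command; the field names select_type and txt_command match what the servlet reads:

<form action="/command" method="post">
  <!-- 1 = dfsadmin, 2 = mradmin, 3 = fs (matches the servlet's dispatch) -->
  <select name="select_type">
    <option value="1">dfsadmin</option>
    <option value="2">mradmin</option>
    <option value="3">fs</option>
  </select>
  <input type="text" name="txt_command" size="60" />
  <input type="submit" value="Run" />
</form>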
2. Write the servlet code as follows:

PrintWriter writer = response.getWriter();
response.setContentType("text/html");
if (request.getParameter("select_type") == null) {
    writer.write("select is null");
    return;
}
if (request.getParameter("txt_command") == null) {
    writer.write("command is null");
    return;
}
String type = request.getParameter("select_type");
String command = request.getParameter("txt_command");
ByteArrayOutputStream bao = new ByteArrayOutputStream();
PrintStream origOut = System.out;
PrintStream origErr = System.err;

// Capture the command's console output.
System.setOut(new PrintStream(bao));
System.setErr(new PrintStream(bao));
if (type.equals("1")) {                 // hadoop dfsadmin
    DFSAdmin shell = new DFSAdmin();
    String[] items = command.trim().split(" ");
    try {
        ToolRunner.run(shell, items);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        System.setOut(origOut);
        System.setErr(origErr);
    }
    writer.write(bao.toString().replaceAll("\n", "<br>"));
} else if (type.equals("2")) {          // hadoop mradmin
    MRAdmin shell = new MRAdmin();
    String[] items = command.trim().split(" ");
    try {
        ToolRunner.run(shell, items);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        System.setOut(origOut);
        System.setErr(origErr);
    }
    writer.write(bao.toString().replaceAll("\n", "<br>"));
} else if (type.equals("3")) {          // hadoop fs
    FsShell shell = new FsShell();
    String[] items = command.trim().split(" ");
    try {
        ToolRunner.run(shell, items);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        System.setOut(origOut);
        System.setErr(origErr);
    }
    writer.write(bao.toString().replaceAll("\n", "<br>"));
}

The above program simply wraps Hadoop shells such as dfsadmin, mradmin and fs, and finally writes the captured result back to the client as a string. Part of the output of a simple -report test is shown below:
Configured Capacity: 7633977958400 (6.94 TB)
Present Capacity: 7216439562240 (6.56 TB)
DFS Remaining: 6889407496192 (6.27 TB)
DFS Used: 327032066048 (304.57 GB)
DFS Used%: 4.53%
Under replicated blocks: 42
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 4 (4 total, 0 dead)

Name: 10.16.45.226:50010
Decommission Status : Normal
Configured Capacity: 1909535137792 (1.74 TB)
DFS Used: 103113867264 (96.03 GB)
Non DFS Used: 97985679360 (91.26 GB)
DFS Remaining: 1708435591168 (1.55 TB)
DFS Used%: 5.4%
DFS Remaining%: 89.47%
Last contact: Wed Mar 21 14:37:24 CST 2012
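Assuming the servlet is mapped at /command on port 8080, as in the sketches above, the same output can also be fetched without the HTML page:

curl "http://jetty-host:8080/command?select_type=1&txt_command=-report"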

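The servlet body in step 2 needs an enclosing class; a minimal wrapper might look like the following (the CommandServlet name and the doGet/doPost split are assumptions, not from the original):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CommandServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // ... the parameter checks, stream redirection and
        // DFSAdmin / MRAdmin / FsShell dispatch shown in step 2 ...
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        doGet(request, response);
    }
}

Note that System.setOut and System.setErr are JVM-wide, so concurrent requests would interleave their captured output; in a real deployment, command execution should be serialized or guarded by a lock.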
The above code runs Jetty in embedded mode. The Hadoop dependency jars and the Hadoop config files need to be on the classpath at runtime, as in the following launch script:

#!/bin/sh
CLASSPATH="/usr/local/hadoop/conf"
for f in $HADOOP_HOME/hadoop-core-*.jar; do
  CLASSPATH=${CLASSPATH}:$f
done
# add libs to CLASSPATH
for f in $HADOOP_HOME/lib/*.jar; do
  CLASSPATH=${CLASSPATH}:$f
done
for f in $HADOOP_HOME/lib/jsp-2.1/*.jar; do
  CLASSPATH=${CLASSPATH}:$f
done

echo $CLASSPATH
java -cp "$CLASSPATH:executor.jar" RunServer

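The RunServer class launched by the script is not shown in the original; a minimal embedded Jetty 6 bootstrap (the Jetty version bundled with Hadoop of that era) might look like this, where the port, context path, and CommandServlet name are assumptions:

import org.mortbay.jetty.Server;
import org.mortbay.jetty.servlet.Context;
import org.mortbay.jetty.servlet.ServletHolder;

public class RunServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);   // illustrative port
        Context root = new Context(server, "/", Context.SESSIONS);
        // Map the command servlet sketched earlier.
        root.addServlet(new ServletHolder(new CommandServlet()), "/command");
        server.start();
        server.join();
    }
}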