(6) Hadoop-based simple online storage application implementation (part 2)

1. Call the Hadoop API to upload, download, and delete files, create directories, and list files.

(1) Add the necessary Hadoop jar packages.

A. First, decompress hadoop-1.1.2.tar.gz to a local disk.

B. Right-click the project and select Build Path > Configure Build Path;

C. Add the jar packages from the hadoop-1.1.2 folder;


Also add all the jar packages in the lib folder (note: do not add jasper-compiler-5.5.12.jar and jasper-runtime-5.5.12.jar, otherwise an error will be reported).




Note: after adding these jar packages to the build path, you also need to copy them to the WEB-INF/lib directory. This can be done as follows:

Right-click the project, select Properties, and then choose Deployment Assembly.

Click Add and select Java Build Path Entries.


Select all the jar packages you just added and click Finish.



D. Create a Java project.

Create the HdfsDAO class:

package com.model;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class HdfsDAO {

    // HDFS access address
    private static final String HDFS = "hdfs://192.168.1.104:9000/";

    public HdfsDAO(Configuration conf) {
        this(HDFS, conf);
    }

    public HdfsDAO(String hdfs, Configuration conf) {
        this.hdfsPath = hdfs;
        this.conf = conf;
    }

    // HDFS path
    private String hdfsPath;

    // Hadoop system configuration
    private Configuration conf;

    // Entry point for a quick test
    public static void main(String[] args) throws IOException {
        JobConf conf = config();
        HdfsDAO hdfs = new HdfsDAO(conf);
        // hdfs.mkdirs("/tom");
        // hdfs.copyFile("C:\\files", "/wgc/");
        hdfs.ls("hdfs://192.168.1.104:9000/wgc/files");
        // hdfs.rmr("/wgc/files");
        // hdfs.download("/wgc/(3) hadoop+eclipse environment setup on Windows.docx", "C:\\");
        // System.out.println("success!");
    }

    // Load the Hadoop configuration files
    public static JobConf config() {
        JobConf conf = new JobConf(HdfsDAO.class);
        conf.setJobName("HdfsDAO");
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        return conf;
    }

    // Create a folder under the root directory
    public void mkdirs(String folder) throws IOException {
        Path path = new Path(folder);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        if (!fs.exists(path)) {
            fs.mkdirs(path);
            System.out.println("create: " + folder);
        }
        fs.close();
    }

    // List the files in a folder
    public FileStatus[] ls(String folder) throws IOException {
        Path path = new Path(folder);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        FileStatus[] list = fs.listStatus(path);
        System.out.println("ls: " + folder);
        System.out.println("==========================================================");
        if (list != null) {
            for (FileStatus f : list) {
                // System.out.printf("name: %s, folder: %s, size: %d\n", f.getPath(), f.isDir(), f.getLen());
                System.out.printf("%s, folder: %s, size: %dK\n",
                        f.getPath().getName(), (f.isDir() ? "directory" : "file"), f.getLen() / 1024);
            }
        }
        System.out.println("==========================================================");
        fs.close();
        return list;
    }

    // Upload a local file or folder to HDFS
    public void copyFile(String local, String remote) throws IOException {
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        // remote --- a file or folder path under the user directory
        fs.copyFromLocalFile(new Path(local), new Path(remote));
        System.out.println("copy from: " + local + " to " + remote);
        fs.close();
    }

    // Delete a file or folder
    public void rmr(String folder) throws IOException {
        Path path = new Path(folder);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        // deleteOnExit marks the path; it is removed when fs.close() is called below
        fs.deleteOnExit(path);
        System.out.println("delete: " + folder);
        fs.close();
    }

    // Download a file from HDFS to the local file system
    public void download(String remote, String local) throws IOException {
        Path path = new Path(remote);
        FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
        fs.copyToLocalFile(path, new Path(local));
        System.out.println("download: from " + remote + " to " + local);
        fs.close();
    }
}
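If you prefer to keep the quick test out of the DAO's own main method, a minimal sketch of calling HdfsDAO from a separate class could look like the following. The local and HDFS paths here are only illustrative examples, and it assumes the NameNode configured above (hdfs://192.168.1.104:9000) is running:

package com.model;

import org.apache.hadoop.mapred.JobConf;

// Minimal usage sketch for HdfsDAO; file paths are hypothetical examples.
public class HdfsDAOTest {
    public static void main(String[] args) throws Exception {
        JobConf conf = HdfsDAO.config();                  // load the Hadoop configuration
        HdfsDAO hdfs = new HdfsDAO(conf);

        hdfs.mkdirs("/wgc");                              // create a working directory
        hdfs.copyFile("C:\\files\\test.docx", "/wgc/");   // upload a local file
        hdfs.ls("/wgc");                                  // list the directory contents
        hdfs.download("/wgc/test.docx", "C:\\tmp");       // download the file back
    }
}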


Start Hadoop before testing.

Run and test the program:

The other functions also test successfully, so they are not listed here.



2. Combine the web front end with the Hadoop API

Open the UploadServlet file and modify it as follows:

package com.controller;

import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import org.apache.hadoop.mapred.JobConf;

import com.model.HdfsDAO;

/**
 * Servlet implementation class UploadServlet
 */
public class UploadServlet extends HttpServlet {

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        this.doPost(request, response);
    }

    /**
     * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        request.setCharacterEncoding("UTF-8");
        File file;
        int maxFileSize = 50 * 1024 * 1024;  // 50 MB
        int maxMemSize = 50 * 1024 * 1024;   // 50 MB
        ServletContext context = getServletContext();
        String filePath = context.getInitParameter("file-upload");
        System.out.println("source file path: " + filePath);

        // Verify that the request actually contains an uploaded file
        String contentType = request.getContentType();
        if (contentType.indexOf("multipart/form-data") >= 0) {
            DiskFileItemFactory factory = new DiskFileItemFactory();
            // Maximum size kept in memory
            factory.setSizeThreshold(maxMemSize);
            // Items larger than maxMemSize are stored on the local disk
            factory.setRepository(new File("c:\\temp"));

            // Create a new file upload handler
            ServletFileUpload upload = new ServletFileUpload(factory);
            // Maximum size of an uploaded file
            upload.setSizeMax(maxFileSize);

            try {
                // Parse the request to get the list of uploaded items
                List fileItems = upload.parseRequest(request);

                // Process the uploaded files
                Iterator i = fileItems.iterator();
                System.out.println("begin to upload file to tomcat server");
                while (i.hasNext()) {
                    FileItem fi = (FileItem) i.next();
                    if (!fi.isFormField()) {
                        // Get the parameters of the uploaded file
                        String fieldName = fi.getFieldName();
                        String fileName = fi.getName();
                        String fn = fileName.substring(fileName.lastIndexOf("\\") + 1);
                        System.out.println("<br>" + fn + "<br>");
                        boolean isInMemory = fi.isInMemory();
                        long sizeInBytes = fi.getSize();

                        // Write the file to the Tomcat server
                        if (fileName.lastIndexOf("\\") >= 0) {
                            file = new File(filePath, fileName.substring(fileName.lastIndexOf("\\")));
                        } else {
                            file = new File(filePath, fileName.substring(fileName.lastIndexOf("\\") + 1));
                        }
                        fi.write(file);
                        System.out.println("upload file to tomcat server success!");

                        System.out.println("begin to upload file to hadoop hdfs");
                        // Copy the file that is now on Tomcat into Hadoop HDFS
                        JobConf conf = HdfsDAO.config();
                        HdfsDAO hdfs = new HdfsDAO(conf);
                        hdfs.copyFile(filePath + "\\" + fn, "/wgc/" + fn);
                        System.out.println("upload file to hadoop hdfs success!");

                        request.getRequestDispatcher("index.jsp").forward(request, response);
                    }
                }
            } catch (Exception ex) {
                System.out.println(ex);
            }
        } else {
            System.out.println("<p>No file uploaded</p>");
        }
    }
}

Start the Tomcat server and test:

Before the upload, the file list of the /wgc folder in HDFS is as follows:


Next, upload the file "(4) Upload the file to the hadoop file system by calling the hadoop Java API.docx":


On the Tomcat server, we can see the uploaded file:


Open http://hadoop:50070/ to browse the file system; you can see the newly uploaded file:


At this point, a simple online storage upload function is implemented. Next, we will polish the front end of this simple online storage application to make it look better.



References:

http://blog.fens.me/hadoop-hdfs-api/










