Java file multipart upload client source code


This blog post introduces how to perform multipart upload of files, focusing on the client side. For the server side, see the companion post "Java file multipart upload Server Source Code". We recommend reading up on the MIME protocol before reading the code in this article.

Multipart upload does not physically split a large file into pieces and then upload them one by one. Instead, it reads parts of the large file's stream in sequence and uploads each part, which distributes the transfer more effectively. This article shows how to use Apache HttpComponents/HttpClient to upload a large file in multiple parts; the examples use HttpComponents Client 4.2.1.
This article only demonstrates multipart upload with HttpComponents/HttpClient as a small demo, without considering concerns such as I/O cleanup and multithreading. Readers can handle these as appropriate for their own projects.
The core idea and workflow, using a block size of 100 MB as an example: files larger than 100 MB are uploaded in parts; otherwise, the whole file is uploaded at once. Files larger than 100 MB are split in units of 100 MB so that no single part exceeds 100 MB. For example, a 304 MB file is uploaded in four parts: 100 MB, 100 MB, 100 MB, and 4 MB. The first request reads 100 MB starting at byte 0 and uploads it; the second reads 100 MB starting at the 100 MB offset; the third reads 100 MB starting at the 200 MB offset; and the last request reads the remaining 4 MB starting at the 300 MB offset and uploads it.
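
To make that arithmetic concrete, here is a minimal stand-alone sketch of the block calculation. BLOCK_SIZE is a placeholder for the 100 MB constant (GlobalConstant.CLOUD_API_LOGON_SIZE) used in the code later in this article:

// Minimal sketch of the block arithmetic described above.
// BLOCK_SIZE stands in for the 100 MB constant used in the rest of this article.
public class BlockMath {
    static final long BLOCK_SIZE = 100L * 1024 * 1024; // 100 MB

    public static void main(String[] args) {
        long fileSize = 304L * 1024 * 1024; // e.g. a 304 MB file

        // Number of blocks: full 100 MB blocks plus one for any remainder, at least one.
        int blockNumber = (int) (fileSize / BLOCK_SIZE);
        if (fileSize % BLOCK_SIZE > 0 || blockNumber == 0) {
            blockNumber++;
        }

        for (int blockIndex = 1; blockIndex <= blockNumber; blockIndex++) {
            long offset = BLOCK_SIZE * (blockIndex - 1);      // where this block starts
            long size = (blockIndex < blockNumber)            // fixed size except for the last block
                    ? BLOCK_SIZE
                    : fileSize - BLOCK_SIZE * (blockNumber - 1);
            System.out.println("block " + blockIndex + ": offset=" + offset + ", size=" + size);
        }
    }
}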

The custom ContentBody source code is as follows; it defines how the file stream is read and written:

package com.defonds.rtupload.common.util.block;

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;

import org.apache.http.entity.mime.content.AbstractContentBody;

import com.defonds.rtupload.GlobalConstant;

public class BlockStreamBody extends AbstractContentBody {
    // Two parameters required by MultipartEntity
    private long blockSize = 0;      // size of this part
    private String fileName = null;  // name of the uploaded file
    // Three parameters required by writeTo
    private int blockNumber = 0;     // total number of blocks
    private int blockIndex = 0;      // index of the current block
    private File targetFile = null;  // the file to be uploaded

    private BlockStreamBody(String mimeType) {
        super(mimeType);
    }

    /**
     * Custom ContentBody constructor.
     * @param blockNumber total number of blocks
     * @param blockIndex  index of the current block
     * @param targetFile  the file to be uploaded
     */
    public BlockStreamBody(int blockNumber, int blockIndex, File targetFile) {
        this("application/octet-stream");
        this.blockNumber = blockNumber;       // blockNumber initialization
        this.blockIndex = blockIndex;         // blockIndex initialization
        this.targetFile = targetFile;         // targetFile initialization
        this.fileName = targetFile.getName(); // fileName initialization
        // blockSize initialization
        if (blockIndex < blockNumber) { // not the last block: fixed size
            this.blockSize = GlobalConstant.CLOUD_API_LOGON_SIZE;
        } else { // the last block: whatever remains
            this.blockSize = targetFile.length()
                    - GlobalConstant.CLOUD_API_LOGON_SIZE * (blockNumber - 1);
        }
    }

    @Override
    public void writeTo(OutputStream out) throws IOException {
        byte[] b = new byte[1024]; // temporary buffer
        RandomAccessFile raf = new RandomAccessFile(targetFile, "r"); // read the file
        if (blockIndex == 1) { // the first block
            int n = 0;
            long readLength = 0; // number of bytes read so far
            while (readLength <= blockSize - 1024) { // read most of the bytes here
                n = raf.read(b, 0, 1024);
                readLength += 1024;
                out.write(b, 0, n);
            }
            if (readLength <= blockSize) { // the remaining less-than-1024 bytes are read here
                n = raf.read(b, 0, (int) (blockSize - readLength));
                out.write(b, 0, n);
            }
        } else if (blockIndex < blockNumber) { // neither the first nor the last block
            raf.seek(GlobalConstant.CLOUD_API_LOGON_SIZE * (blockIndex - 1)); // skip the first (blockIndex - 1) * block-size bytes
            int n = 0;
            long readLength = 0; // number of bytes read so far
            while (readLength <= blockSize - 1024) { // read most of the bytes here
                n = raf.read(b, 0, 1024);
                readLength += 1024;
                out.write(b, 0, n);
            }
            if (readLength <= blockSize) { // the remaining less-than-1024 bytes are read here
                n = raf.read(b, 0, (int) (blockSize - readLength));
                out.write(b, 0, n);
            }
        } else { // the last block
            raf.seek(GlobalConstant.CLOUD_API_LOGON_SIZE * (blockIndex - 1)); // skip the first (blockIndex - 1) * block-size bytes
            int n = 0;
            while ((n = raf.read(b, 0, 1024)) != -1) {
                out.write(b, 0, n);
            }
        }
        // do not forget to close out/raf in real code
    }

    @Override
    public String getCharset() {
        return null;
    }

    @Override
    public String getTransferEncoding() {
        return "binary";
    }

    @Override
    public String getFilename() {
        return fileName;
    }

    @Override
    public long getContentLength() {
        return blockSize;
    }
}

The upload logic is encapsulated in the custom HttpComponents/HttpClient utility class HttpClient4Util:

public static String restPost(String serverURL, File targetFile, Map<String, String> mediaInfoMap) {
    String content = "";
    try {
        DefaultHttpClient httpClient = new DefaultHttpClient();
        HttpPost post = new HttpPost(serverURL + "?");
        httpClient.getParams().setParameter("http.socket.timeout", 60 * 60 * 1000);
        MultipartEntity mpEntity = new MultipartEntity();
        List<String> keys = new ArrayList<String>(mediaInfoMap.keySet());
        Collections.sort(keys, String.CASE_INSENSITIVE_ORDER);
        for (Iterator<String> iterator = keys.iterator(); iterator.hasNext();) {
            String key = iterator.next();
            if (StringUtils.isNotBlank(mediaInfoMap.get(key))) {
                mpEntity.addPart(key, new StringBody(mediaInfoMap.get(key)));
            }
        }
        if (targetFile != null && targetFile.exists()) {
            ContentBody contentBody = new FileBody(targetFile);
            mpEntity.addPart("file", contentBody);
        }
        post.setEntity(mpEntity);
        HttpResponse response = httpClient.execute(post);
        content = EntityUtils.toString(response.getEntity());
        httpClient.getConnectionManager().shutdown();
    } catch (Exception e) {
        e.printStackTrace();
    }
    System.out.println("=====RequestUrl==========================\n"
            + getRequestUrlStrRest(serverURL, mediaInfoMap).replaceAll("&fmt=json", ""));
    System.out.println("=====content==========================\n" + content);
    return content.trim();
}

"File" is the name defined by the multipart upload server for the multipart file parameter. Careful readers will find that the entire file is uploaded directly using the Apache official inputstreambody, And the custom blockstreambody is used only for multipart upload.

Finally, HttpClient4Util is called to perform the upload:

public static Map<String, String> uploadToDrive(Map<String, String> params, String domain) {
    File targetFile = new File(params.get("filePath"));
    long targetFileSize = targetFile.length();
    int mBlockNumber = 0;
    if (targetFileSize < GlobalConstant.CLOUD_API_LOGON_SIZE) {
        mBlockNumber = 1;
    } else {
        mBlockNumber = (int) (targetFileSize / GlobalConstant.CLOUD_API_LOGON_SIZE);
        long someExtra = targetFileSize % GlobalConstant.CLOUD_API_LOGON_SIZE;
        if (someExtra > 0) {
            mBlockNumber++;
        }
    }
    params.put("blockNumber", Integer.toString(mBlockNumber));
    if (domain != null) {
        LOG.debug("Drive---domain=" + domain);
        LOG.debug("drive---url=" + "http://" + domain + "/sync"
                + GlobalConstant.CLOUD_API_PRE_UPLOAD_PATH);
    } else {
        LOG.debug("Drive---domain=null");
    }
    String responseBodyStr = HttpClient4Util.getRest("http://" + domain
            + "/sync" + GlobalConstant.CLOUD_API_PRE_UPLOAD_PATH, params);
    ObjectMapper mapper = new ObjectMapper();
    DrivePreInfo result;
    try {
        result = mapper.readValue(responseBodyStr, ArcDrivePreInfo.class);
    } catch (IOException e) {
        LOG.error("Drive.preUploadToArcDrive error.", e);
        throw new RtuploadException(GlobalConstant.ERROR_CODE_13001); // TODO
    }
    // JSONObject jsonObject = JSONObject.fromObject(responseBodyStr);
    if (Integer.valueOf(result.getRc()) == 0) {
        int uuid = result.getUuid();
        String upsServerUrl = result.getUploadServerUrl().replace("https", "http");
        if (uuid != -1) {
            upsServerUrl = upsServerUrl + GlobalConstant.CLOUD_API_UPLOAD_PATH;
            params.put("uuid", String.valueOf(uuid));
            for (int i = 1; i <= mBlockNumber; i++) {
                params.put("blockIndex", "" + i);
                HttpClient4Util.restPostBlock(upsServerUrl, targetFile, params);
            }
        }
    } else {
        throw new RtuploadException(GlobalConstant.ERROR_CODE_13001); // TODO
    }
    return null;
}

The params map encapsulates the parameters required by the server for multipart upload; the number of parts to upload is also determined here.
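
For completeness, a hypothetical call might look like the snippet below. Apart from "filePath" (and the "blockNumber", "blockIndex", and "uuid" keys that uploadToDrive adds itself), the exact parameter keys expected by the pre-upload service are assumptions:

// Hypothetical usage sketch; "fmt" and the domain value are illustrative assumptions.
Map<String, String> params = new HashMap<String, String>();
params.put("filePath", "/data/videos/demo.mp4"); // local file to upload
params.put("fmt", "json");                       // response format, as hinted at by restPost

uploadToDrive(params, "upload.example.com");     // domain of the pre-upload service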
I have tested the examples in this article myself and successfully uploaded large files, such as *.mp4 files, without problems. If you run into a problem during testing and cannot upload successfully, please leave a comment under this blog post and share it. There are still many shortcomings in this example; if readers find issues, please leave a message to point them out, and I thank you in advance.
