Baidu WebUploader large-file multipart upload (received by .NET)
A few days ago I needed to upload a large file and found that WebUploader works quite well; I won't go into its background here. I found a good post on multipart upload on cnblogs and used it as my starting point. This article records my experience with the approach, shared here for discussion.
For an introduction to the plug-in, see its Quick Start guide. After downloading the latest package from GitHub, I made changes based on one of the bundled examples (image-upload), mainly adding the .NET backend that receives the file parts.
First, the download: WebUploadTest.zip (extraction code: fikn)
The multipart upload logic itself is implemented by the control; the logic for saving the parts is:
Each time a file is uploaded, a GUID is generated in JavaScript. See upload.js, line 87:
GUID = WebUploader.Base.guid()
The GUID above is passed along in the WebUploader configuration parameters. You can raise the number of concurrent upload threads above 1 if you like; in my tests that seemed fine (when I first tested with some other code, a thread count above 1 produced errors, which I leave open for discussion).
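As a sketch, the GUID can be wired into the uploader configuration like this. The option names (`server`, `pick`, `chunked`, `chunkSize`, `threads`, `formData`) come from the WebUploader API; the handler URL, selector, and chunk size here are placeholder values, not exact values from the example:

```javascript
// Sketch of the uploader setup, assuming the WebUploader script is loaded.
// 'fileupload.ashx' and '#picker' are placeholders; adjust to your project.
var GUID = WebUploader.Base.guid();        // one GUID per upload session

var uploader = WebUploader.create({
    server: 'fileupload.ashx',             // the handler that receives each part
    pick: '#picker',                       // file-pick button
    chunked: true,                         // enable multipart (chunked) upload
    chunkSize: 5 * 1024 * 1024,            // 5 MB per part (adjust as needed)
    threads: 1,                            // concurrent uploads; >1 seemed to work in my tests
    formData: { guid: GUID }               // send the GUID with every part
});
```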
The backend creates a temporary folder based on the GUID sent by the front end (the folder is named with the GUID value); each part file is named by its part number and saved in that folder. See fileupload.ashx, line 24:
// Obtain chunk and chunks
int chunk = Convert.ToInt32(context.Request.Form["chunk"]);   // index of the current part among the uploaded parts (starting from 0)
int chunks = Convert.ToInt32(context.Request.Form["chunks"]); // total number of parts
// Create a temporary folder named by the GUID
string folder = context.Server.MapPath("~/1/" + context.Request["guid"] + "/");
string path = folder + chunk; // name each part file with its number
Each request, the backend returns a JSON string; as with any AJAX response, the returned fields can be whatever you define. Here is how I return it and consume it. See fileupload.ashx, line 57:
//...
context.Response.Write("{\"chunked\":true,\"hasError\":false,\"f_ext\":\"" + Path.GetExtension(file.FileName) + "\"}");
}
else // not chunked: save the file directly
{
    context.Request.Files[0].SaveAs(context.Server.MapPath("~/1/" + DateTime.Now.ToFileTime() + Path.GetExtension(context.Request.Files[0].FileName)));
    context.Response.Write("{\"chunked\":false,\"hasError\":false}");
}
//...
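That JSON contract can be exercised on its own. A minimal sketch: the two sample strings below mirror the two responses the handler can return (the ".zip" extension is just an example value):

```javascript
// The two response shapes the handler above can return.
var chunkedReply = JSON.parse('{"chunked":true,"hasError":false,"f_ext":".zip"}');
var directReply  = JSON.parse('{"chunked":false,"hasError":false}');

// The client only needs a merge request when the upload was chunked
// and the server reported no error.
function needsMerge(response) {
    return !response.hasError && response.chunked === true;
}

console.log(needsMerge(chunkedReply)); // true  -> call MergeFiles.ashx
console.log(needsMerge(directReply));  // false -> file was saved in one piece
```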
The JavaScript that receives this response is as follows (upload.js, line 544):
// The whole file uploaded successfully; request the merge.
uploader.on('uploadSuccess', function (file, response) {
    if (response.chunked) {
        $.post("MergeFiles.ashx", { guid: GUID, fileExt: response.f_ext }, function (data) {
            data = $.parseJSON(data);
            if (data.hasError) {
                alert('File merging failed!');
            } else {
                alert(decodeURIComponent(data.savePath));
            }
        });
    }
});
During the upload, the part files accumulate in the GUID-named folder. Once every part has been uploaded, an asynchronous request is sent to MergeFiles.ashx, which merges the files in the temporary folder in order of their file names (each file name is a part number).
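One detail worth checking in any merge implementation: the part files must be concatenated in numeric order, and a plain string sort gets this wrong once there are more than ten parts. A small illustration (the file names below are made up):

```javascript
// Part files are named "0", "1", "2", ... so they must be merged in
// numeric order, not lexicographic order.
var chunkNames = ['10', '2', '0', '1', '11', '3'];

// Wrong: the default sort compares strings, so '10' sorts before '2'.
var lexicographic = chunkNames.slice().sort();

// Right: compare the numeric value of each name.
var numeric = chunkNames.slice().sort(function (a, b) {
    return Number(a) - Number(b);
});

console.log(lexicographic); // ['0', '1', '10', '11', '2', '3']
console.log(numeric);       // ['0', '1', '2', '3', '10', '11']
```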
Run the code and watch the browser console: you can observe each event the plug-in fires during the upload.
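If you want to log those events explicitly, a sketch like the following can help, assuming an `uploader` instance created with `WebUploader.create`; the event names (`uploadStart`, `uploadProgress`, `uploadError`, `uploadComplete`) are from the WebUploader API:

```javascript
// Log the main upload lifecycle events to the browser console.
uploader.on('uploadStart', function (file) {
    console.log('start:', file.name);
});
uploader.on('uploadProgress', function (file, percentage) {
    console.log('progress:', file.name, Math.round(percentage * 100) + '%');
});
uploader.on('uploadError', function (file, reason) {
    console.log('error:', file.name, reason);
});
uploader.on('uploadComplete', function (file) {
    console.log('complete:', file.name);
});
```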
WebUploader also supports resumable uploads, but because this example follows the official sample, pause/resume does not work here as-is. I'll leave that part to you; the official WebUploader API docs have the answer, and the change is easy.