Resumable upload of Html5 large files

Source: Internet
Author: User

Web servers usually limit the size of the data that can be submitted in a single request; if a file exceeds the limit, the server returns a rejection. All web servers let you raise this limit in the configuration file, and for uploading large files to IIS you can modify the server's request size limit. However, this creates a security problem: an attacker can easily send a huge request and drag your web server down. The mainstream approach to uploading large files is therefore to split them into chunks. For example, a 100 MB file can be split into 50 chunks of 2 MB each; each chunk is uploaded to the server in turn, and the server merges the chunks after the upload is complete.

The core of uploading large files on the web is file segmentation. Before the HTML5 File API appeared, splitting a file in the browser could only be done with Flash or ActiveX. With HTML5 we can split a file directly with the slice method, for example file.slice(), and then upload each chunk asynchronously to the server through XMLHttpRequest. If you are interested and have the time, you can write your own HTML5 upload library on top of the File API. I found the following two HTML5 libraries on the Internet:

- Resumable.js: https://github.com/23/resumable.js
- Plupload: http://plupload.com/

Resumable.js is a pure HTML5 upload library, while Plupload supports HTML5, Flash, Silverlight, and HTML4: it automatically detects whether the browser supports HTML5 and falls back to the other upload methods if it does not. After testing, both Resumable.js and Plupload support HTML5 chunked upload. Resumable.js is the more suitable of the two, and the following describes how to use it.

Introduction to Resumable.js resumable upload:

```javascript
var r = new Resumable({
    target: '/test/upload',
    chunkSize: 1 * 1024 * 1024,
    simultaneousUploads: 4,
    testChunks: true,
    throttleProgressCallbacks: 1,
    method: "octet"
});
```

- chunkSize: the size of each file chunk, in bytes.
- simultaneousUploads: the number of chunks uploaded concurrently; multiple chunks can be uploaded at the same time.
- testChunks: whether to send a GET request with the chunk information before uploading each chunk, to check whether that chunk has already been uploaded.

Resumable upload is implemented through the testChunks option. If it is set to true, Resumable.js first sends a GET request for a chunk: if the HTTP status returned is 200, the chunk has already been uploaded and the GET request for the next chunk is made; if the status is not 200, the chunk data is sent with a POST request to upload it.

When testChunks is set to true, each chunk costs an extra GET request. If we know how many chunks had been uploaded before the last interruption, the next upload can continue directly from the interrupted chunk, saving one GET request per chunk. To meet this requirement, I modified the Resumable.js source code and added a startchunkindex attribute to the file object, with a default value of 0, which sets the chunk the current file starts uploading from. This way we only need a single query to the server before the upload starts (asking which chunk of the current file was last uploaded); the server returns the index of the last uploaded chunk, we assign that value to the file's startchunkindex attribute, and the upload resumes from the chunk where it was previously interrupted.
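To make the GET-then-POST handshake concrete, here is a minimal Node.js sketch of the server-side chunk check (the attached demo is .NET; this is only an illustration of the same idea). The query parameter names are the Resumable.js defaults; the layout of chunks as uploads/<resumableIdentifier>.part<resumableChunkNumber> is an assumption made up for this example.

```javascript
// Minimal sketch of the testChunks GET check (illustration only; not the
// attached .NET demo). Assumption: each uploaded chunk is saved as
// uploads/<resumableIdentifier>.part<resumableChunkNumber>.
var http = require('http');
var fs = require('fs');
var path = require('path');
var url = require('url');

var UPLOAD_DIR = path.join(__dirname, 'uploads');

http.createServer(function (req, res) {
    var parsed = url.parse(req.url, true);
    if (req.method === 'GET' && parsed.pathname === '/test/upload') {
        var q = parsed.query; // Resumable.js default query parameter names
        var chunkFile = path.join(UPLOAD_DIR,
            q.resumableIdentifier + '.part' + q.resumableChunkNumber);
        fs.stat(chunkFile, function (err) {
            // 200 -> chunk already on the server, Resumable.js skips it;
            // anything else -> Resumable.js POSTs the chunk data.
            res.writeHead(err ? 404 : 200);
            res.end();
        });
        return;
    }
    // The POST branch that receives and saves the chunk body is omitted here.
    res.writeHead(404);
    res.end();
}).listen(8080);
```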
Call method:

```javascript
// Handle the file add event
r.on('fileAdded', function (file) {
    // Set the chunk index the current file should start uploading from
    // (see the demo in the attachment)
    file.startchunkindex = 0;
});
```

After all the file chunks are uploaded, the final task is to merge and save the file on the server. The attachment is a .NET server-side example of Resumable.js chunked upload, including simple file merging; demos in other languages can also be downloaded from the Resumable.js git repository. For the sake of simplicity, the demo only stores files on the local machine. In a real production environment the files should generally be placed on a separate file server (the front-end web server uploads to the file server through FTP or a shared folder), and the uploaded files are then distributed to mirrors or post-processed (such as video compression). Of course, it is best to use a distributed file system; putting the files in the Hadoop Distributed File System (HDFS) is currently a good solution.
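As a rough sketch of the merge step mentioned above (again in Node.js rather than the .NET of the attachment), merging is just appending the chunk files to the target file in order; the <identifier>.part<n> naming follows the hypothetical layout used in the earlier sketch.

```javascript
// Minimal sketch of merging chunks once the last one has arrived
// (assumption: chunks are stored as <identifier>.part1 ... .part<totalChunks>).
var fs = require('fs');
var path = require('path');

function mergeChunks(uploadDir, identifier, fileName, totalChunks) {
    var target = path.join(uploadDir, fileName);
    var out = fs.createWriteStream(target);

    function appendChunk(n) {
        if (n > totalChunks) {   // every chunk has been written
            out.end();
            return;
        }
        var chunkPath = path.join(uploadDir, identifier + '.part' + n);
        var input = fs.createReadStream(chunkPath);
        input.pipe(out, { end: false });          // keep the target stream open
        input.on('end', function () {
            fs.unlink(chunkPath, function () {}); // delete the merged chunk
            appendChunk(n + 1);
        });
    }

    appendChunk(1);
}

// Example call: mergeChunks('./uploads', 'abc123-demo_mp4', 'demo.mp4', 50);
```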
