Bottle + WebUploader: Modifying the Bottle framework to support large file uploads


Bottle is a lightweight web framework: small but powerful, with very good extensibility. Many features can be added on top of it, but some you have to modify yourself.

Bottle: http://www.bottlepy.org/docs/dev/tutorial.html

Baidu's WebUploader is a large-file upload library that I think is very good. It supports HTML5 with a Flash fallback and is compatible even with IE6. For more information, see the project documentation.

BaiduWebUpLoader: https://github.com/fex-team/webuploader

WebUploader can split a large file into N small POST requests and transmit it in parts. Each request in the transmission carries the following values:

We will use the value of the chunk key. If you have not read the WebUploader documentation yet, I suggest you do; it has a detailed introduction, so I won't repeat it here.
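The original post showed a screenshot of these fields. Based on WebUploader's defaults they look roughly like this (a sketch; the field names are configurable, so treat them as assumptions and check the project docs):

```python
# The multipart form fields WebUploader typically sends with each chunked
# POST request. Field names follow WebUploader's defaults but can be
# customised in its configuration -- treat them as assumptions.
chunk_request_fields = {
    "name": "bigfile.zip",  # original file name
    "chunks": 10,           # total number of chunks for this file
    "chunk": 3,             # zero-based index of the chunk in this request
    "file": b"<binary chunk payload>",  # the chunk data itself
}
```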

Therefore, we need to modify the Bottle framework so that it automatically merges the segments of a large file back into one file, without affecting the upload of single or multiple small files.

 

Let's get started and make a small modification to Bottle.

First, read the simple file-upload example in the official Bottle documentation. I believe it will not be difficult for you.

The file is ultimately saved with the upload.save() function. Remember this function: it is the one whose source code we need to modify.

request.files is a FormsDict, and calling request.files.get('upload_file') on it returns a FileUpload object. FormsDict is a collection, because you may upload more than one file at a time.


Let's follow the FileUpload class and look at its definition.

The definition is easy to understand: the self.file attribute is assigned from outside, so we don't need to care about it here.

Reading on, you will find what we are after: the save function is here, and it is the only place we need to change.

Here is the implementation of the save function. Its comment describes each parameter: the destination path, whether to overwrite an existing file, and the buffer size.
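The original screenshot of that code is lost, so here is roughly what the relevant part looks like (a simplified sketch from reading bottle.py, using a stand-in class instead of Bottle's real FileUpload):

```python
import os


class FileUploadSketch:
    """Simplified stand-in for bottle.FileUpload, for illustration only."""

    def __init__(self, fileobj):
        self.file = fileobj  # stream holding the POSTed file data

    def _copy_file(self, fp, chunk_size=2 ** 16):
        # Buffered copy from the POST stream to the target file pointer.
        while True:
            buf = self.file.read(chunk_size)
            if not buf:
                break
            fp.write(buf)

    def save(self, destination, overwrite=False, chunk_size=2 ** 16):
        # The branch we are about to change: without overwrite=True,
        # an existing destination file aborts the save.
        if os.path.exists(destination) and not overwrite:
            raise IOError('File exists.')
        with open(destination, 'wb') as fp:
            self._copy_file(fp, chunk_size)
```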

Next we need to change the raise IOError('File exists.') branch, because only then can we append to an existing file and merge several POST requests into one file.

This ensures that the next chunk can be merged into the stored file rather than overwriting it.

However, what we receive is a POST data stream, not a finished file, so we cannot simply write it with open(path, 'ab') here. We need to see how the copy is actually implemented.

So let's take a look at the _copy_file method.

 

When I saw this function, I was very happy: it is just plain I/O stream copying. It reads a block of file data from the POST request into a buffer and then writes it to the target stream.

It seems this can be used directly. We only need to add an append mode rather than rewriting it, because the write here goes through the file pointer returned by open, which works the same either way.
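The copy loop described above can be sketched as a standalone function (modeled on _copy_file; in Bottle the source is self.file):

```python
def copy_file(source, fp, chunk_size=2 ** 16):
    """Buffered copy, modeled on bottle.FileUpload._copy_file: read a
    block of up to chunk_size bytes from the POST stream and write it
    to the target stream, until the source is exhausted."""
    while True:
        buf = source.read(chunk_size)
        if not buf:
            break
        fp.write(buf)
```

Because fp is whatever file pointer we hand in, opening the target with 'ab' instead of 'wb' is all it takes to turn this copy into an append.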

So we changed it to the following:

We create the file pointer in 'ab' mode and reuse the existing _copy_file function.

Note that I have added a section_upload parameter. When it is True, it means the data is appended to the file.

In this way, the POSTed chunks of a file can be merged (appended) into a single file across several requests.
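Put together, the modified save might look like this (a sketch on a stand-in class, not Bottle's exact code; section_upload is the new parameter):

```python
import os


class FileUploadSketch:
    """Minimal stand-in for bottle.FileUpload, for illustration only."""

    def __init__(self, fileobj):
        self.file = fileobj  # stream holding the POSTed chunk data

    def _copy_file(self, fp, chunk_size=2 ** 16):
        # Buffered copy from the POST stream to the target file pointer.
        while True:
            buf = self.file.read(chunk_size)
            if not buf:
                break
            fp.write(buf)

    def save(self, destination, overwrite=False, section_upload=False,
             chunk_size=2 ** 16):
        # section_upload=True means: do not complain about an existing
        # file -- open it in append mode and add this chunk to the end.
        if os.path.exists(destination) and not (overwrite or section_upload):
            raise IOError('File exists.')
        mode = 'ab' if section_upload else 'wb'
        with open(destination, mode) as fp:
            self._copy_file(fp, chunk_size)
```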

 

But we are not done yet. Do you remember the chunk value in the initial POST? When it equals 0, it marks a new file, so we can use it to avoid appending where we shouldn't.

Without this check, any later upload with the same file name would be appended to the existing file, which we cannot allow.
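At the handler level, the chunk check can be sketched like this (a hypothetical helper, independent of Bottle; WebUploader's chunk field is zero-based):

```python
def save_chunk(destination, data, chunk=0):
    """Write one POSTed chunk. chunk == 0 marks the start of a fresh
    upload, so we truncate any stale file of the same name ('wb')
    instead of appending to it ('ab')."""
    mode = 'wb' if chunk == 0 else 'ab'
    with open(destination, mode) as fp:
        fp.write(data)
```

A re-upload under the same file name starts again at chunk 0, so the old data is truncated rather than silently appended to.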

 

The rest is just business logic; you only need to understand the variable names.

overwrite indicates whether to overwrite; section_upload indicates append mode. The two must not both be True at the same time.

With this, you can use WebUploader to upload large files without worrying about breaking small-file uploads.

 

However, after using it for a while, you will notice a strange phenomenon: any uploaded file with a Chinese name has its name filtered away.

At first I thought it was an encoding problem. Later, going through the source code, I found that Bottle itself filters the uploaded file name. If you don't want this behaviour, you have to remove that code yourself.

You then need to do your own file-name filtering. By the way, you can also use the name value carried in the POST request.
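The sanitizing step behaves roughly like this (condensed from my reading of bottle.py's FileUpload.filename property; the real code does a little more):

```python
import os
import re
from unicodedata import normalize


def bottle_style_filename(raw):
    """Condensed sketch of bottle.FileUpload.filename: the ASCII
    round-trip below silently drops non-ASCII characters, which is
    exactly why Chinese file names come out empty."""
    fname = normalize('NFKD', raw).encode('ASCII', 'ignore').decode('ASCII')
    fname = os.path.basename(fname.replace('\\', os.path.sep))
    fname = re.sub(r'[^a-zA-Z0-9-_.\s]', '', fname).strip()
    fname = re.sub(r'[-\s]+', '-', fname).strip('.-')
    return fname[:255] or 'empty'
```

For example, bottle_style_filename('测试.txt') keeps only 'txt'. If you need the original name, read it from raw_filename or from the name field that WebUploader POSTs, and validate it yourself.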

 

 

Upload test:

We upload these files; I believe that is enough to demonstrate that large files work reliably.

A note about my uploader configuration, specifically the maximum concurrency: it does not have to be 1. The modified code still supports concurrent chunk uploads, because the chunk check was added to keep it compatible.

However, I recommend setting it to 1, because unexpected things can happen on a real network.

Upload result:

All files were uploaded successfully, and none were damaged.

I forgot to include small files and files in other formats in this run; I tested them later and they were all fine.

For a single small file (or several of them), WebUploader does not split the file but POSTs it directly; the modified code supports this case as well.

 

 

With this small modification to the Bottle framework, we can upload large files. Of course, this uses Bottle's built-in HTTP server.

If it is deployed behind Apache, I will cover that another time, but from reading the code it should need similar changes.

 

Possible risks:

These changes have not been tested on a real network yet. The WebUploader documentation describes automatic retransmission of lost chunks; however, I cannot guarantee what happens if a POSTed chunk is cut off halfway or loses a few bytes. Data that has already been written cannot be taken back, so a damaged appended file may stay damaged even after re-uploading.

 

 

This is just a bit of my experience. If you have better approaches or find mistakes, please correct me. Thank you very much.
