MongoDB GridFS

Source: Internet
Author: User
Tags: md5, mongodb, unique id

The previous article mentioned MongoDB's built-in GridFS, which supports large-file storage. So how does GridFS actually store files, and what makes it special?

Real-world systems almost always need to upload pictures or other files, and these files may be large. GridFS can help manage them. GridFS structure:

MongoDB GridFS consists of two collections: fs.files and fs.chunks. The former holds the file metadata, the latter the file contents, and the two are linked through the _id field of fs.files and the files_id field of fs.chunks.


A record in the fs.files collection looks like this:

{
    "_id": ObjectId("58eb7864eb61ee19bcccb8b9"),        // unique ID
    "filename": "Toolbars.xml",                         // file name
    "length": NumberLong(620),                          // file length in bytes
    "chunkSize": 262144,                                // chunk size (256 KB)
    "uploadDate": ISODate("2017-04-10T12:19:47.632Z"),  // upload time
    "md5": "aefbb40f9e349f2bf7caf32407cf6f6b",          // MD5 hash of the file
    "metadata": {                                       // other user-supplied file info
        "inserttime": "2017/4/10 20:19:46",
        "UserID": "MJX"
    }
}

The corresponding chunk in fs.chunks:

{
    "_id": ObjectId("58eb7864eb61ee19bcccb8ba"),        // chunk ID
    "files_id": ObjectId("58eb7864eb61ee19bcccb8b9"),   // ID of the owning file (fs.files._id)
    "n": 0,                                             // chunk index; a file larger than chunkSize is split into multiple chunks
    "data": { "$binary": "", "$type": "00" }            // binary file content (omitted here)
}
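The number of chunks a file occupies follows directly from the two metadata fields. For the record above, a length of 620 bytes is well under the 262144-byte chunkSize, so the file fits in a single chunk (n = 0):

```python
import math

# Values taken from the fs.files record above
length = 620          # "length": total file size in bytes
chunk_size = 262144   # "chunkSize": 256 KB

# GridFS stores ceil(length / chunkSize) chunks, indexed n = 0, 1, ...
num_chunks = math.ceil(length / chunk_size)
print(num_chunks)  # 1 -> the single chunk with n = 0 shown above
```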
Saving and reading files:

~ Saving: when a file is stored into GridFS, it is split into multiple chunks (file fragments) if it is larger than chunkSize, which is generally 256 KB. Each chunk is stored as a MongoDB document in the chunks collection, and finally the file's metadata is written to fs.files.
~ Reading: first query fs.files to find the matching record and take its "_id" value; then find all documents in fs.chunks whose "files_id" equals that value, sort them by "n", and finally concatenate the "data" fields of the chunks to restore the original file.

Summary:

Data (documents) stored in MongoDB's BSON format has a size limit of 16 MB per document. GridFS is designed for large-file storage precisely to work around this limit.
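The save-and-read flow described above can be sketched as a standalone Python simulation. Here plain lists stand in for the fs.files and fs.chunks collections, and an integer stands in for ObjectId; this illustrates the chunking logic only and is not the pymongo driver API:

```python
import hashlib
import math

CHUNK_SIZE = 262144  # GridFS default chunk size (256 KB)

def save_file(files, chunks, filename, data, chunk_size=CHUNK_SIZE):
    """Split `data` into chunk documents, then record the file metadata,
    mirroring what a GridFS driver does on upload."""
    file_id = len(files)  # stand-in for ObjectId
    n_chunks = max(1, math.ceil(len(data) / chunk_size))
    for n in range(n_chunks):
        chunks.append({
            "files_id": file_id,  # link back to fs.files._id
            "n": n,               # chunk index, used for ordering on read
            "data": data[n * chunk_size:(n + 1) * chunk_size],
        })
    files.append({
        "_id": file_id,
        "filename": filename,
        "length": len(data),
        "chunkSize": chunk_size,
        "md5": hashlib.md5(data).hexdigest(),
    })
    return file_id

def read_file(files, chunks, filename):
    """Find the file record, gather its chunks by files_id,
    sort by n, and concatenate the data back together."""
    rec = next(f for f in files if f["filename"] == filename)
    parts = sorted((c for c in chunks if c["files_id"] == rec["_id"]),
                   key=lambda c: c["n"])
    data = b"".join(c["data"] for c in parts)
    assert hashlib.md5(data).hexdigest() == rec["md5"]  # integrity check
    return data

files, chunks = [], []
payload = b"x" * 300000  # larger than one chunk, so it is split in two
save_file(files, chunks, "big.bin", payload)
assert read_file(files, chunks, "big.bin") == payload
assert len(chunks) == 2  # ceil(300000 / 262144) = 2 chunks
```

In the real driver the md5 check plays the same role as above: the hash stored in fs.files lets a reader verify that the reassembled chunks match the originally uploaded file.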
