MongoDB Create collection with PHP extension

Date: 2018-01-09

    • 21.30 MongoDB: creating collections and managing data
    • 21.31 PHP's mongodb extension
    • 21.32 PHP's mongo extension




21.30 MongoDB: creating collections and managing data


The syntax for creating a collection is:


db.createCollection(name, options)


name is the name of the collection, and options is an optional document that configures the collection's parameters.



For example, to create a collection called mycol, run the following command:



> db.createCollection("mycol", { capped : true, size : 6142800, max : 10000 } )
{ "ok" : 1 }
> 






The above command creates a collection named mycol, enables capping in the options, sets the maximum size of the collection to 6,142,800 bytes, and limits the collection to at most 10,000 documents.



The configurable options are as follows:


    • capped (true/false, optional): if true, creates a capped collection. A capped collection is a fixed-size collection that automatically overwrites its oldest entries when it reaches its maximum size. When capped is true, the size option must also be specified. (The sketch after this list demonstrates the overwrite behaviour.)
    • autoIndexId (true/false, optional): if true, an index on the _id field is created automatically.
    • size (optional): the maximum size of a capped collection, in bytes. Required when capped is true.
    • max (optional): the maximum number of documents allowed in a capped collection.
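
To make the overwrite behaviour of a capped collection concrete, here is a minimal PHP sketch (not part of the original notes; it uses the mongodb.so driver whose installation is covered in section 21.31 below). It assumes a local mongod on 127.0.0.1:27017 with authentication disabled, and it uses a throwaway collection named cappedDemo in the db1 database:

<?php
// Hypothetical sketch: create a tiny capped collection (max 3 documents),
// insert 5 documents, and confirm that only the newest 3 remain.
// Assumes the mongodb.so extension and a local mongod without --auth.
$manager = new MongoDB\Driver\Manager("mongodb://127.0.0.1:27017");

// Create the capped collection; size (in bytes) is required when capped is true.
$create = new MongoDB\Driver\Command([
    "create" => "cappedDemo",
    "capped" => true,
    "size"   => 4096,
    "max"    => 3,
]);
$manager->executeCommand("db1", $create);

// Insert 5 documents into the capped collection.
$bulk = new MongoDB\Driver\BulkWrite();
for ($i = 1; $i <= 5; $i++) {
    $bulk->insert(["n" => $i]);
}
$manager->executeBulkWrite("db1.cappedDemo", $bulk);

// Count what is left: the two oldest documents have been overwritten.
$cursor = $manager->executeCommand("db1", new MongoDB\Driver\Command(["count" => "cappedDemo"]));
echo "documents remaining: " . $cursor->toArray()[0]->n . "\n"; // expected: 3
?>

Running db.cappedDemo.find() in the mongo shell afterwards should show only the documents with n equal to 3, 4 and 5.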



Some other common commands for MongoDB:



The show collections command lists the collections in the current database; show tables is an alias that does the same thing:





> show tables
mycol
> show collections
mycol
>


A collection's data structure is defined when data is inserted:


// If the collection does not exist, MongoDB automatically creates it when data is inserted
> db.Account.insert({AccountID: 1, UserName: "test", password: "123456"})
WriteResult({ "nInserted" : 1 })
> show tables
Account
mycol
> db.mycol.insert({AccountID: 1, UserName: "test", password: "123456"})
WriteResult({ "nInserted" : 1 })
>







To update data:





// $set is an update operator. The following statement adds a new field named Age to the document, setting its value to 20
> db.Account.update({AccountID: 1}, {"$set": {"Age": 20}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
>


To view all documents:





> db.Account.insert({AccountID: 2, UserName: "test2", password: "123456"})
WriteResult({ "nInserted" : 1 })
> db.Account.find()  // View all documents in the specified collection
{ "_id" : ObjectId("5a5377cb503451a127782146"), "AccountID" : 1, "UserName" : "test", "password" : "123456", "Age" : 20 }
{ "_id" : ObjectId("5a537949503451a127782149"), "AccountID" : 2, "UserName" : "test2", "password" : "123456" }
> 


You can also query by criteria, for example to look up a document with a specific AccountID:





> db.Account.find({AccountID:1})
{ "_id" : ObjectId("5a5377cb503451a127782146"), "AccountID" : 1, "UserName" : "test", "password" : "123456", "Age" : 20 }
> db.Account.find({AccountID:2})
{ "_id" : ObjectId("5a537949503451a127782149"), "AccountID" : 2, "UserName" : "test2", "password" : "123456" }
>


To delete data based on criteria:


> db.Account.remove({AccountID:1})
WriteResult({ "nRemoved" : 1 })
> db.Account.find()
{ "_id" : ObjectId("5a537949503451a127782149"), "AccountID" : 2, "UserName" : "test2", "password" : "123456" }
> 







To delete a collection:





> db.Account.drop()
true
> show tables
mycol
>


To view the status of the collections in the current database:





> db.printCollectionStats()
mycol
{
    "ns" : "db1.mycol",
    "size" : 162,
    "count" : 2,
    "avgObjSize" : 81,
    "storageSize" : 32768,
    "capped" : true,
    "max" : 10000,
    "maxSize" : 6142976,
    "sleepCount" : 0,
    "sleepMS" : 0,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1
        },
        "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
        "type" : "file",
        "uri" : "statistics:table:collection-0-4593892186656792650",
        "LSM" : {
            "bloom filter false positives" : 0,
            "bloom filter hits" : 0,
            "bloom filter misses" : 0,
            "bloom filter pages evicted from cache" : 0,
            "bloom filter pages read into cache" : 0,
            "bloom filters in the LSM tree" : 0,
            "chunks in the LSM tree" : 0,
            "highest merge generation in the LSM tree" : 0,
            "queries that could have benefited from a Bloom filter that did not exist" : 0,
            "sleep for LSM checkpoint throttle" : 0,
            "sleep for LSM merge throttle" : 0,
            "total size of bloom filters" : 0
        },
        "block-manager" : {
            "allocations requiring file extension" : 7,
            "blocks allocated" : 7,
            "blocks freed" : 1,
            "checkpoint size" : 4096,
            "file allocation unit size" : 4096,
            "file bytes available for reuse" : 12288,
            "file magic number" : 120897,
            "file major version number" : 1,
            "file size in bytes" : 32768,
            "minor version number" : 0
        },
        "btree" : {
            "btree checkpoint generation" : 261,
            "column-store fixed-size leaf pages" : 0,
            "column-store internal pages" : 0,
            "column-store variable-size RLE encoded values" : 0,
            "column-store variable-size deleted values" : 0,
            "column-store variable-size leaf pages" : 0,
            "fixed-record size" : 0,
            "maximum internal page key size" : 368,
            "maximum internal page size" : 4096,
            "maximum leaf page key size" : 2867,
            "maximum leaf page size" : 32768,
            "maximum leaf page value size" : 67108864,
            "maximum tree depth" : 3,
            "number of key/value pairs" : 0,
            "overflow pages" : 0,
            "pages rewritten by compaction" : 0,
            "row-store internal pages" : 0,
            "row-store leaf pages" : 0
        },
        "cache" : {
            "bytes currently in the cache" : 1290,
            "bytes read into cache" : 0,
            "bytes written from cache" : 437,
            "checkpoint blocked page eviction" : 0,
            "data source pages selected for eviction unable to be evicted" : 0,
            "eviction walk passes of a file" : 0,
            "eviction walk target pages histogram - 0-9" : 0,
            "eviction walk target pages histogram - 10-31" : 0,
            "eviction walk target pages histogram - 128 and higher" : 0,
            "eviction walk target pages histogram - 32-63" : 0,
            "eviction walk target pages histogram - 64-128" : 0,
            "eviction walks abandoned" : 0,
            "eviction walks gave up because they restarted their walk twice" : 0,
            "eviction walks gave up because they saw too many pages and found no candidates" : 0,
            "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
            "eviction walks reached end of tree" : 0,
            "eviction walks started from root of tree" : 0,
            "eviction walks started from saved location in tree" : 0,
            "hazard pointer blocked page eviction" : 0,
            "in-memory page passed criteria to be split" : 0,
            "in-memory page splits" : 0,
            "internal pages evicted" : 0,
            "internal pages split during eviction" : 0,
            "leaf pages split during eviction" : 0,
            "modified pages evicted" : 0,
            "overflow pages read into cache" : 0,
            "page split during eviction deepened the tree" : 0,
            "page written requiring lookaside records" : 0,
            "pages read into cache" : 0,
            "pages read into cache requiring lookaside entries" : 0,
            "pages requested from the cache" : 2,
            "pages seen by eviction walk" : 0,
            "pages written from cache" : 4,
            "pages written requiring in-memory restoration" : 0,
            "tracked dirty bytes in the cache" : 0,
            "unmodified pages evicted" : 0
        },
        "cache_walk" : {
            "Average difference between current eviction generation when the page was last considered" : 0,
            "Average on-disk page image size seen" : 0,
            "Average time in cache for pages that have been visited by the eviction server" : 0,
            "Average time in cache for pages that have not been visited by the eviction server" : 0,
            "Clean pages currently in cache" : 0,
            "Current eviction generation" : 0,
            "Dirty pages currently in cache" : 0,
            "Entries in the root page" : 0,
            "Internal pages currently in cache" : 0,
            "Leaf pages currently in cache" : 0,
            "Maximum difference between current eviction generation when the page was last considered" : 0,
            "Maximum page size seen" : 0,
            "Minimum on-disk page image size seen" : 0,
            "Number of pages never visited by eviction server" : 0,
            "On-disk page image sizes smaller than a single allocation unit" : 0,
            "Pages created in memory and never written" : 0,
            "Pages currently queued for eviction" : 0,
            "Pages that could not be queued for eviction" : 0,
            "Refs skipped during cache traversal" : 0,
            "Size of the root page" : 0,
            "Total number of pages currently in cache" : 0
        },
        "compression" : {
            "compressed pages read" : 0,
            "compressed pages written" : 0,
            "page written failed to compress" : 0,
            "page written was too small to compress" : 4,
            "raw compression call failed, additional data available" : 0,
            "raw compression call failed, no additional data available" : 0,
            "raw compression call succeeded" : 0
        },
        "cursor" : {
            "bulk-loaded cursor-insert calls" : 0,
            "create calls" : 1,
            "cursor-insert key and value bytes inserted" : 164,
            "cursor-remove key bytes removed" : 0,
            "cursor-update value bytes updated" : 0,
            "insert calls" : 2,
            "modify calls" : 0,
            "next calls" : 0,
            "prev calls" : 1,
            "remove calls" : 0,
            "reserve calls" : 0,
            "reset calls" : 3,
            "restarted searches" : 0,
            "search calls" : 0,
            "search near calls" : 0,
            "truncate calls" : 0,
            "update calls" : 0
        },
        "reconciliation" : {
            "dictionary matches" : 0,
            "fast-path pages deleted" : 0,
            "internal page key bytes discarded using suffix compression" : 0,
            "internal page multi-block writes" : 0,
            "internal-page overflow keys" : 0,
            "leaf page key bytes discarded using prefix compression" : 0,
            "leaf page multi-block writes" : 0,
            "leaf-page overflow keys" : 0,
            "maximum blocks required for a page" : 1,
            "overflow values written" : 0,
            "page checksum matches" : 0,
            "page reconciliation calls" : 4,
            "page reconciliation calls for eviction" : 0,
            "pages deleted" : 0
        },
        "session" : {
            "object compaction" : 0,
            "open cursor count" : 1
        },
        "transaction" : {
            "update conflicts" : 0
        }
    },
    "nindexes" : 1,
    "totalIndexSize" : 32768,
    "indexSizes" : {
        "_id_" : 32768
    },
    "ok" : 1
}
---
>




21.31 PHP's mongodb extension


PHP officially provides two MongoDB extensions: mongodb.so and mongo.so. mongodb.so is the extension for newer versions of PHP, while mongo.so is for older versions of PHP.



The following are the official reference documents for the two extensions:


https://docs.mongodb.com/ecosystem/drivers/php/


Since both old and new versions of PHP are still in use, we need to know how to install both extensions. Let's start with the mongodb.so installation method.
There are two ways to install mongodb.so; the first is via git:





[root@localhost ~]# cd /usr/local/src/
[root@localhost /usr/local/src]# git clone https://github.com/mongodb/mongo-php-driver
[root@localhost /usr/local/src/mongo-php-driver]# git submodule update --init
[root@localhost /usr/local/src/mongo-php-driver]# /usr/local/php/bin/phpize
[root@localhost /usr/local/src/mongo-php-driver]# ./configure --with-php-config=/usr/local/php/bin/php-config
[root@localhost /usr/local/src/mongo-php-driver]# make && make install
[root@localhost /usr/local/src/mongo-php-driver]# vim /usr/local/php/etc/php.ini
extension = mongodb.so // add this line
[root@localhost /usr/local/src/mongo-php-driver]# /usr/local/php/bin/php -m | grep mongodb
mongodb
[root@localhost /usr/local/src/mongo-php-driver]#


This method can be a bit slow, because access to GitHub is not always smooth from inside China.



The second method is to install from the source package:





[root@localhost ~]# cd /usr/local/src/
[root@localhost /usr/local/src]# wget https://pecl.php.net/get/mongodb-1.3.0.tgz
[root@localhost /usr/local/src]# tar zxvf mongodb-1.3.0.tgz
[root@localhost /usr/local/src]# cd mongodb-1.3.0
[root@localhost /usr/local/src/mongodb-1.3.0]# /usr/local/php/bin/phpize
[root@localhost /usr/local/src/mongodb-1.3.0]# ./configure --with-php-config=/usr/local/php/bin/php-config
[root@localhost /usr/local/src/mongodb-1.3.0]# make && make install
[root@localhost /usr/local/src/mongodb-1.3.0]# vim /usr/local/php/etc/php.ini
extension = mongodb.so // add this line
[root@localhost /usr/local/src/mongodb-1.3.0]# /usr/local/php/bin/php -m | grep mongodb
mongodb
[root@localhost /usr/local/src/mongodb-1.3.0]#
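
After either installation method, it is worth confirming that the extension can actually talk to the server. The short PHP script below is only a sketch (it is not part of the original notes): it assumes a mongod listening on 127.0.0.1:27017 with authentication disabled, and it writes to an arbitrary test.demo collection, which MongoDB creates automatically on the first insert.

<?php
// Minimal sketch for verifying the mongodb.so extension.
// Database and collection names here are arbitrary; assumes mongod without --auth.
$manager = new MongoDB\Driver\Manager("mongodb://127.0.0.1:27017");

// Insert one document; the test.demo collection is created automatically.
$bulk = new MongoDB\Driver\BulkWrite();
$bulk->insert(["AccountID" => 1, "UserName" => "test"]);
$result = $manager->executeBulkWrite("test.demo", $bulk);
echo "inserted: " . $result->getInsertedCount() . "\n";

// Read the document back.
$query = new MongoDB\Driver\Query(["AccountID" => 1]);
foreach ($manager->executeQuery("test.demo", $query) as $doc) {
    echo $doc->UserName . "\n"; // expected output: test
}
?>

Note that mongodb.so only provides this low-level MongoDB\Driver API; higher-level helpers such as createCollection() come from the separate mongodb/mongodb library installed via Composer, or from the legacy mongo extension covered in the next section.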




21.32 PHP's mongo extension


The installation process is as follows:


[root@localhost ~]# cd /usr/local/src/
[root@localhost /usr/local/src]# wget https://pecl.php.net/get/mongo-1.6.16.tgz
[root@localhost /usr/local/src]# tar -zxvf mongo-1.6.16.tgz
[root@localhost /usr/local/src]# cd mongo-1.6.16/
[root@localhost /usr/local/src/mongo-1.6.16]# /usr/local/php/bin/phpize
[root@localhost /usr/local/src/mongo-1.6.16]# ./configure --with-php-config=/usr/local/php/bin/php-config
[root@localhost /usr/local/src/mongo-1.6.16]# make && make install
[root@localhost /usr/local/src/mongo-1.6.16]# vim /usr/local/php/etc/php.ini
extension = mongo.so // add this line
[root@localhost /usr/local/src/mongo-1.6.16]# /usr/local/php/bin/php -m | grep mongo
mongo
mongodb
[root@localhost /usr/local/src/mongo-1.6.16]#


To test the mongo extension:



1. First disable user authentication in MongoDB, then create a test page:





[root@localhost ~]# vim /usr/lib/systemd/system/mongod.service  # remove --auth
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart mongod.service
[root@localhost ~]# vim /data/wwwroot/abc.com/index.php  # edit the test page
<?php
$m = new MongoClient(); # connect
$db = $m->test; # get the database named "test"
$collection = $db->createCollection("runoob");
echo "Collection created successfully";
?>


2. Access the test page:





[root@localhost ~]# curl localhost/index.php
Collection created successfully
[root@localhost ~]#


3. Go to MongoDB to see if the collection exists:





[root@localhost ~]# mongo --host 192.168.77.130 --port 27017
> use test
switched to db test
> show tables
runoob    # the collection was created successfully
>
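
The legacy extension also exposes higher-level MongoClient, MongoDB and MongoCollection classes for basic CRUD. The snippet below is only a rough sketch building on the test page above; the field values and the cappedLog collection name are made up for illustration, and it again assumes a local mongod without authentication.

<?php
// Rough sketch using the legacy mongo extension (mongo.so).
// Field values and the cappedLog collection name are illustrative only.
$m = new MongoClient();                 # connect to localhost:27017
$db = $m->test;                         # select the "test" database
$collection = $db->runoob;              # the collection created by index.php

# Insert a document, then read it back.
$collection->insert(["AccountID" => 1, "UserName" => "test"]);
$doc = $collection->findOne(["AccountID" => 1]);
echo $doc["UserName"] . "\n";           # expected output: test

# createCollection() also accepts an options array, mirroring section 21.30:
$db->createCollection("cappedLog", ["capped" => true, "size" => 6142800, "max" => 10000]);
?>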


For more on connecting to MongoDB from PHP, refer to the following article:


http://www.runoob.com/mongodb/mongodb-php.html




Extended reading


MongoDB Security Settings
http://www.mongoing.com/archives/631



Executing JS scripts in MongoDB
http://www.jianshu.com/p/6bd8934bd1ca


