MongoDB Learning Notes, Part One

Source: Internet
Author: User
Tags: bulk insert, mongodb, client

I. Introduction and installation

Start the server:
    ./bin/mongod --dbpath /path/to/database --logpath /path/to/log --fork --port 27017
MongoDB takes up about 3-4 GB of disk space right after it starts; add --smallfiles to reduce this.

II. Basic commands

1. Log in with the MongoDB client (the mongo shell), e.g. /usr/local/mongo
2. View the databases: show databases; or show dbs; (both work).
3. admin is the administrative database and local holds instance-local data; do not tamper with either of them.
4. Switch databases: use admin; then list its collections with show tables or show collections. Do not operate casually on the collections under admin.
5. db.help() lists the database-level helper commands.
6. Create a database: MongoDB creates it implicitly: use shop;
7. Create a table (collection): db.createCollection('user');
8. show dbs;
9. View the tables (collections): show collections
10. Insert data: db.user.insert({name: 'lisi', age: 22});
11. View the data: db.user.find();
12. db.user.insert({_id: 3, name: 'hmm', hobby: ['basketball', 'football'], intro: {title: 'my intro'}});
13. Collections are also created implicitly, without an explicit createCollection.
14. db.goods.insert({_id: 1, name: 'n1', price: 133}); this shows that the goods collection is created without being declared.
15. Drop a collection: db.user.drop();    // drops the user collection
16. Drop the current database: db.dropDatabase();

III. CRUD operations

1. Insert a single document: db.stu.insert({sn: '001', name: 'xiaoming'}); override the default _id: db.stu.insert({_id: 2, sn: '002', name: 'lisi'});
2. Bulk insert (pass an array): db.stu.insert([{_id: 3, sn: '003', name: 'guangyu'}, {_id: 4, sn: '004', name: 'zhangwei'}]);
3. Documents can be nested to any depth: db.stu.insert({name: {xing: 'li', ming: 'shimin'}, jli: 'xxx'});
4. Delete: db.stu.remove({sn: '001'}); deletes the documents whose sn is '001'.
5. Delete everything: db.stu.remove({});
6. The query expression is a JSON object.
7. More deletes: db.stu.remove({gender: 'm'}); db.stu.remove({gender: 'm'}, true); the true flag restricts the delete to a single document.
8. Incorrect modification: db.stu.update({name: 'zhangsan'}, {name: 'lisi'}); the first expression selects the documents to modify, the second one says what to change them to. The problem with this form is that the new document replaces the old one outright, so the other fields are lost (see the sketch right below).
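A minimal sketch of that pitfall, using the stu collection from these notes (the extra gender field is only there to show a field being lost):

    // start from a document that has several fields
    db.stu.insert({ sn: '001', name: 'zhangsan', gender: 'm' });

    // replacement-style update: the second argument becomes the whole new document
    db.stu.update({ name: 'zhangsan' }, { name: 'lisi' });

    db.stu.findOne({ name: 'lisi' });   // only _id and name remain; sn and gender are gone

Item 9 below shows the $set form, which changes only the named fields.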
9. Correct modification: db.stu.update({name: 'poxi'}, {$set: {name: 'yanpoxi'}});
   $set modifies a field, $unset removes it (1/true removes, 0/false does not), $rename renames a field, $inc increments a numeric field:
    db.stu.update({name: 'wukong'}, {
        $set: {name: 'douzhanshengfo'},
        $unset: {jinggu: 1},
        $rename: {sex: 'gender'},
        $inc: {age: 2}
    });
10. multi, modify multiple rows: db.stu.update({gender: 'm'}, {$set: {gender: 'male'}}, {multi: true}); with multi, all matching documents are modified instead of only the first matching one.
11. upsert and $setOnInsert:
    db.stu.update({name: 'wuyong'}, {$set: {name: 'junshiwuyong'}});
   If no document named 'wuyong' exists, nothing is modified.
    db.stu.update({name: 'wuyong'}, {$set: {name: 'junshiwuyong'}}, {upsert: true});
   With upsert: true the document is inserted automatically when it does not exist, but it then contains only the fields from $set; to also fill in the other fields on insert, add $setOnInsert:
    db.stu.update(
        {name: 'wuyong'},
        {$set: {name: 'junshiwuyong'}, $setOnInsert: {gender: 'male'}},
        {upsert: true}
    );
12. Queries: all documents with all fields: db.stu.find(); only the gender field of every document: db.stu.find({}, {gender: 1}); the gender field without _id: db.stu.find({}, {gender: 1, _id: 0});

IV. Query expressions in depth

1. The simplest query expression: {field: value} matches the documents whose field column equals value.
2. $ne (not equal): {field: {$ne: value}} matches the documents whose field column is not equal to value. Related operators: $gt greater than, $lt less than, $lte less than or equal, $gte greater than or equal.
3. $nin (not in): {field: {$nin: [v1, v2, ...]}}, e.g. {cat_id: {$nin: [30, 300]}}.
4. $all: {field: {$all: [v1, v2, ...]}} matches the documents whose field column is an array containing at least the values v1 and v2.
5. $and / $or:
    {$and: [{shop_price: {$gte: 30}}, {shop_price: {$lte: 300}}]}
    {$or: [{shop_price: {$ne: 30}}, {shop_price: {$eq: 300}}]}
6. $nor: {$nor: [condition1, condition2]} returns the documents that satisfy none of the conditions.
7. Modulo, $mod:
    db.goods.find({goods_id: {$mod: [5, 0]}}, {goods_id: 1, goods_name: 1, _id: 0});
8. $exists, match documents where a column exists:
    db.goods.find({age: {$exists: 1}}, {goods_id: 1, goods_name: 1, _id: 0});
9. $type, query by BSON type: {age: {$type: 2}} (2 is the BSON type number for string).
10. $where. Example: db.goods.find({$where: 'this.cat_id != 3 && this.cat_id != 11'}); the stored BSON is converted into objects so that any property can be referenced, which is extremely inefficient. It is not recommended; the advantage is that very complex and flexible expressions can be written.
11. Query with a regular expression, e.g. products whose name starts with "Nokia": db.goods.find({goods_name: /^Nokia.*/}, {goods_name: 1});

Cursor operations

    > var myCursor = db.bar.find({_id: {$lte: 5}});
    > while (myCursor.hasNext()) {
    ...     printjson(myCursor.next());
    ... }

V. User management

VI. Import and export

VII. Replica sets

If the primary server is shut down (db.shutdownServer();), another server automatically takes over as the new primary. To build a mongo replica set automatically, refer to the replica set installation script (.sh).
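The script itself is not included here; as a rough sketch of the core step such a script automates (hosts, ports, and data directories below are illustrative assumptions; the replica set name rs3 matches the one used in section IX):

    // start three mongod instances that share one replica set name, e.g.:
    //   mongod --dbpath /home/r1 --logpath /home/mlog/r1.log --fork --port 27117 --replSet rs3
    //   mongod --dbpath /home/r2 --logpath /home/mlog/r2.log --fork --port 27118 --replSet rs3
    //   mongod --dbpath /home/r3 --logpath /home/mlog/r3.log --fork --port 27119 --replSet rs3
    // then, in the mongo shell of one of them:
    rs.initiate({
        _id: 'rs3',                                  // must match --replSet
        members: [
            { _id: 0, host: '192.168.1.202:27117' },
            { _id: 1, host: '192.168.1.202:27118' },
            { _id: 2, host: '192.168.1.202:27119' }
        ]
    });
    rs.status();   // one member should become PRIMARY, the others SECONDARY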
VIII. Sharding

Shard configuration:

1. Start two mongod servers:
    mkdir -p /home/m17 /home/m18 /home/m20 /home/m30 /home/mlog
    mongod --dbpath /home/m17 --logpath /home/mlog/m17.log --fork --port 27017
    mongod --dbpath /home/m18 --logpath /home/mlog/m18.log --fork --port 27018
2. Configure the config server, which manages the metadata:
    mongod --dbpath /home/m20 --logpath /home/mlog/m20.log --fork --port 27020 --configsvr
3. Configure a mongos router:
    mongos --logpath /home/mlog/m30.log --port 30000 --configdb 192.168.1.202:27020 --fork
4. Connect to the router: mongo --port 30000
5. Add the shard nodes on the router:
    sh.addShard('192.168.10.202:27017')
    sh.addShard('192.168.10.202:27018')
6. On the router, check the routing status: sh.status(); it now shows the shard node information.
7. Insert data through the router.
8. The router can see the data, but only the 27017 node holds it and 27018 has none, because no sharding rule has been added yet.
9. sh.status() shows the database with partitioned: false and its primary shard as shard0000.
10. Enable sharding for the database: sh.enableSharding('shop');
11. sh.status() now shows the shop database with partitioned: true, still placed primarily on shard0000.
12. Shard the table, using goods_id as the shard key: sh.shardCollection('shop.goods', {goods_id: 1});
13. View sh.status(); in the current shard information you can see that data lands on shard0001 first.
14. Insert 30,000 rows.
15. View sh.status(); the data is on shard0001.
16. Insert another 100,000 rows and check whether the data still lands on shard0001.
17. The reason for the behavior above: MongoDB does not spread data evenly at the level of individual documents. Instead, every N documents form a block, a "chunk", which goes onto one shard first; when the number of chunks on one shard exceeds that on another shard by a large enough margin (>= 3), chunks are migrated to the other shard. Balance between shards is therefore maintained in units of chunks.

Q: Why do 100,000 inserted rows produce only 2 chunks?
A: Because a chunk is fairly large (64 MB by default); you can modify the chunksize value in the config database.

Q: Since data is inserted onto one shard first and chunks are migrated only once the shards become unbalanced, chunks will keep moving back and forth between shards as the data grows. What problem does that cause?
A: Increased IO between the servers.

Q: Then can I define a rule up front, so that every N rows form one chunk, M chunks are pre-allocated, and those M chunks are distributed across the shards in advance, and future data goes straight into its pre-allocated chunk without anything moving back and forth?
A: Yes: manual pre-sharding.
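Before shrinking the chunk size (item 18 below), it helps to see how many chunks each shard currently holds. A minimal sketch, assuming the shop.goods collection and the shard0000/shard0001 names from the steps above, on a MongoDB version of this era where config.chunks is keyed by the ns field:

    // run in a mongo shell connected to the mongos router (port 30000 above)
    use shop
    db.goods.getShardDistribution();                 // per-shard document and chunk summary

    use config
    db.chunks.find({ ns: 'shop.goods' }, { min: 1, max: 1, shard: 1, _id: 0 });   // one document per chunk
    db.chunks.count({ ns: 'shop.goods', shard: 'shard0000' });                    // chunks on the first shard
    db.chunks.count({ ns: 'shop.goods', shard: 'shard0001' });                    // chunks on the second shard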
18. In the config database, modify the chunk size:
    use config
    show tables;
    db.settings.find();
    db.settings.save({_id: 'chunksize', value: 1});   // chunk size in MB
19. View the shard information: sh.status();
20. Keep inserting more data: 150,000 rows, or even 500,000.
21. View the shard information: sh.status();
22. You can see that the number of chunks on the two shard nodes is now roughly balanced.

Manual pre-sharding:

1. Shard the user table under the shop library, with userid as the shard key: sh.shardCollection('shop.user', {userid: 1})
2. sh.status();
3. Simulation: 40 chunks, 1,000 rows per chunk.
4. Define the split points:
    for (var i = 1; i <= 40; i++) {
        sh.splitAt('shop.user', {userid: i * 1000});
    }
   There will be 40 chunks, and they are distributed evenly across the nodes, so inserting data will not cause chunks to move around frequently.
5. Quickly view the shard information: sh.status();
6. Insert 40,000 rows:
    for (var i = 0; i < 40000; i++) {
        db.user.insert({userid: i, name: 'hello'});
    }
7. Check the total data on the two shard nodes: 20,000 rows each.

IX. Sharding combined with replica sets

B: config server; C: replica set; D: replica set.

1. Create replica sets C and D, each with the automated creation script.
2. Each replica set (C, D) can be treated as a single machine; use the primary nodes of C and D as the shard nodes.
3. Configure the config server, which manages the metadata:
    mongod --dbpath /home/m20 --logpath /home/mlog/m20.log --fork --port 27020 --configsvr
4. Configure a mongos router:
    mongos --logpath /home/mlog/m30.log --port 30000 --configdb 192.168.1.202:27020 --fork
5. Add the shard nodes on the router:
    sh.addShard('rs3/192.168.10.203:27017')   // assume 203:27017 is the primary node of C
    sh.addShard('rs4/192.168.10.203:27018')   // assume 203:27018 is the primary node of D
6. Enable sharding on the library that needs it: sh.enableSharding('shop');
7. Shard the table that needs it: sh.shardCollection('shop.user', {userid: 1});
8. Manual pre-split test:
    sh.splitAt('shop.user', {userid: 1000})   // split after 1,000 rows
    sh.splitAt('shop.user', {userid: 2000})   // split after 2,000 rows
    sh.splitAt('shop.user', {userid: 3000})   // split after 3,000 rows
9. Insert data:
    for (var i = 0; i < 4000; i++) {
        db.user.insert({userid: i, intro: 'i am lili'});
    }
10. Check the data volume on the two replica sets, on both the primary and the secondary nodes.
11. Remove a shard:
    use admin
    db.runCommand({removeShard: 'rs2'})
12. Query the shard information: sh.status(); you can see that the data on the removed shard is moved to another shard, and the removed shard is marked "draining": true.
13. Restore the removed shard (in the config database):
    db.shards.update({_id: 'rs2'}, {$unset: {draining: true}}, false, true)
    You can then see data moving back onto the restored shard.

X. URL-shortening project

1. Feature description:
    * Enter a URL
    * Click to shorten it
    * Generate a relatively short URL
    * If the short URL already exists, show a hint that it already exists
    * Data to store: the original URL and the short URL
2. Short URL generation.
3. Connect PHP to MongoDB: compile the PHP extension.
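The project notes stop at the feature list; a minimal sketch of the storage side they describe, in the mongo shell, with illustrative collection and field names and a placeholder shortening scheme:

    // one document per shortened URL
    db.shorturl.ensureIndex({ url: 1 }, { unique: true });   // lets us detect an already-shortened URL

    var longUrl = 'http://www.example.com/some/long/path';
    var existing = db.shorturl.findOne({ url: longUrl });
    if (existing) {
        print('already exists: ' + existing.short);          // the hint required by the feature list
    } else {
        var shortCode = new Date().getTime().toString(36);   // placeholder scheme; a real generator would be shorter and collision-safe
        db.shorturl.insert({ url: longUrl, short: shortCode });
        print('created: ' + shortCode);
    }

In a PHP front end, this lookup-then-insert would run through the MongoDB extension mentioned in item 3.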
