One Day Learning Go: mgo (MongoDB application scenarios)


This article focuses on the use of mgo, with only a brief introduction to MongoDB itself.

MongoDB Brief Introduction

Note: mongo does not support transactions, so if your application must guarantee data integrity (the classic example is a bank transfer), consider a relational database instead. mongo does, however, provide many atomic operations: saving, modifying, and deleting a document are all atomic. An atomic operation means the document is either saved to mongodb in full or not saved at all; a query will never see a half-written document.

About mgo

mgo is the MongoDB driver package for the Go language.
mgo official website: http://labix.org/mgo

mgo Usage

mgo Scheme One

package mgo

import (
    "flag"
    "log"
    "time"

    "gopkg.in/mgo.v2"

    "study/conf"
)

var session *mgo.Session
var database *mgo.Database

func init() {
    /* The MongoDB JSON config file contains:
       {"hosts": "localhost", "database": "user"} */
    filename := flag.String("config", "./conf/config.json", "Path to configuration file")
    flag.Parse()
    config := &conf.ConfigurationDatabase{}
    config.Load(*filename)

    var err error
    dialInfo := &mgo.DialInfo{
        Addrs:     []string{config.Hosts},
        Direct:    false,
        Timeout:   time.Second * 1,
        PoolLimit: 4096, // see Session.SetPoolLimit
    }
    // Create a session, which maintains the socket pool
    session, err = mgo.DialWithInfo(dialInfo)
    if err != nil {
        log.Println(err.Error())
    }
    session.SetMode(mgo.Monotonic, true)
    // Use the specified database
    database = session.DB(config.Database)
}

func GetMgo() *mgo.Session       { return session }
func GetDataBase() *mgo.Database { return database }
func GetErrNotFound() error      { return mgo.ErrNotFound }

The session here can communicate with all the servers in the MongoDB cluster.

The session consistency modes are:

    • Strong
      The session reads from and writes to the primary server over a single unique connection, so all read and write operations are fully consistent.
    • Monotonic
      The session's reads initially go to a secondary server (over a unique connection); as soon as a write occurs, the session's connection switches to the primary server. In this mode some reads can be spread across secondaries, but a read is not guaranteed to return the latest data.
    • Eventual
      Reads may go to any secondary server, and successive reads do not necessarily use the same connection, so reads are not necessarily ordered. Writes always go to the primary server, but may use different connections, so writes are not necessarily ordered either.
Partial code from a personal project:

type User struct {
    ID       bson.ObjectId `bson:"_id"`
    UserName string        `bson:"username"`
    Summary  string        `bson:"summary"`
    Age      int           `bson:"age"`
    Phone    int           `bson:"phone"`
    PassWord string        `bson:"password"`
    Sex      int           `bson:"sex"`
    Name     string        `bson:"name"`
    Email    string        `bson:"email"`
}

func Register(password string, username string) (err error) {
    con := mgo.GetDataBase().C("user")
    // Insert can add one or more documents.
    /* Equivalent mongo shell command:
       db.user.insert({username: "13888888888", summary: "code", age: 20, phone: "13888888888"}) */
    err = con.Insert(&User{ID: bson.NewObjectId(), UserName: username, PassWord: password})
    return
}

func FindUser(username string) (User, error) {
    var user User
    con := mgo.GetDataBase().C("user")
    // bson.M (a map[string]interface{} type) filters the query by condition.
    /* Equivalent mongo shell command:
       db.user.find({username: "13888888888"}) */
    if err := con.Find(bson.M{"username": username}).One(&user); err != nil {
        if err.Error() != mgo.GetErrNotFound().Error() {
            return user, err
        }
    }
    return user, nil
}

Find() can perform single or full queries, and supports pagination. A simple example:

con.Find(nil).Limit(5).Skip(0).All(&users)
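The Skip/Limit arithmetic behind pagination is easy to get off by one. The helper below is a minimal sketch (the `skipFor` function is hypothetical, not part of mgo) that makes the page math explicit:

```go
package main

import "fmt"

// skipFor converts a 1-based page number and page size into the number of
// documents to skip; with mgo it would be used as
//   con.Find(nil).Skip(skipFor(page, perPage)).Limit(perPage).All(&users)
func skipFor(page, perPage int) int {
	if page < 1 {
		page = 1 // clamp invalid page numbers to the first page
	}
	return (page - 1) * perPage
}

func main() {
	fmt.Println(skipFor(1, 5)) // page 1: skip 0
	fmt.Println(skipFor(3, 5)) // page 3: skip 10
}
```

Note that skip-based pagination scans and discards the skipped documents on the server, so for very deep pages a range query on an indexed field is usually faster.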

package models

import (
    "time"

    "gopkg.in/mgo.v2/bson"

    "study/library/mgo"
)

type Diary struct {
    ID         bson.ObjectId `bson:"_id"`
    Uid        bson.ObjectId `bson:"uid"`
    CreatTime  time.Time     `bson:"creattime"`
    UpdateTime time.Time     `bson:"updatetime"`
    Title      string        `bson:"title"`
    Content    string        `bson:"content"`
    Mood       int           `bson:"mood"`
    Pic        []string      `bson:"pic"`
}

// FindDiary finds the author's articles by uid and also returns the author's name.
func FindDiary(uid string) ([]interface{}, error) {
    con := mgo.GetDataBase().C("diary")
    // The $lookup stage implements something like a MySQL join, making related queries easy.
    /* Equivalent mongo shell command:
       db.diary.aggregate([
           {$match: {uid: ObjectId("58e7a1b89b5099fdc585d370")}},
           {$lookup: {from: "user", localField: "uid", foreignField: "_id", as: "user"}},
           {$project: {"user.name": 1, title: 1, content: 1, mood: 1}}
       ]).pretty() */
    pipeline := []bson.M{
        {"$match": bson.M{"uid": bson.ObjectIdHex(uid)}},
        {"$lookup": bson.M{"from": "user", "localField": "uid", "foreignField": "_id", "as": "user"}},
        {"$project": bson.M{"user.name": 1, "title": 1, "content": 1, "mood": 1, "creattime": 1}},
    }
    pipe := con.Pipe(pipeline)
    var data []interface{}
    err := pipe.All(&data)
    if err != nil {
        return nil, err
    }
    return data, nil
}

func ModifyDiary(id, title, content string) (err error) {
    con := mgo.GetDataBase().C("diary")
    /* Equivalent mongo shell command:
       db.diary.update({_id: ObjectId("58e7a1b89b5099fdc585d370")},
           {$set: {title: "new title", content: "new content", updatetime: new Date()}}) */
    // Note: the _id selector must be an ObjectId, not the raw hex string.
    err = con.Update(bson.M{"_id": bson.ObjectIdHex(id)},
        bson.M{"$set": bson.M{"title": title, "content": content, "updatetime": time.Now().Add(8 * time.Hour)}})
    return
}

mgo offers many ways to update, such as batch update with con.UpdateAll(selector, update), or update-or-insert with con.Upsert(selector, update).
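Upsert's update-or-insert behavior is worth pinning down. The sketch below imitates it with a plain Go map standing in for a collection (an illustration of the semantics only, not mgo's implementation): update the matched document if one exists, otherwise insert a new one.

```go
package main

import "fmt"

// collection is a toy set of documents keyed by username, standing in
// for a mongo collection.
type collection map[string]map[string]interface{}

// upsert mimics con.Upsert(selector, update): modify the matched
// document, or insert a new one when nothing matches. It reports
// whether an insert happened.
func (c collection) upsert(username string, update map[string]interface{}) (inserted bool) {
	doc, ok := c[username]
	if !ok {
		doc = map[string]interface{}{"username": username}
		c[username] = doc
		inserted = true
	}
	for k, v := range update {
		doc[k] = v
	}
	return inserted
}

func main() {
	c := collection{}
	fmt.Println(c.upsert("alice", map[string]interface{}{"age": 20})) // true: inserted
	fmt.Println(c.upsert("alice", map[string]interface{}{"age": 21})) // false: updated
	fmt.Println(c["alice"]["age"])                                    // 21
}
```

In real mgo, Upsert additionally returns a ChangeInfo describing whether a document was inserted or updated, so the same distinction is available there.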



mgo Scheme Two

Idea: in scheme one the session is shared globally. In a real program we can instead handle each request in its own goroutine, have each goroutine obtain its own session with session.Copy() (or session.Clone(), which reuses the original socket), and close it with session.Close() when done. Under high concurrency this seems like it should improve efficiency.

The following parts of the code are modified:

package mgo

import (
    "flag"
    "log"
    "time"

    "gopkg.in/mgo.v2"

    "study/conf"
)

var session *mgo.Session
var config *conf.ConfigurationDatabase

func init() {
    filename := flag.String("config", "./conf/config.json", "Path to configuration file")
    flag.Parse()
    config = &conf.ConfigurationDatabase{}
    config.Load(*filename)

    var err error
    dialInfo := &mgo.DialInfo{
        Addrs:     []string{config.Hosts},
        Direct:    false,
        Timeout:   time.Second * 1,
        PoolLimit: 4096, // see Session.SetPoolLimit
    }
    session, err = mgo.DialWithInfo(dialInfo)
    if err != nil {
        log.Println(err.Error())
    }
    session.SetMode(mgo.Monotonic, true)
}

type SessionStore struct {
    session *mgo.Session
}

// C returns the named collection of the configured database.
func (d *SessionStore) C(name string) *mgo.Collection {
    return d.session.DB(config.Database).C(name)
}

// NewSessionStore creates a new SessionStore for each HTTP request.
func NewSessionStore() *SessionStore {
    return &SessionStore{
        session: session.Copy(),
    }
}

func (d *SessionStore) Close() {
    d.session.Close()
}

func GetErrNotFound() error {
    return mgo.ErrNotFound
}

The lookup function was changed accordingly:

func FindUser(username string) (User, error) {
    var user User
    ds := mgo.NewSessionStore()
    defer ds.Close()
    con := ds.C("user")
    if err := con.Find(bson.M{"username": username}).One(&user); err != nil {
        if err.Error() != mgo.GetErrNotFound().Error() {
            return user, err
        }
    }
    return user, nil
}

Testing mgo schemes one and two:
Use boom for concurrency testing, sleeping 5 seconds inside each goroutine so connections are not released immediately. You can observe that scheme two keeps creating new connections, while scheme one does not. Use db.serverStatus().connections in the mongo shell to view the connection count.

mgo scheme one connection counts: at 1000 concurrent requests, mongo holds 3 connections; at 5000, still 3 connections.

mgo scheme two connection counts: at 1000 concurrent requests, mongo holds 500+ connections; at 5000, 1400+ connections.

Tip: mgo's default connection pool limit is 4096. Under high concurrency, if a session obtained with Clone() or Copy() is never closed, the connection count quickly reaches 4096 and blocks other requests, so always pair Clone()/Copy() with defer Close(). The PoolLimit parameter caps the total number of connections; when the limit is exceeded, the current goroutine waits until a connection becomes available.

Test result: under concurrency, mgo schemes one and two perform about the same.


Why?


Possibly the effect cannot be seen because the data set is too small or because everything runs against a single mongo instance. Since the current project uses only one mongo instance, I will test again later with multiple instances or a larger data set. If you have any good suggestions, please share them for study and discussion.
Recommended Learning:
http://goinbigdata.com/how-to-build-microservice-with-mongodb-in-golang/
The official blog covers mgo's concurrent processing in detail:
https://www.mongodb.com/blog/post/running-mongodb-queries-concurrently-with-go
