"MongoDB Tutorial 16th" Sharing no-sql development combat

Source: Internet
Author: User
Tags: failover, findOne, MongoDB driver, MongoDB tutorial

My recent study of NoSQL is compiled under the following outline:

First, the bottlenecks of relational databases;

Second, an overview of NoSQL;

Third, an introduction to MongoDB, a popular NoSQL database, and its installation and configuration;

Fourth, the MongoDB development model and hands-on practice.

First, the bottlenecks of relational databases

From the 1990s to the present, relational databases have played the most important role. Their performance, scalability, stability, and backup and recovery mechanisms are all very good, and the technology has matured: a relational database gives users a complete system covering data storage, backup and recovery, encryption and decryption, application development drivers, graphical configuration and maintenance tools, security policies, and more. Figure 1 shows the usage share of various databases worldwide, and their dominance is clear.

Figure 1: Usage ratios for various databases

However, with the rapid development of information technology and the shift from Web 1.0 to Web 2.0, websites began to grow explosively. Blogs, e-commerce sites, microblogs, communities, and other web applications led the trend of the times, and their traffic became enormous. To cope with this, many IT companies adopted a series of optimizations, the main ones being the following:

1. Cache + SQL

To improve site performance, data that is read frequently but updated rarely is often kept in an in-memory cache. On one hand this improves the user experience; on the other it reduces the access pressure on the database.
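A minimal cache-aside sketch of this idea follows. It is only an illustration of the pattern, not code from this system: the Article type, the loadFromSql delegate, the cache key format, and the 10-minute expiry are all assumptions, and MemoryCache requires a reference to System.Runtime.Caching.

    using System;
    using System.Runtime.Caching; // .NET Framework: reference System.Runtime.Caching.dll

    public class Article
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Body { get; set; }
    }

    public static class ArticleCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Read path: hit the in-memory cache first, fall back to the relational database.
        public static Article GetArticle(int id, Func<int, Article> loadFromSql)
        {
            string key = "article:" + id;
            var cached = Cache.Get(key) as Article;
            if (cached != null)
                return cached;                        // hot, rarely-updated data served from memory

            Article fromDb = loadFromSql(id);         // only cache misses reach the database
            if (fromDb != null)
                Cache.Set(key, fromDb, DateTimeOffset.Now.AddMinutes(10));
            return fromDb;
        }
    }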

2. Read/write separation

Another good approach is read/write separation. For example, data generated by the intranet application system can be published (replicated) to an extranet-facing database, so that Internet-facing applications only read from the external database while most inserts, updates, and deletes happen on the intranet database. This can greatly improve application performance.
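A simplified sketch of this routing idea: reads go to a read-only replica and writes go to the primary. The ConnectionFactory class and both connection strings below are assumptions for illustration, not part of the original system.

    using System.Data.SqlClient;

    // Illustrative read/write routing: reads use a replica, writes use the primary.
    public static class ConnectionFactory
    {
        private const string PrimaryConn = "Server=intranet-db;Database=App;Integrated Security=true;";
        private const string ReplicaConn = "Server=readonly-replica;Database=App;Integrated Security=true;";

        // INSERT / UPDATE / DELETE go to the writable primary database.
        public static SqlConnection ForWrite()
        {
            return new SqlConnection(PrimaryConn);
        }

        // SELECT queries go to the read-only replica.
        public static SqlConnection ForRead()
        {
            return new SqlConnection(ReplicaConn);
        }
    }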

3. Splitting tables and databases

With the rapid growth of Web 2.0, even after Cache + SQL, master-slave replication, and read/write separation have been applied, write pressure still bottlenecks on the relational database's primary instance. As data keeps exploding and access stays highly concurrent, the relational database develops serious locking problems. At that point, splitting tables and databases became popular as a way to relieve write pressure and absorb data growth. Early on I built an application with exactly this need: as data accumulated, the database grew very large, the database and log files reached 10 GB, and some tables had accumulated a huge number of rows. Operations would often freeze for several seconds or even tens of seconds, and the customer was very dissatisfied. After discussion we split the database so that each business area got its own database, and split the largest tables, for example generating one table per month, as well as splitting the columns of some wide tables across multiple tables.
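A rough sketch of the month-per-table split mentioned above; the base table name, the yyyyMM naming convention, and the GetMonthlyTableName helper are assumptions used only for illustration.

    using System;

    // Illustrative horizontal split: route rows to a per-month table, e.g. OperationLog_201904.
    public static class TableRouter
    {
        public static string GetMonthlyTableName(string baseTable, DateTime when)
        {
            return baseTable + "_" + when.ToString("yyyyMM");
        }
    }

    // Usage (hypothetical): build the SQL against the routed table name.
    // string table = TableRouter.GetMonthlyTableName("OperationLog", DateTime.Now);
    // string sql = "INSERT INTO " + table + " (UserId, Action, CreatedAt) VALUES (@uid, @action, @ts)";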

With these optimizations, system performance improves considerably: query throughput can reach hundreds to thousands of QPS, and the database can grow to roughly 1 TB. But as traffic and data keep increasing, the relational database struggles to remain efficient. Splitting tables and databases relieves the bottleneck to a certain extent, but it also reduces the application's flexibility and carries significant technical and development costs; a single change in requirements may force a new round of re-splitting.

Relational databases also commonly end up storing large text and attachments, which makes the database very big and very slow to restore. For example, large text fields of a few kilobytes each can add up to nearly 30 GB, and over a million kilobyte-scale attachments can reach 200 GB. If these large text fields and attachments could be moved out of the relational database, the relational database would become small and easy to optimize.

In conclusion, relational databases are very powerful, but they do not handle every application scenario well. Poor horizontal scalability (scaling out requires complex techniques), heavy I/O pressure under big data, and the difficulty of changing table structures are the problems developers currently face with relational databases.

II. Overview of NoSQL

1. What is NoSQL?

With the rapid development of Web 2.0, non-relational, distributed data stores have developed quickly; they do not guarantee the ACID properties of relational data. The term NoSQL was popularized in 2009. The most common reading of NoSQL is "non-relational", and "not only SQL" is also widely accepted. (The name "NoSQL" was first used in 1998 for a lightweight relational database.)

Key-value stores are the most widely used kind of NoSQL database, but there are also document stores, column stores, graph databases, XML databases, and others; see Figure 2. Before the NoSQL concept was introduced, these databases were already used in various systems, but they were rarely used in web Internet applications.

Figure 2: Types of non-relational databases

2. What is the development status of NoSQL?

NoSQL is very hot at the moment. Microblogs, e-commerce sites, blogs, communities, forums, and other high-traffic, data-heavy Internet applications basically all use it. The big IT giants are adding NoSQL solutions to their own Internet architectures, and some even have their own NoSQL products; NoSQL products of all kinds are blossoming (see Figure 3). After 2010, NoSQL adoption exploded, and among these products MongoDB has the strongest momentum and is the hottest.

Figure 3: NoSQL development trend

3. What is the relationship between NoSQL and relational databases?

I think relational databases and NoSQL are closely complementary, and we should choose the database according to the specific application scenario. If your application's data volume is small, a relational database is sufficient, and performance, efficiency, stability, and security are all guaranteed. If your scenario involves large volumes of data (large text, many attachments), say hundreds of gigabytes or terabytes, consider combining a relational database with NoSQL: the relational database stores the relational data, and the NoSQL database stores data such as large text, objects, and attachments. That is an optimal solution.
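One way to picture this hybrid approach: the relational row keeps only a small reference (for example, a GridFS file name), while the bulky attachment itself lives in MongoDB. The entity below is a sketch invented for illustration, not a schema from this article.

    // Illustrative hybrid model: structured fields stay relational,
    // the large attachment is stored in MongoDB (e.g. via GridFS) and referenced by file name.
    public class KnowledgeRecord
    {
        public int Id { get; set; }                   // relational primary key
        public string Title { get; set; }             // relational column
        public string AttachmentFileId { get; set; }  // GridFS file name stored in MongoDB
    }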

III. Introduction to the NoSQL database MongoDB and its installation and configuration

1. Concept

MongoDB is a high-performance, open-source, schema-free, document-oriented database, and one of the most popular NoSQL databases. In many scenarios it can replace a traditional relational database or a key/value store. MongoDB is written in C++; for its internal structure, see Figure 3 (MongoDB internal composition). The official website is http://www.mongodb.org/, and a good introductory book is "MongoDB: The Definitive Guide", which is available in a Chinese edition.

2. Features

Collection-oriented storage, well suited to storing object-like data.
Schema-free.
Supports dynamic queries.
Supports full indexing, including on embedded objects.
Supports queries.
Supports replication and recovery.
Uses an efficient binary storage format, including for large objects such as video.
Handles sharding automatically to support cloud-scale scalability.
Provides drivers for Python, PHP, Java, C#, JavaScript, and other languages.
Stores data in BSON (an extension of JSON).
Accessible over the network.

3. Functions

Collection-oriented storage: well suited to storing objects and JSON-style data.
Dynamic queries: MongoDB supports rich query expressions. Query commands use JSON-style documents and can easily query objects and arrays embedded in a document.
Full index support: includes embedded objects and arrays in documents. MongoDB's query optimizer analyzes the query expression and generates an efficient query plan.
Query monitoring: MongoDB includes a monitoring tool for analyzing the performance of database operations.
Replication and automatic failover: MongoDB supports data replication between servers, including master-slave replication. The primary goal of replication is to provide redundancy and automatic failover.
Efficient binary storage: supports binary data and large objects (such as photos or pictures).
Automatic sharding for cloud-scale scalability: automatic sharding supports horizontally scaled database clusters, with additional machines added dynamically.

4. Use scenarios

Website data: MongoDB is ideal for real-time inserts, updates, and queries, and provides the replication and high scalability that real-time website data storage requires.
Caching: because of its high performance, MongoDB also works as a caching layer in an information infrastructure. After a system restart, a persistent cache layer built on MongoDB prevents the underlying data sources from being overloaded.
Large volumes of low-value data: storing such data in a traditional relational database can be expensive, and programmers often fall back to plain files for storage; MongoDB is a better fit.
Highly scalable scenarios: MongoDB is well suited to databases made up of dozens or hundreds of servers. Built-in support for a MapReduce engine is already on the MongoDB roadmap.
Storage of objects and JSON data: MongoDB's BSON format is ideal for storing and querying document-style data.

5. Installation process

First step: Download the installation package from the official website. On Windows, note whether you need the 64-bit or 32-bit build and choose the correct version.

Second step: Create a new directory "D:\MongoDB", unzip the downloaded package, find the .exe files under its bin directory, and copy them into the directory you just created.

Step three: Create a new "data" folder in the "D:\MongoDB" directory, which will be the root folder for data storage.

Configure the MongoDB server:

Open a cmd window and enter the following commands:
> d:
> cd D:\MongoDB
> mongod --dbpath D:\MongoDB\data

If startup succeeds, you will see a screen like the one in Figure 4:

Figure 4: Successful startup screen

Fourth step: Install MongoDB as a Windows service

mongod --install --serviceName MongoDB --serviceDisplayName MongoDB --logpath D:\MongoDB\log\MongoDB.log --dbpath D:\MongoDB\data --directoryperdb

After this command succeeds, a service named MongoDB appears in the Windows services list; just start it, which saves you from keeping a mongod console window open.

Fourth, the MongoDB development model and hands-on practice

1. Development model

For the MongoDB development model, we can structure the architecture in a way similar to a high-speed service framework (HSF), as shown in Figure 5. First, at the base component layer, the MongoDB driver is wrapped in the base class library Inspur.Finix.DAL. Then the domain layer uses a small three-tier architecture to call the data services of the base component layer, and the presentation layer calls the business layer as a service through Ajax + JSON + filters, returning JSON strings to implement the page functionality.

Figure 5: Development model

2. Hands-on development

There are many C# drivers; the most commonly used are the official driver and the Samus driver. Besides the usual document-style operations, the Samus driver also supports LINQ-style data manipulation.
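Once the server from the previous section is running, a quick way to confirm connectivity from C# is a minimal round trip with the Samus driver. This is only a sketch: it assumes the driver assembly (namespace MongoDB) is referenced, that mongod listens on the default localhost:27017, and that the "Server=host:port" connection-string format is accepted; the database and collection names are throwaway placeholders.

    using MongoDB; // Samus driver namespace (assumed reference to the driver assembly)

    class SmokeTest
    {
        static void Main()
        {
            // Connection-string format is an assumption; adjust to your driver configuration.
            using (var mongo = new Mongo("Server=127.0.0.1:27017"))
            {
                mongo.Connect();
                var db = mongo.GetDatabase("test");
                var col = db.GetCollection<Document>("smoke");

                col.Insert(new Document("ping", 1));             // write a throwaway document
                Document doc = col.FindOne(new Document("ping", 1)); // read it back
                System.Console.WriteLine(doc == null ? "not found" : "MongoDB is up");
            }
        }
    }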

(1) Base component layer: we wrap the Samus driver; the code is as follows:

    public class MongoDbAccess : IDisposable
    {
        /// <summary>
        /// Database alias
        /// </summary>
        private string _databaseAlias = "Noah.mongodb";

        /// <summary>
        /// Collection name
        /// </summary>
        public string _collectionName { get; set; }

        // The Mongo service
        private Mongo _mongo = null;

        // Database for the given name; created automatically if it does not exist
        private IMongoDatabase _mongoDatabase = null;

        public MongoCollection<Document> mongoCollection;

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="databaseAlias"></param>
        /// <param name="collectionName"></param>
        public MongoDbAccess(string databaseAlias, string collectionName)
        {
            _databaseAlias = databaseAlias;
            _collectionName = collectionName;
            Init();
        }

        /// <summary>
        /// Initialize
        /// </summary>
        private void Init()
        {
            DatabaseConfigManager dcm = DatabaseConfigManager.Create();
            // Get the connection string from the alias
            string connStr = dcm.GetPrimaryConnection(_databaseAlias);
            // Split the connection string
            StringTokenizer st = new StringTokenizer(connStr, ";");
            string conn = st.GetValueByIndex(0);
            // Create the Mongo service and connect
            _mongo = new Mongo(conn);
            _mongo.Connect();
            st = new StringTokenizer(st.GetValueByIndex(1), "=");
            string databaseName = st.GetValueByIndex(1);
            // Get the database for databaseName; it is created automatically if it does not exist
            if (string.IsNullOrEmpty(databaseName) == false)
                _mongoDatabase = _mongo.GetDatabase(databaseName);
            // Get the collection for _collectionName; it is created automatically if it does not exist
            mongoCollection = _mongoDatabase.GetCollection<Document>(_collectionName) as MongoCollection<Document>;
        }

        /// <summary>
        /// Switch to the specified database
        /// </summary>
        /// <param name="dbName"></param>
        /// <returns></returns>
        public IMongoDatabase UseDb(string dbName)
        {
            if (string.IsNullOrEmpty(dbName))
                throw new ArgumentNullException("dbName");
            _mongoDatabase = _mongo.GetDatabase(dbName);
            return _mongoDatabase;
        }

        /// <summary>
        /// Get the currently connected database
        /// </summary>
        public IMongoDatabase CurrentDb
        {
            get
            {
                if (_mongoDatabase == null)
                    throw new Exception("The current connection has not specified any database. Specify the database name in the constructor or call the UseDb() method to switch databases.");
                return _mongoDatabase;
            }
        }

        /// <summary>
        /// Get a collection of the currently connected database, by type
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <returns></returns>
        public IMongoCollection<T> GetCollection<T>() where T : class
        {
            return this.CurrentDb.GetCollection<T>();
        }

        /// <summary>
        /// Get a collection of the currently connected database, by name
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="name">Collection name</param>
        /// <returns></returns>
        public IMongoCollection<T> GetCollection<T>(string name) where T : class
        {
            return this.CurrentDb.GetCollection<T>(name);
        }

        /// <summary>
        /// Save an attachment with GridFS
        /// </summary>
        /// <param name="byteFile"></param>
        /// <returns></returns>
        public string GridFsSave(byte[] byteFile)
        {
            string filename = Guid.NewGuid().ToString();
            // The GridFile constructor has an overload whose bucket parameter
            // replaces the default "fs" prefix in the created collection names.
            GridFile gridFile = new GridFile(_mongoDatabase);
            using (GridFileStream gridFileStream = gridFile.Create(filename))
            {
                gridFileStream.Write(byteFile, 0, byteFile.Length);
            }
            return filename;
        }

        /// <summary>
        /// Read a GridFS attachment
        /// </summary>
        /// <param name="filename"></param>
        /// <returns></returns>
        public byte[] GridFsRead(string filename)
        {
            GridFile gridFile = new GridFile(_mongoDatabase);
            byte[] bytes;
            using (GridFileStream gridFileStream = gridFile.OpenRead(filename))
            {
                bytes = new byte[gridFileStream.Length];
                gridFileStream.Read(bytes, 0, bytes.Length);
            }
            return bytes;
        }

        /// <summary>
        /// Delete a GridFS attachment
        /// </summary>
        public void GridFsDelete(string filename)
        {
            GridFile gridFile = new GridFile(_mongoDatabase);
            gridFile.Delete(new Document("filename", filename));
        }

        /// <summary>
        /// Release resources
        /// </summary>
        public void Dispose()
        {
            if (_mongo != null)
            {
                _mongo.Dispose();
                _mongo = null;
            }
        }
    }
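A short usage sketch of this wrapper. It assumes the "Noah.mongodb" alias is configured for the project's DatabaseConfigManager; the collection name, field values, and file path are placeholders for illustration only.

    // Illustrative use of MongoDbAccess; names and paths are placeholders.
    using (var access = new MongoDbAccess("Noah.mongodb", "knowledge"))
    {
        // Document-style insert and query through the typed collection helper.
        var col = access.GetCollection<Document>("knowledge");
        col.Insert(new Document("Know_code", "K001"));
        Document found = col.FindOne(new Document("Know_code", "K001"));

        // Large attachments go through GridFS; only the generated file name needs to be kept elsewhere.
        byte[] bytes = System.IO.File.ReadAllBytes(@"D:\demo.pdf");
        string fileId = access.GridFsSave(bytes);
        byte[] back = access.GridFsRead(fileId);
        access.GridFsDelete(fileId);
    }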

(2) Domain layer code (partial)

    public class Knowledge_sockDAL
    {
        public Knowledge_sockDAL() { }

        /// <summary>
        /// Save an object
        /// </summary>
        /// <param name="model"></param>
        public void Add(KNOWLEDGE_SOCK model)
        {
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    mm.GetCollection<KNOWLEDGE_SOCK>().Insert(model);
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
        }

        /// <summary>
        /// Save an attachment
        /// </summary>
        /// <param name="file"></param>
        /// <returns></returns>
        public string SaveAttach(byte[] file)
        {
            string fileName = string.Empty;
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    fileName = mm.GridFsSave(file);
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
            return fileName;
        }

        /// <summary>
        /// Read an attachment
        /// </summary>
        /// <param name="fileName"></param>
        /// <returns></returns>
        public byte[] ReadAttach(string fileName)
        {
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    mm.GetCollection<KNOWLEDGE_SOCK>();
                    return mm.GridFsRead(fileName);
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
            return null;
        }

        /// <summary>
        /// Delete an attachment
        /// </summary>
        /// <param name="fileName"></param>
        public void DeleteAttach(string fileName)
        {
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    mm.GetCollection<KNOWLEDGE_SOCK>();
                    mm.GridFsDelete(fileName);
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
        }

        /// <summary>
        /// Update
        /// </summary>
        /// <param name="model"></param>
        public void Update(KNOWLEDGE_SOCK model)
        {
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    var query = new Document("Know_code", model.Know_code);
                    mm.GetCollection<KNOWLEDGE_SOCK>().Update(model, query);
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
        }

        /// <summary>
        /// Delete
        /// </summary>
        /// <param name="id"></param>
        public void Delete(string id)
        {
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    var query = new Document("Know_code", id);
                    mm.GetCollection<KNOWLEDGE_SOCK>().Remove(query);
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
        }

        /// <summary>
        /// Find a specific record
        /// </summary>
        /// <param name="id"></param>
        /// <returns></returns>
        public KNOWLEDGE_SOCK FindOne(string id)
        {
            KNOWLEDGE_SOCK catalog = new KNOWLEDGE_SOCK();
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    var query = new Document("Know_code", id);
                    catalog = mm.GetCollection<KNOWLEDGE_SOCK>().FindOne(query);
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
            return catalog;
        }

        /// <summary>
        /// Query by a JavaScript condition
        /// </summary>
        /// <param name="js"></param>
        /// <returns></returns>
        public List<KNOWLEDGE_SOCK> Find(string js)
        {
            List<KNOWLEDGE_SOCK> catalogs = new List<KNOWLEDGE_SOCK>();
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    string jsStr = @"function() { return " + js + "; }";
                    catalogs = mm.GetCollection<KNOWLEDGE_SOCK>().Find(Op.Where(jsStr)).Documents.ToList();
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
            return catalogs;
        }

        /// <summary>
        /// Query all
        /// </summary>
        /// <returns></returns>
        public List<KNOWLEDGE_SOCK> FindAll()
        {
            List<KNOWLEDGE_SOCK> catalogs = new List<KNOWLEDGE_SOCK>();
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    catalogs = mm.GetCollection<KNOWLEDGE_SOCK>().FindAll().Documents.ToList();
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
            return catalogs;
        }

        /// <summary>
        /// Count by a JavaScript condition
        /// </summary>
        /// <param name="js"></param>
        /// <returns></returns>
        public int GetCount(string js)
        {
            int count = 0;
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    string jsStr = @"function() { return " + js + "; }";
                    count = mm.GetCollection<KNOWLEDGE_SOCK>().Find(Op.Where(jsStr)).Documents.Count();
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
            return count;
        }

        /// <summary>
        /// Paged query by a JavaScript condition
        /// </summary>
        public List<KNOWLEDGE_SOCK> Find(string js, int pageSize, int pageIndex)
        {
            List<KNOWLEDGE_SOCK> list = new List<KNOWLEDGE_SOCK>();
            try
            {
                using (MongoDbAccess mm = new MongoDbAccess(CConfig.Noah_MongoDB, ""))
                {
                    string jsStr = @"function() { return " + js + "; }";
                    list = mm.GetCollection<KNOWLEDGE_SOCK>()
                             .Find(Op.Where(jsStr))
                             .Documents
                             .OrderBy(x => x.Know_createtime)
                             .Skip(pageSize * (pageIndex - 1))
                             .Take(pageSize)
                             .ToList();
                }
            }
            catch (Exception ex)
            {
                ExceptionManager.Handle(ex);
            }
            return list;
        }
    }
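Finally, a sketch of how a business-layer class might call the DAL above. It is only an illustration: the KNOWLEDGE_SOCK properties other than Know_code (such as Know_attach) and the JavaScript filter string are assumptions made for this example, not part of the article's model.

    using System.Collections.Generic;

    // Illustrative business-layer wrapper around Knowledge_sockDAL.
    public class KnowledgeService
    {
        private readonly Knowledge_sockDAL _dal = new Knowledge_sockDAL();

        public void Publish(KNOWLEDGE_SOCK model, byte[] attachment)
        {
            if (attachment != null && attachment.Length > 0)
            {
                // Store the binary attachment in GridFS and keep only its file name on the document.
                string fileId = _dal.SaveAttach(attachment);
                model.Know_attach = fileId;   // assumed property, not shown in the article
            }
            _dal.Add(model);
        }

        public KNOWLEDGE_SOCK GetByCode(string knowCode)
        {
            return _dal.FindOne(knowCode);
        }

        public List<KNOWLEDGE_SOCK> GetPage(int pageSize, int pageIndex)
        {
            // "this.Know_type == 'sock'" is a placeholder JavaScript condition for the Find(js, ...) overload.
            return _dal.Find("this.Know_type == 'sock'", pageSize, pageIndex);
        }
    }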

"MongoDB Tutorial 16th" Sharing no-sql development combat
