MongoDB Introductory article

Source: Internet
Author: User
Tags: bulk insert, emit, modifier, mongodb, client

0) mongo: connect to a MongoDB database via the client

1) show dbs --- list the databases that currently exist

2) use test --- create or switch to a database

3) db.dropDatabase() --- drop the current database

4) show collections: list the collections (tables) in the current database

5) exit: quit the MongoDB client

6) MongoDB installation:





I. Getting started with MongoDB: a basic introduction to CRUD

The advantages of MongoDB:

1) Built-in sharding: MongoDB provides a range-based auto-sharding mechanism. A collection can be split into several chunks by key range and distributed across different shards, so the cluster can be scaled up painlessly when the time comes.

2) Distributed support and excellent performance

In one real-world case with tens of millions of document objects and nearly 10 GB of data, queries on the indexed _id were no slower than MySQL, and queries on non-indexed fields won outright. Under large data volumes MySQL cannot really handle queries on arbitrary fields, while MongoDB's query performance is genuinely surprising. Write performance is also very satisfying: inserting millions of records takes roughly ten minutes.

MongoDB has three core concepts: the database, the collection, and the document. A collection corresponds to a "table" in a relational database, and a document corresponds to a "row".


Download

On the MongoDB official site there are 32-bit and 64-bit builds; which one you need depends on your system, but there are two points to note:

①: By convention, even minor version numbers are "stable" releases (for example 1.6.x, 1.8.x) and odd ones are "development" releases (for example 1.7.x, 1.9.x). The difference between the two should be familiar to everyone.

②: 32-bit MongoDB can only store about 2 GB of data; 64-bit has no such limit.


Start

Before starting, assign MongoDB a folder, named "db", to store its data.

Then run the mongod command, using --dbpath to point the data directory at that "db" folder.

Example: mongod --dbpath=e:\mongodb\db

Finally, the startup output tells us that MongoDB listens on port 27017, so we can just open "http://localhost:27017/" in a browser to confirm it started successfully.


Basic operations

Since this is an introduction to basic CRUD, open a cmd window and type the mongo command to start the shell. This shell is in fact MongoDB's client, and also a JavaScript interpreter; by default it connects to the "test" database.

1) Insert operation

db.person.insert({"name": "lxg", "age": 20})

With the database in place, the next step is the collection, here named "person". Note that the document is in JSON's extended form (BSON).

2) Find operation

db.person.find({"name": "lxg"});

After inserting data, we naturally want to find it again. One thing to note in the result:

"_id": this field is an ObjectId that the database assigns by default; its purpose is to guarantee that each document is unique.

3) Update operation

db.person.update({"name": "lxg"}, {"name": "wr", "age": 20})

The first parameter of the update method is the query condition and the second is the replacement value.

4) Remove operation

db.person.remove({"name": "lxg"})

Calling remove with no parameters deletes all documents in the collection. A very dangerous operation: in MongoDB it cannot be undone.

II. MongoDB part two: CRUD in detail


1) Insert operation

Insert operations come in two forms: "single insert" and "bulk insert".

Single insert

As mentioned earlier, the mongo command opens a JavaScript shell, so plain JS syntax works here. Pretty neat, right?

var single = {"name": "lxg"};

db.user.insert(single);


2) Find operation

In daily development, the queries we write most often fall into two categories:

①: >, >=, <, <=, !=, =

②: and, or, in, not in

These operations are encapsulated in MongoDB, as described below:

<1> "$gt", "$gte", "$lt", "$lte", "$ne", and plain equality (no special keyword) correspond one-to-one with the operators above, for example:

db.user.find({"age": {$lt: 22}})

<2> Plain equality (no keyword), "$or", "$in", "$nin" -- a few examples:

db.user.find({"name": "lxg", "age": 20})

db.user.find({$or: [{"name": "lxg"}, {"name": "wr"}]})

db.user.find({"name": {$in: ["lxg", "wr"]}})

db.user.find({"name": {$nin: ["lxg", "wr"]}})

<3> MongoDB also supports a special kind of match: the "regular expression", which is very powerful.

db.user.find({"name": /^j/})

<4> Sometimes a query is very complex and painful to express, but it doesn't matter: MongoDB gives us a big gun, $where, whose value is the JS we know and love, which smooths everything over.

db.user.find({$where: function () { return this.name == 'jack' }})
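To make the operator semantics above concrete, here is how the same predicates look as plain JavaScript filters. This is only a sketch of the logical meaning (the sample documents are invented); it is not MongoDB's actual matcher.

```javascript
// Conceptual sketch: MongoDB-style predicates as plain array filters.
const users = [
  { name: "lxg", age: 20 },
  { name: "wr", age: 25 },
  { name: "jack", age: 30 },
];

// {"age": {$lt: 22}}
const lt = users.filter((u) => u.age < 22);

// {"name": {$in: ["lxg", "wr"]}}
const inSet = users.filter((u) => ["lxg", "wr"].includes(u.name));

// {$or: [{"name": "lxg"}, {"name": "wr"}]}
const either = users.filter((u) => u.name === "lxg" || u.name === "wr");

console.log(lt.map((u) => u.name));     // [ 'lxg' ]
console.log(inSet.map((u) => u.name));  // [ 'lxg', 'wr' ]
console.log(either.map((u) => u.name)); // [ 'lxg', 'wr' ]
```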

3) Update operation: there are only two kinds of update, whole-document replacement and partial update.

The update used in the previous part was actually a whole-document replacement.

Partial update: sometimes we only need to update a single field rather than replace the whole document.

① The $inc modifier: $inc is short for increase.

db.user.update({"name": "jack"}, {$inc: {"age": 30}})

② The $set modifier

db.user.update({"name": "jack"}, {$set: {"age": 10}})

<3> The upsert operation

This is a word coined by MongoDB. Remember that the first parameter of the update method is the "query condition"? An upsert says: if nothing matches, insert a new document. The benefit is that we no longer have to check in application code whether to update or insert; we simply set the third parameter of update to true.

db.user.update({"name": "jack"}, {$inc: {"age": 1}}, true) -- inserts a record if the query finds nothing
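As a plain-JavaScript sketch of the upsert decision (the `upsert` helper here is invented for illustration and greatly simplified; for instance, the inserted document just merges the query with the $inc values):

```javascript
// Sketch of upsert semantics: update the first match, or insert when none exists.
function upsert(coll, query, inc) {
  const doc = coll.find((d) =>
    Object.entries(query).every(([k, v]) => d[k] === v)
  );
  if (doc) {
    // matched: apply the $inc-style modifier in place
    for (const [k, v] of Object.entries(inc)) doc[k] = (doc[k] || 0) + v;
  } else {
    // no match: insert a new document built from the query + modifier
    coll.push({ ...query, ...inc });
  }
}

const users = [{ name: "jack", age: 30 }];

upsert(users, { name: "jack" }, { age: 1 });
console.log(users[0].age); // 31 -- existing document updated

upsert(users, { name: "rose" }, { age: 1 });
console.log(users.length); // 2 -- no match, so a document was inserted
```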

<4> Batch Update

If a query matches more than one document, MongoDB by default updates only the first one. If we need to update all matches, that is also very simple: set the fourth parameter of update to true. No example needed.


III. MongoDB part three: advanced operations


Aggregation: the common aggregation operations are similar to Oracle's: count, distinct, group, mapReduce.

<1> count: used just like in SQL

db.person.count()

db.person.count({"age": 20})

<2> distinct: returns the distinct values of the given field

db.person.distinct("age")

Result: [20, 22, 26]
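Conceptually, distinct just collects each value of the field once, in order of first appearance; in plain JavaScript:

```javascript
// distinct("age") conceptually: the unique values of a field.
const ages = [20, 22, 20, 26, 22];
const distinct = [...new Set(ages)];
console.log(distinct); // [ 20, 22, 26 ]
```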

<3> Group

The group operation in MongoDB is a little involved, but anyone familiar with Oracle's GROUP BY can follow it. A group operation essentially builds a key-value model, like the Dictionary in C#. With that in mind, let's look at how to use group.

The example below groups by age, with the value being the names at each age. First, the parameters:

key: the grouping key; here we group by age.

initial: each group shares one copy of this initial value. Pay special attention: each group gets its own copy, e.g. the age=20 group shares one initial {"person": []}, and the age=22 group shares another.

$reduce: the first parameter of this function is the current document, and the second is the accumulator from the previous call; on the first call it is the initial value {"person": []}. $reduce is called once per document.

db.person.group({
    "key": {"age": true},
    "initial": {"person": []},
    "reduce": function (cur, prev) {
        prev.person.push(cur.name);
    }
})

Results:

[
    {"age": 20,
     "person": ["lxg", "wr", "lisi"]
    },
    {"age": 22,
     "person": ["jialiu", "zs"]
    }
]

Looking at the results above, you get a feel for it: we can see each person's name grouped by age. But sometimes we may have requirements like these:

①: filter out people with age > 25.

②: sometimes the person array has too many entries, and we want a count property to summarize it.

Both are easy with group, because group takes two more optional parameters: condition and finalize.

condition: the filter condition.

finalize: a function triggered once per group after that group's documents have been processed; adding a count to each group is a natural fit.

db.person.group({
    "key": {"age": true},
    "initial": {"person": []},
    "reduce": function (doc, out) {
        out.person.push(doc.name);
    },
    "finalize": function (out) {
        out.count = out.person.length;
    },
    "condition": {"age": {$lt: 25}}
})
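To make the key/initial/reduce/condition/finalize flow concrete outside the shell, here is a plain-JavaScript sketch of the group semantics. The `group` helper and the sample people are invented for illustration; this is not MongoDB's implementation.

```javascript
// Sketch of db.collection.group({key, initial, reduce, condition, finalize}).
function group(docs, { key, initial, reduce, condition, finalize }) {
  const groups = new Map();
  for (const doc of docs) {
    if (condition && !condition(doc)) continue;        // "condition": pre-filter
    const k = JSON.stringify(key.map((f) => doc[f]));  // the group key
    if (!groups.has(k)) {
      // each group gets its own copy of the initial value, plus the key fields
      const g = Object.fromEntries(key.map((f) => [f, doc[f]]));
      Object.assign(g, JSON.parse(JSON.stringify(initial)));
      groups.set(k, g);
    }
    reduce(doc, groups.get(k));                        // called once per document
  }
  const out = [...groups.values()];
  if (finalize) out.forEach(finalize);                 // called once per group
  return out;
}

const people = [
  { name: "lxg", age: 20 }, { name: "wr", age: 20 }, { name: "lisi", age: 20 },
  { name: "jialiu", age: 22 }, { name: "zs", age: 22 }, { name: "old", age: 26 },
];

const result = group(people, {
  key: ["age"],
  initial: { person: [] },
  reduce: (doc, out) => out.person.push(doc.name),
  condition: (doc) => doc.age < 25,                    // filters out age=26
  finalize: (out) => { out.count = out.person.length; },
});
console.log(result);
// [ { age: 20, person: ['lxg','wr','lisi'], count: 3 },
//   { age: 22, person: ['jialiu','zs'], count: 2 } ]
```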

<4> mapReduce

This is the most complex of the aggregation functions, but the more complex, the more flexible.

MapReduce is actually a programming model used in distributed computing. There is a "map" function and a "reduce" function.

① map: the mapping function. It calls emit(key, value), and the collection is grouped by the key you specify.

② reduce: the simplification function, which condenses each group produced by map. Note: the key in reduce(key, values) is the key from emit, and values is the array of values emitted under that key, e.g. an array of {"count": 1} objects.

③ mapReduce: the top-level function, whose parameters are map, reduce, and some optional parameters.
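To see how emit(key, value) grouping feeds into reduce, here is a toy plain-JavaScript harness. The `mapReduce` helper and sample documents are invented for illustration; a real mapReduce runs inside the server, where emit is provided implicitly rather than passed as an argument.

```javascript
// Toy map-reduce harness: emit(key, value) groups values by key, then reduce
// condenses each group. Illustrative only.
function mapReduce(docs, mapFn, reduceFn) {
  const buckets = new Map();
  const emit = (key, value) => {
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(value);                 // values grouped by key
  };
  for (const doc of docs) mapFn.call(doc, emit);  // map phase: this = current doc
  const out = {};
  for (const [key, values] of buckets) {
    out[key] = reduceFn(key, values);             // reduce phase, once per key
  }
  return out;
}

const people = [
  { name: "lxg", country: "china" },
  { name: "jack", country: "usa" },
  { name: "wr", country: "china" },
];

// Count documents per country: each map call emits {count: 1} under its key.
const counts = mapReduce(
  people,
  function (emit) { emit(this.country, { count: 1 }); },
  (key, values) => ({ count: values.reduce((s, v) => s + v.count, 0) })
);
console.log(counts); // { china: { count: 2 }, usa: { count: 1 } }
```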

Cursor

The cursor in MongoDB is somewhat like what we call deferred execution in C#. For example:

var list = db.person.find();

This does not actually fetch the documents in person; it only declares a "query construct". Only when we need the data and iterate it (forEach or next()) is the query evaluated, and the cursor then reads the results one by one. Once the enumeration finishes, the cursor is exhausted, and iterating it again returns no data.

var list = db.person.find();

list.forEach(function (x) {
    print(x.name);
})

-- after the code above executes, iterating list again finds no data left in it.

Of course, the query construct can be more elaborate; paging and sorting can be chained in:

var single = db.person.find().sort({"name": 1}).skip(2).limit(2);

The query construct runs only when we actually need it to, which avoids a lot of unnecessary overhead.
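The deferred-execution behaviour described above can be mimicked with a JavaScript generator: nothing is evaluated until iteration starts, and after one full pass the cursor is spent. A sketch (the `findCursor` helper is invented for illustration):

```javascript
// Sketch of a lazily-evaluated cursor: the "query" runs only when iterated,
// and the cursor is exhausted after one pass, like a MongoDB cursor.
function* findCursor(docs, predicate) {
  for (const doc of docs) {
    if (predicate(doc)) yield doc; // each next() pulls one matching document
  }
}

const people = [{ name: "lxg" }, { name: "wr" }, { name: "jack" }];
const cursor = findCursor(people, () => true); // nothing evaluated yet

const names = [];
for (const doc of cursor) names.push(doc.name); // iteration drives evaluation
console.log(names); // [ 'lxg', 'wr', 'jack' ]

const rest = [...cursor]; // a second pass finds the cursor already exhausted
console.log(rest.length); // 0
```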


IV. MongoDB part four: basic index operations


In daily development we cannot avoid optimizing program performance, and what a program does is essentially CRUD. We usually spend 50% of the effort on the R, because read operations are what users feel most directly; a product that reads slowly gets spurned. Among the classic lookup algorithms is today's topic, the "index lookup". If you know Oracle at all, you know what improvement index lookups can bring us. Let's first insert 100k records, then talk:

db.person.remove()

for (var i = 0; i < 100000; i++) {
    var rand = parseInt(i * Math.random());
    db.person.insert({"name": "lxg" + i, "age": i});
}

(1) The performance analysis function (explain)

The data has been inserted successfully. Since we want to do analysis, we need a tool for it, and fortunately MongoDB provides one: the explain keyword. How do we use it? Note that the name field has no index yet, so let's look up the name "lxg10000":

db.person.find({"name": "lxg" + 10000}).explain()

Result:

{
    "cursor": "BasicCursor",
    "nscanned": 100000,
    "nscannedObjects": 100000,
    "n": 1,
    "millis": 114
}

Look closely at the result; there are several keys we care about:

cursor: "BasicCursor" here means the lookup used a "table scan", i.e. a sequential search. Very sad.

nscanned: 100000 here, meaning the database examined 100k documents. That's terrible; it cannot be tolerated.

n: 1 here, meaning 1 document was ultimately returned.

millis: the thing we care about most: the query took 114 milliseconds in total.

(2) Building an index (ensureIndex)

Taking 114 milliseconds to find one document in a collection of 100k is hard to accept. So how do we optimize? MongoDB gives us index lookups; let's see whether they make our query fly.

db.person.ensureIndex({"name": 1})

db.person.find({"name": "lxg" + 10000}).explain()

Result:

{
    "cursor": "BtreeCursor name_1",
    "nscanned": 1,
    "nscannedObjects": 1,
    "n": 1,
    "millis": 1,
    "nYields": 0
}

Here we used ensureIndex to build an index on name.

"1" means ascending by name; "-1" means descending by name.

Now look at these numbers. My God:

cursor: "BtreeCursor" here. Nice: MongoDB stores indexes in a B-tree structure, and the index is named "name_1".

nscanned: the database examined only one document.

n: positioned and returned directly.

millis: look at that time; hard to believe. Killed in an instant.
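The drop from nscanned: 100000 to nscanned: 1 is the difference between a sequential scan and a keyed lookup. A rough plain-JavaScript analogy, where a Map stands in for the B-tree (the counters here are illustrative, not MongoDB internals):

```javascript
// Rough analogy: table scan vs. index lookup.
const docs = [];
for (let i = 0; i < 100000; i++) docs.push({ name: "lxg" + i, age: i });

// Table scan: examines documents in order until a match (worst case: all of them).
let scanned = 0;
let hit = null;
for (const d of docs) {
  scanned++;
  if (d.name === "lxg99999") { hit = d; break; }
}
console.log(scanned); // 100000 -- like "nscanned": 100000

// "Index": a Map keyed on name gives one direct lookup instead of a scan.
const index = new Map(docs.map((d) => [d.name, d]));
console.log(index.get("lxg99999").age); // 99999 -- like "nscanned": 1
```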

(3) Unique indexes

As in SQL Server, a unique index can be created, after which duplicate key values naturally cannot be inserted. In MongoDB:

db.person.ensureIndex({"name": 1}, {"unique": true})

(4) Compound indexes

Sometimes our queries are not single-condition but multi-condition, such as finding the classmate born on 1989-3-2 and named 'jack'. We can build a compound index on "name" and "birthday" to speed up the query:

db.person.ensureIndex({"name": 1, "birthday": 1})

db.person.ensureIndex({"birthday": 1, "name": 1})

Notice that putting name first differs from putting birthday first: a different field order, or a different ascending/descending order, produces a different index. We can use getIndexes to see exactly which indexes exist on the person collection:

db.person.getIndexes()

At this point we must be curious: which of these indexes will the query optimizer actually use? Let's look:

db.person.find({"birthday": "1989-3-2", "name": "jack"}).explain()

Having seen the result, we should trust the query optimizer: the choice it makes for us is usually optimal, because when we run a query, the optimizer evaluates the indexes we have built and picks a plan.

MongoDB's hint

When one plan finishes executing, the other candidate plans are shut down, and the winning plan is cached by MongoDB. Of course, if you want to force your own plan, that is also possible: MongoDB provides the hint method to let us override the optimizer:

db.person.find({"birthday": "1989-3-2", "name": "jack"})
    .hint({"birthday": 1, "name": 1}).explain()

Deleting indexes

As business requirements change, an old index may no longer be necessary, and since indexes must be maintained in real time, they slow down the C, U, and D of CRUD, so this needs to be considered. To delete a named index, use dropIndex:

db.person.dropIndex("name_1")

db.person.dropIndex("name_1_birthday_1")

db.person.dropIndex("birthday_1_name_1")

db.person.getIndexes()

V. MongoDB part five: basic Java operations

Required jars:

org.mongodb.mongo-java-driver

com.google.code.gson.Gson

A simple insert helper (note: in the legacy Java driver, insert takes a DBObject):

protected void _insert(DBObject insertObject) {
    DBCollection collection = getCollection();
    if (collection == null) {
        return;
    }
    collection.insert(insertObject);
}

Create SimpleTest.java to run a few simple MongoDB database operations.

Mongo mongo = new Mongo();

This creates a MongoDB connection object, which by default connects to localhost on port 27017.

DB db = mongo.getDB("test");

This gets the test database. Even if this database has not been created, MongoDB still works: when data is first added and the database does not exist, MongoDB creates it automatically. With the db in hand, the next step is to get a collection (DBCollection) via the db object's getCollection method:

DBCollection users = db.getCollection("users");

This gives us a DBCollection, which is the equivalent of a database "table".

Querying all data:

DBCursor cur = users.find();
while (cur.hasNext()) {
    System.out.println(cur.next());
}


VI. Related documents

http://blog.csdn.net/liuzhoulong/article/category/774845

http://book.51cto.com/art/201211/363567.htm

http://www.cnblogs.com/hoojo/archive/2011/06/02/2068665.html -- Java operations


VII. The latest version: MongoDB 3.0

• TJ, MongoDB solutions architect

"Mongo" comes from humongous: huge, big data.

What is MongoDB? An open-source database, positioned as a general-purpose OLTP database, with a flexible document model.

MongoDB is headquartered in New York, USA.

Biggest features:

The document model {JSON}: flexible, natural documents

Client --- server --- database: the data passed between all three layers is the JSON document model

Both maintenance and development are very convenient

• Highly available replica set clusters (ReplicaSet)

When MongoDB is deployed, a cluster of at least one primary and two secondaries is the required configuration.

Specific usage scenarios:

1) Data durability

2) Read/write separation

3) Disaster recovery --- e.g. the financial industry

• Horizontally scaled shard clusters: data is divided into multiple shards, each of which is an independent database

Specific usage scenarios:

• Performance scaling: a new generation of big-data applications must handle high concurrency

• Geographic distribution

• Fast recovery: for example, when a file is very large, copying it as a single file is quite slow; if it is split into several shards stored in MongoDB, recovery can proceed in parallel across multiple threads

• MongoDB 3.0 characteristics

• Write performance up 7x-10x

• Compression saves 30-80% of storage

• Operations tooling: Ops Manager

• 3.0 introduces the WiredTiger engine, with lock-free concurrency control

• Up to 2.6 there was only one storage engine; afterwards multiple engines are available --- different engines have different strengths (some favor reads, some writes, some caching), so choose according to your needs

• Performance metrics: concurrency and the corresponding latency

• High-concurrency writes: a major feature of 3.0. A typical scenario is Internet-of-Things applications: for example, insurers abroad provide a client device that plugs into your car and feeds your real-time driving data back to the insurance company, which analyzes your behavior; install it and you get a 30% discount. So abroad, the premium for the same car can differ greatly. The data is fed back to a big-data center in real time.

• MongoDB's compression

• Default: snappy compression --- the compression ratio is not high, but it is fast

• zlib: higher compression ratio, more time

Choose according to your needs.

• What scenarios suit MongoDB:

Heterogeneous data, semi-structured/unstructured data, massive data, high concurrency, weak transactions.

1) Heterogeneous: for example, when several systems are merged and each DB's model is different, how can they be merged? Use the document model.
