Conclusion:
1. With 2M (200w) records and sensible index use, a single stationId covers under 40k records. MongoDB query and sort performance is good: without a regex condition the client completes a query in 600ms+ at 300+ qps; with a regex condition, in 1300ms+ at 140+ qps.
2. MongoDB count() performance is relatively poor: without concurrency the client completes a count in about 330ms; under concurrency it takes 1-3s. Consider estimating the total count instead; see http://blog.sina.com.cn/s/blog_56545fd30101442b.html
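One common workaround for slow counts, sketched below, is to serve an approximate total from a periodically refreshed cache instead of running count() on every request. This is an illustrative sketch, not code from the original benchmark; the class name and refresh policy are assumptions, and the LongSupplier stands in for an expensive call such as collection.count(query):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

// Serves a cached (approximate) total, refreshing it at most once per
// refreshIntervalMillis instead of hitting the database on every call.
public class CachedCount {
    private final LongSupplier expensiveCount; // e.g. () -> collection.count(query)
    private final long refreshIntervalMillis;
    private final AtomicLong lastRefresh = new AtomicLong(0);
    private volatile long cached = -1;

    public CachedCount(LongSupplier expensiveCount, long refreshIntervalMillis) {
        this.expensiveCount = expensiveCount;
        this.refreshIntervalMillis = refreshIntervalMillis;
    }

    public long get() {
        long now = System.currentTimeMillis();
        long last = lastRefresh.get();
        // Refresh only when the cached value is missing or stale,
        // and only in the thread that wins the CAS race.
        if (cached < 0 || now - last >= refreshIntervalMillis) {
            if (lastRefresh.compareAndSet(last, now)) {
                cached = expensiveCount.getAsLong();
            }
        }
        return cached;
    }
}
```

The returned total can lag behind the real count by up to one refresh interval, which is usually acceptable for pagination headers and dashboards.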
Test environment: MongoDB replica set, 1 primary and 2 secondaries, 96GB RAM, version 2.6.5.
Memory consumption (4 collections of 2M records each):
Disk space consumption (the collection finally selected for testing):
JVM: -Xms2g -Xmx2g
Ping latency: 33ms
All queries use ReadPreference.secondaryPreferred().
Without regex:
1. Query scenario with a compound index on (stationId, firmId) (2M-record collection, 12 fields)
Number of queries: 20000
Query conditions: multi-condition query for 10 records, fetching each record

String key = "Spring" + r.nextInt(1000);
Pattern pattern = Pattern.compile(key); // unused in the non-regex scenarios
BasicDBObject queryObject = new BasicDBObject("stationId",
        new BasicDBObject("$in", new Integer[]{20}))
        .append("firmId", new BasicDBObject("$gt", 5000))
        .append("dealCount", new BasicDBObject("$gt", r.nextInt(1000000)));
DBCursor cursor = collection.find(queryObject).limit(10).skip(2);

Concurrency: 200
Total time: 61566ms
Time per query (server): 124ms
qps: 324.85
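For reference, the qps figure follows directly from the totals above, and the per-client wall-clock latency implied by 200 concurrent clients is consistent with the "600ms+" claim in the conclusion:

```java
public class QpsCalc {
    public static void main(String[] args) {
        long queries = 20000;     // queries issued in scenario 1
        long totalMillis = 61566; // total elapsed wall-clock time, ms
        int concurrency = 200;    // concurrent client threads

        // qps = total queries / total elapsed seconds
        double qps = queries * 1000.0 / totalMillis;
        // each client issues queries/concurrency queries sequentially
        double perQueryClientMs = (double) totalMillis * concurrency / queries;

        System.out.println("qps: " + qps);                 // ≈ 324.85
        System.out.println("client ms: " + perQueryClientMs); // ≈ 615.66
    }
}
```

So the 124ms server-side figure measures processing only; queueing across 200 clients brings the observed client latency to roughly 616ms per query.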
2. Query scenario with a compound index on (stationId, firmId) (2M-record collection, 12 fields)
Number of queries: 20000
Query conditions: multi-condition query for 10 records, sorted, fetching the records

String key = "Spring" + r.nextInt(1000);
Pattern pattern = Pattern.compile(key); // unused in the non-regex scenarios
BasicDBObject queryObject = new BasicDBObject("stationId",
        new BasicDBObject("$in", new Integer[]{4})) // array contents partially garbled in the source
        .append("firmId", new BasicDBObject("$gt", 5000))
        .append("dealCount", new BasicDBObject("$gt", r.nextInt(1000000)));
DBCursor cursor = collection.find(queryObject)
        .sort(new BasicDBObject("firmId", 1)).limit(10).skip(2);

Concurrency: 200
Total time: 63187ms
Time per query (server): 119ms
qps: 316.52
3. Query scenario with a compound index on (stationId, firmId) (2M-record collection, 12 fields)
Number of queries: 2000
Query conditions: multi-condition count of matching records

String key = "Spring" + r.nextInt(1000);
Pattern pattern = Pattern.compile(key); // unused in the non-regex scenarios
BasicDBObject queryObject = new BasicDBObject("stationId",
        new BasicDBObject("$in", new Integer[]{4})) // array contents partially garbled in the source
        .append("firmId", new BasicDBObject("$gt", 5000))
        .append("dealCount", new BasicDBObject("$gt", r.nextInt(1000000)));
long count = collection.count(queryObject);

Concurrency: 200
Total time: 21887ms
Time per query (client): 280ms
qps: 91.38
With regex:
4. Query scenario with a compound index on (stationId, firmId) (2M-record collection, 12 fields)
Number of queries: 20000
Query conditions: multi-condition query for 10 records, fetching each record

String key = "Spring" + r.nextInt(1000);
Pattern pattern = Pattern.compile(key);
BasicDBObject queryObject = new BasicDBObject("stationId",
        new BasicDBObject("$in", new Integer[]{20}))
        .append("firmId", new BasicDBObject("$gt", 5000))
        .append("dealCount", new BasicDBObject("$gt", r.nextInt(1000000)))
        .append("firmName", pattern);
DBCursor cursor = collection.find(queryObject).limit(10).skip(2);

Concurrency: 200
Total time: 137673ms
Time per query (server): 225ms
qps: 145.27
5. Query scenario with a compound index on (stationId, firmId) (2M-record collection, 12 fields)
Number of queries: 20000
Query conditions: multi-condition query for 10 records, sorted, fetching the records

String key = "Spring" + r.nextInt(1000);
Pattern pattern = Pattern.compile(key);
BasicDBObject queryObject = new BasicDBObject("stationId",
        new BasicDBObject("$in", new Integer[]{4})) // array contents partially garbled in the source
        .append("firmId", new BasicDBObject("$gt", 5000))
        .append("dealCount", new BasicDBObject("$gt", r.nextInt(1000000)))
        .append("firmName", pattern);
DBCursor cursor = collection.find(queryObject)
        .sort(new BasicDBObject("firmId", 1)).limit(10).skip(2);

Concurrency: 200
Total time: 138673ms
Time per query (server): 230ms
qps: 144.22
6. Query scenario with a compound index on (stationId, firmId) (2M-record collection, 12 fields)
Number of queries: 2000
Query conditions: multi-condition count of matching records

String key = "Spring" + r.nextInt(1000);
Pattern pattern = Pattern.compile(key);
BasicDBObject queryObject = new BasicDBObject("stationId",
        new BasicDBObject("$in", new Integer[]{4})) // array contents partially garbled in the source
        .append("firmId", new BasicDBObject("$gt", 5000))
        .append("dealCount", new BasicDBObject("$gt", r.nextInt(1000000)))
        .append("firmName", pattern);
long count = collection.count(queryObject);

Concurrency: 200
Total time: 23155ms
Time per query (client): 330ms
qps: 86.37
MongoDB index characteristics:
1. A compound index is only used if the query includes its first field; the remaining fields can match in any order.
2. The more fields a compound index has, the more space it takes, but the impact on query performance is small (array indexes excepted).
3. The index chosen follows the sort field, which takes priority over non-leading fields of a compound index.
4. When a compound index is hit and the result set is under 100k (10w) records, filtering on non-indexed fields is still reasonably efficient.
5. Full-text (regex) search performance is poor: with 2M records and 500k matches, the search takes 10+ seconds, while the requirement is 1s.
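The prefix rule in point 1 can be illustrated with a toy model: a compound index behaves like a map ordered by (stationId, firmId), so a query that fixes stationId narrows the lookup to a contiguous range, while a query on firmId alone must examine every entry. This is an illustrative sketch using an ordered map, not driver code:

```java
import java.util.TreeMap;

public class PrefixRuleDemo {
    public static void main(String[] args) {
        // Model a compound index on (stationId, firmId) as an ordered map;
        // keys are zero-padded so lexicographic order matches numeric order.
        TreeMap<String, String> index = new TreeMap<>();
        for (int station = 1; station <= 3; station++)
            for (int firm = 1; firm <= 100; firm++)
                index.put(String.format("%05d:%05d", station, firm), "doc");

        // stationId = 2 (leading field bound): a contiguous range scan.
        int rangeScanned = index.subMap("00002:00000", "00002:99999").size();

        // firmId = 50 only (leading field unbound): every entry is examined.
        int fullScanned = 0;
        for (String key : index.keySet()) {
            fullScanned++;
        }

        System.out.println("range scan examined: " + rangeScanned); // 100
        System.out.println("full scan examined: " + fullScanned);   // 300
    }
}
```

The same geometry explains why the non-leading fields can appear in any order: once the leading field bounds the range, the rest only filter within it.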
MongoDB client configuration (legacy 2.x driver API); consider wiring it via Spring injection, setting the maximum connection count, and so on:

MongoClientOptions options = MongoClientOptions.builder()
        .maxWaitTime(1000 * 2)
        .connectionsPerHost(500)
        .build();
MongoClient mongoClient = new MongoClient(Arrays.asList(
        new ServerAddress("10.205.68.57", 8700),
        new ServerAddress("10.205.68.15", 8700),
        new ServerAddress("10.205.69.13", 8700)), options);
mongoClient.setReadPreference(ReadPreference.secondaryPreferred());
MongoDB Research_Conclusion.docx contains the test data for the final scenario, split into regex and non-regex cases.
MongoDB Research_Remote.docx contains data gathered during testing; caching and other effects may make it inaccurate, so treat it as a working reference only.
Finally, some MongoDB query optimization principles:
1. Creating indexes on the fields used in query conditions, sort criteria, and counts can significantly improve query efficiency.
2. With $or, put the condition matching the most results first; with $and, put the condition matching the fewest results first.
3. Use limit() to cap the result set size, reducing database server resource consumption and the amount of data transferred over the network.
4. Use $in sparingly; break it into individual queries where possible. On a sharded cluster especially, $in sends the query to every shard; if you must use it, index the field on each shard first.
5. Avoid fuzzy (regex) matching where possible; replace it with exact-match operators such as $in and $nin.
6. For high query volume under high concurrency, add a cache in front of the database.
7. Use non-safe (unacknowledged) writes where the operation allows it, so the client does not wait for the database to return results and handle exceptions; this is about an order of magnitude faster.
8. MongoDB's query optimizer weighs the selectivity of query conditions, but skip and limit are not part of that judgment. When paging through the last few pages, reverse the sort order first.
9. Minimize cross-shard queries, and keep the number of balancing rounds low.
10. Query only the fields you need rather than all fields.
11. When updating a field's value, using $inc is more efficient than rewriting the document with update.
12. Capped collections are more efficient for reads and writes than ordinary collections.
13. Server-side processing, similar to stored procedures in SQL, can reduce network traffic overhead.
14. Use hint() to force a specific index when necessary.
15. If you have your own primary-key column, use it as the _id; this saves space and avoids creating an extra index.
16. Use explain() and optimize according to the explain plan.
17. For range queries, use $in/$nin instead where possible.
18. Review the database's query log and analyze inefficient operations individually.
19. MongoDB has a database profiling tool, the Profiler, which records the performance of database operations; use it to find inefficient queries or writes and optimize them.
20. Push as much work as possible to the client; this is part of MongoDB's design philosophy.
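Tip 8 (reverse the sort when paging near the end) is plain arithmetic. The sketch below, with an assumed helper name and a simple "second half of the data" heuristic, computes the skip/limit to use and whether to flip the sort; the caller then reverses the returned batch in memory:

```java
public class DeepPageDemo {
    // Returns {skip, limit, reversed(0/1)} for fetching 0-based page `page`
    // of `total` records at `pageSize` records per page.
    static long[] plan(long total, long pageSize, long page) {
        long skip = page * pageSize;
        if (skip + pageSize <= total / 2) {
            return new long[]{skip, pageSize, 0}; // normal sort order
        }
        // Query with the sort reversed: skip counts from the end instead,
        // so a "last page" query skips almost nothing.
        long fromEnd = Math.max(0, total - skip - pageSize);
        long limit = Math.min(pageSize, total - skip);
        return new long[]{fromEnd, limit, 1};
    }

    public static void main(String[] args) {
        // Last page of 2,000,000 records at 10/page: a normal query would
        // skip 1,999,990 documents; reversed, it skips 0 and reads 10.
        long[] p = plan(2_000_000, 10, 199_999);
        System.out.println(p[0] + " " + p[1] + " reversed=" + p[2]); // 0 10 reversed=1
    }
}
```

Because skip(n) still walks n index entries server-side, cutting the skip from ~2M to ~0 is what makes last-page queries cheap.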