Hibernate Optimization Strategies

Source: Internet
Author: User

Many people assume that Hibernate is inherently inefficient. It is true that, because Hibernate has to translate operations into SQL statements at runtime, its raw efficiency is lower than direct JDBC access. With proper performance tuning, however, Hibernate can still deliver performance that is quite satisfactory; in particular, once the application's second-level cache is in use, it can even outperform JDBC code that does not use caching.

This section describes some common Hibernate optimization strategies:

1. Fetching Optimization

Fetching refers to the strategy Hibernate uses to obtain associated objects when navigating an association. It covers two aspects: how to fetch and when to fetch.

1) How to fetch

Hibernate 3 has two main fetching methods, each of which applies both to single-valued (many-to-one and one-to-one) associations and to association collections (Set, Map, etc.), giving four variants in total:

Join fetching: the associated instance or collection is retrieved in the same SELECT statement by using an outer join.

Select fetching: an additional SELECT statement is issued to fetch the associated entities or collections of the current object.

In my development experience, the performance gain available here is relatively limited and not worth too much attention.

Example:
A. Applied to single-valued associations (default: false)
<many-to-one name="..." outer-join="true/false/auto" .../>
B. Applied to association collections (default: auto)
<set name="..." fetch="join/select" ...>
    ....
</set>
2) When to fetch

This mainly covers lazy loading and immediate fetching. By default, Hibernate 3 uses lazy loading for entity associations and immediate loading for ordinary properties. With lazy loading and an appropriate fetch granularity, fetching performance can be several times better than without these optimizations.

Immediate fetching: when the host object is fetched, its associated objects, collections, and properties are fetched at the same time.

Lazy loading: when the host object is fetched, its associated objects are not fetched; they are loaded only when they are actually accessed.

Example:
A. Applied to single-valued associations (lazy by default)
<many-to-one name="..." lazy="true/false" .../>
B. Applied to association collections (lazy by default)
<set name="..." lazy="true/false" ...>
    ....
</set>
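
As a minimal sketch of the difference, assuming the User/Group mapping used later in the fetch-granularity example, with the association left lazy (the getter names and Long ID are assumptions for illustration):

User u = (User) session.get(User.class, 1L);   // one SELECT, for the user only
Group g = u.getG();                            // still no SQL; g is an uninitialized proxy
String name = g.getName();                     // the first real access triggers the SELECT for the group
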
When using lazy loading, note that a lazily loaded object must be accessed before the Session is closed. A LazyInitializationException is usually caused by touching a lazily loaded object outside the Session's lifecycle. In web development you can use the OpenSessionInView pattern: a Session is opened when a request starts and is closed only when the response to that request ends. Be aware, however, that with OpenSessionInView, if the response takes a long time (complex business logic or a slow client network), the Session's resources (that is, database connections) are held for too long and may become exhausted.
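
A minimal sketch of the OpenSessionInView pattern as a servlet filter; HibernateUtil and its bindSession/unbindSession helpers are assumed placeholders for however your project obtains and exposes the SessionFactory:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.hibernate.Session;
import org.hibernate.Transaction;

public class OpenSessionInViewFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Open the Session when the request starts and keep it open for the whole
        // request, so lazy loading still works while the view is rendered.
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = session.beginTransaction();
        try {
            HibernateUtil.bindSession(session);   // assumed helper, e.g. backed by a ThreadLocal
            chain.doFilter(request, response);
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            HibernateUtil.unbindSession();        // assumed helper
            session.close();                      // close only when the response ends
        }
    }

    public void init(FilterConfig config) {}
    public void destroy() {}
}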


3) Fetch granularity

Fetch granularity refers to how many associated objects are preloaded at a time when an association is navigated. A common reason Hibernate applications perform poorly is that the fetch granularity has not been considered carefully: loading a list and then navigating an association on each object in the list causes the classic N+1 SQL query problem.
Example:
A. Applied to single-valued associations (default: 1). Note that for a single-valued association the setting goes on the associated class. For example, given
class User
{
    Group g;
}
the fetch granularity should be configured in the Group mapping file, as follows:
<class name="Group" table="group" batch-size="...">
    ...
</class>
There is no universally good value; it depends on the situation. If the associated table holds relatively little data, set it smaller, say 3-20; if it is relatively large, 30-50 is reasonable. Note that bigger is not always better: once the value exceeds about 50 the performance gain is marginal, while memory is simply wasted.

For example:
List<User> users = query.list();
If the list contains 20 users and you traverse all 20 users together with their groups, then without setting batch-size (that is, with batch-size="1"), up to 1 + 20 SQL statements are needed in the worst case. With batch-size="10", only 1 + 2 SQL statements are needed.
B. Applied to association collections (default: 1)
<set name="..." batch-size="..." ...>
    ....
</set>
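
A sketch of the traversal described above, assuming a User entity with a many-to-one association g to Group as in the earlier example (the getter names are assumptions for illustration):

List<User> users = session.createQuery("from User").list();   // 1 SELECT for the users
for (User u : users) {
    // The first access to an unloaded group triggers another SELECT; with
    // batch-size="10" on Group, Hibernate fetches the groups 10 at a time.
    System.out.println(u.getG().getName());
}
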
2. Second-Level Cache

Hibernate caches data at two levels. The first-level cache lives at the Session level; it is mainly an object cache that stores each object keyed by its ID, and it exists for the lifetime of the Session. The second-level cache lives at the SessionFactory level and consists of an object cache and a query cache; the query cache stores query results keyed by the query and its conditions, and it exists for the lifetime of the SessionFactory. By default Hibernate enables only the first-level cache. Using the second-level cache correctly often yields unexpectedly good performance.
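
Before the second-level cache can be used, a cache provider must be enabled. A minimal sketch of doing this programmatically; EhCacheProvider is the provider class bundled with Hibernate 3, but check your version's documentation for the exact class name and properties:

Configuration cfg = new Configuration().configure();   // reads hibernate.cfg.xml from the classpath
cfg.setProperty("hibernate.cache.provider_class", "org.hibernate.cache.EhCacheProvider");
cfg.setProperty("hibernate.cache.use_second_level_cache", "true");
SessionFactory sessionFactory = cfg.buildSessionFactory();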

1) Object cache

After an object is fetched, Hibernate caches it with its ID as the key; the next time an object with the same ID is fetched, the cached copy can be used. Configure it as follows.

Method 1: configure it in the mapping of the cached class
<class ...>
    <cache usage="read-only/read-write/..." region="group"/>
</class>
usage indicates the caching strategy, such as read-only or read-write; for details, see the Hibernate reference guide. Note that in Hibernate's implementation some cache providers do not support every strategy; for example, the JBossCache integration in Hibernate supports only read-only caching. For details on which strategies each provider supports, see the org.hibernate.cache package. region names the cache region; most cache implementations partition the cache into regions, and this attribute is optional (see your cache implementation for details).

Method 2: configure it in hibernate.cfg.xml
<class-cache class="..." usage="..." region="..."/>
I think the second method is better, because the settings can be managed in one place.
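
A minimal sketch of what the object cache buys you, assuming Group is mapped with a <cache> element as above, a cache provider is enabled, and the ID is a Long:

Session s1 = sessionFactory.openSession();
Group g1 = (Group) s1.get(Group.class, 1L);   // SELECT issued; the group goes into the second-level cache
s1.close();

Session s2 = sessionFactory.openSession();
Group g2 = (Group) s2.get(Group.class, 1L);   // served from the second-level cache; no SQL issued
s2.close();
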
2) Query cache

The query cache saves query results with the query and its conditions as the key. It requires the following configuration:
A. In hibernate.cfg.xml (enable the query cache):
<property name="hibernate.cache.use_query_cache">true</property>
(For the property name, see the constant org.hibernate.cfg.Environment.USE_QUERY_CACHE.)
B. In code:
query.setCacheable(true);
query.setCacheRegion(...);
Note that the query cache is much more effective when combined with the object cache, because the query cache only stores the primary keys of the objects in the result list.
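
A sketch of query-cache usage, assuming the query cache is enabled as above and that User is also mapped with a <cache> element so the cached IDs can be resolved without hitting the database (the HQL string and region name are only illustrative):

Query q = session.createQuery("from User u where u.username = :name");
q.setParameter("name", "ayufox");
q.setCacheable(true);              // store the result IDs in the query cache
q.setCacheRegion("userQueries");   // optional, illustrative region name
List<User> users = q.list();       // repeating the same query with the same parameters hits the cache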

In development, apply the second-level cache to data that is relatively stable and frequently referenced, such as a data dictionary; do not apply it to queries whose conditions and underlying data change frequently. Because the second-level cache lives in memory, and Hibernate's cache does not hold its entries as weak references, be careful not to load large blocks of data into it, or it will put a lot of pressure on memory.

3. Batch Data Operations

When performing batch operations on large data sets (tens of thousands or even millions of records), pay attention to two points: 1. commit in batches; 2. clear unneeded first-level cache data in time.

1) Committing in batches means avoiding frequent Session flushes. Every time the Session is flushed, Hibernate synchronizes the state of persistent objects (POs) with the database, which is a performance disaster for massive data operations (committing several thousand records at once versus flushing after each record can make a difference of dozens of times). Data operations are usually placed inside a transaction, and Hibernate performs the flush automatically when the transaction is committed.

2) Clear unnecessary first-level cache data in time. Because Hibernate always uses the first-level cache, every object fetched during the Session's lifecycle is placed in it. When the data volume is large, the objects pulled into memory put considerable pressure on it. Data is therefore usually processed in batches, and after each batch the first-level cache is cleared, for example with session.clear().
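
A minimal sketch combining both points: a single transaction commit for the whole batch, with a periodic flush and clear so the first-level cache does not grow without bound (the User setter and the batch size of 50 are illustrative assumptions):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    User u = new User();
    u.setUsername("user" + i);   // assumed setter
    session.save(u);
    if (i % 50 == 0) {
        session.flush();   // push the pending inserts to the database
        session.clear();   // evict the saved objects from the first-level cache
    }
}
tx.commit();
session.close();
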
4. Miscellaneous
dynamic-insert and dynamic-update mean that when Hibernate inserts a record, only non-null fields appear in the INSERT statement, and when it updates a record, only the changed fields appear in the UPDATE statement. For example, given

class User
{
    id
    username
    password
}

if u.id = 1, u.username = 'ayufox' and u.password = null, then without dynamic-insert the SQL is
insert into users (id, username, password) values (1, 'ayufox', null)
and with it the SQL is
insert into users (id, username) values (1, 'ayufox')
In the same case, if u.password is then changed to '11', without dynamic-update the SQL is
update users set username = 'ayufox', password = '11' where id = 1
and with it the SQL is
update users set password = '11' where id = 1

The settings go in the class mapping file, as follows:
<class name="User" table="users" dynamic-insert="true/false" dynamic-update="true/false" ...>
</class>
The performance improvement from this setting is limited.

 
