Hibernate Performance Optimization (Part 2)

(Source: http://xiexiejiao.cn/hibernate/hibernate-performance-optimization-b.html)

Hibernate is an excellent ORM framework that the author has used for more than five years. Even after five years, though, the author would not claim true mastery of it. Its basic usage and features are easy to pick up, but realizing Hibernate's full potential, that is, Hibernate performance optimization, is something the author has only begun to explore. What follows is an excerpt from an expert's article on Hibernate optimization, kept here as guidance for future work. It continues the previous article and is well worth reading; Hibernate users, especially those still learning like the author, will get a lot out of it.

4.6 HQL Tuning

4.6.1 Index Tuning

HQL looks very similar to SQL; you can usually guess the SQL WHERE clause that an HQL where clause will produce. The fields in the WHERE clause determine which indexes the database will select.

A common mistake Hibernate developers make is to create a new index whenever a new WHERE clause appears. Because indexes add overhead to every data update, you should instead strive for a small number of indexes that cover as many queries as possible.
Section 4.1 advises you to collect all possible data search conditions. If that is not practical, you can use a backend profiling tool to gather all the SQL your application issues. Classifying those search criteria leaves you with a small set of indexes. You can also try adding an extra predicate to one WHERE clause so that it matches another WHERE clause and can share its index.

Example 7

Two UI searchers and one back-end daemon searcher query a table named iso_deals. The first UI searcher has predicates on the unexpectedFlag, dealStatus, tradeDate, and isOld properties.

The second UI searcher is driven by a filter the user types in, which includes properties other than tradeDate and isOld. All of these filter properties are optional at first.
The back-end searcher is based on the isOld, participantCode, and transactionType properties.
Further business analysis revealed that the second UI searcher was actually selecting data based on implicit unexpectedFlag and dealStatus values as well. We also made tradeDate a required filter property (every search filter should have required properties so that it can use a database index).

With this in mind, we constructed a composite index on unexpectedFlag, dealStatus, tradeDate, and isOld, in that order, so that both UI searchers can share it. (The order matters: if your predicates specify these attributes in a different order, or list other attributes before them, the database will not select the composite index.)

The back-end searcher differs so much from the UI searchers that we had to construct another composite index for it, on isOld, participantCode, and transactionType, in that order.

4.6.2 Bind Parameters vs. String Concatenation

You can construct an HQL WHERE clause with bind parameters or with string concatenation; the choice affects performance. The point of bind parameters is to let the database parse the SQL once and reuse the resulting execution plan for subsequent repeated requests, which saves CPU time and memory. However, to achieve optimal data access, different bind values may call for different SQL execution plans.

For example, a narrow data range might return only 5% of the table while a wide range might return 90%. The former is better served by an index, the latter by a full table scan.

The recommendation is that OLTP systems use bind parameters and data warehouses use string concatenation: OLTP typically inserts and updates small amounts of data repeatedly within transactions, whereas a data warehouse usually runs only a small number of SQL queries, for which the right execution plan matters more than saved CPU time and memory.
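A minimal sketch of the difference, assuming an open Session, the IsoDeal entity from the examples above, and hypothetical tradeDate/tradeDateLiteral variables:

// Bind parameter: the database parses the SQL once and can reuse the
// execution plan for subsequent values.
List deals = session
        .createQuery("from IsoDeal d where d.tradeDate >= :tradeDate")
        .setParameter("tradeDate", tradeDate)
        .list();

// String concatenation: every distinct value yields a new SQL text,
// forcing a fresh parse but allowing a value-specific execution plan.
List deals2 = session
        .createQuery("from IsoDeal d where d.tradeDate >= '" + tradeDateLiteral + "'")
        .list();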

What if you know that your OLTP searches should use the same execution plan for different bind values?

Oracle 9i and later can peek at the bind parameter value on the first invocation and generate an execution plan from it. Subsequent calls do not peek again; they reuse the previous plan.

4.6.3 Aggregation and Ordering

You can aggregate and ORDER BY in the database, or you can load all the data into the application's service layer up front and aggregate and order it there. The former is recommended, because the database is usually better at this than your application. It also saves network bandwidth and stays portable across databases.
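For instance, an HQL query like the following pushes both operations down to the database (a sketch; the dealStatus property is illustrative):

// The database performs the grouping and sorting and returns only the
// aggregated rows, saving bandwidth.
List rows = session
        .createQuery("select d.dealStatus, count(d) from IsoDeal d " +
                     "group by d.dealStatus order by count(d) desc")
        .list();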

The exception is when your application has business rules for aggregation and sorting that HQL does not support.

4.6.4 Overriding the Fetch Strategy

See section 4.7.1 for details.

4.6.5 Native Queries

Native query tuning is not actually directly related to HQL, but HQL does let you pass a native query straight to the underlying database. We do not recommend doing so, because native queries are not portable across databases.
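For illustration, here is what passing a native query through looks like (a sketch; the iso_deals table is from Example 7, and the Oracle-specific rownum predicate illustrates exactly the kind of construct that breaks portability):

// Native SQL bypasses HQL entirely; the rownum predicate ties this
// query to Oracle.
List deals = session
        .createSQLQuery("select * from iso_deals where rownum <= 10")
        .addEntity(IsoDeal.class)
        .list();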

4.7 Fetch Strategy Tuning

The fetch strategy determines how and when Hibernate retrieves associated objects once the application needs to access them. Chapter 20 of the HRD, "Improving performance", covers the topic well; here we focus on how to use it.

4.7.1 Overriding the Fetch Strategy

Different use cases may have different data fetching requirements. Hibernate lets you define the fetch strategy in two places: in the mapping metadata, and as an override in HQL or Criteria.

A common practice is to define a default fetch strategy in the mapping metadata based on the dominant fetching use case, and to override it in HQL or Criteria for the few use cases that differ.

Suppose PojoA and PojoB are instances in a parent-child relationship. If, per the business rules, you only occasionally need data from both ends of the association, you can declare a lazily loaded collection or proxy fetching. When you do need both ends, override the default with eager fetching, such as join fetching configured in HQL or Criteria.
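A sketch of such an override, assuming the mapping declares PojoA's pojoBs collection lazy and id is a hypothetical identifier value:

// The mapping default (lazy) applies everywhere else; this one use case
// overrides it with eager join fetching in HQL.
List pojoAs = session
        .createQuery("from PojoA a left join fetch a.pojoBs where a.id = :id")
        .setParameter("id", id)
        .list();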

On the other hand, if the business rules need data from both ends of the association most of the time, declare eager fetching and override it with a lazy collection or proxy fetching in Criteria (HQL does not currently support such an override).

4.7.2 N+1: Pattern or Anti-Pattern?

Select fetching causes the N+1 problem. If you know you will always need the associated data, you should always use join fetching. But in the following two scenarios you might regard N+1 as a pattern rather than an anti-pattern.

In the first scenario, you don't know whether the user will access the associated objects. If they never do, you win; otherwise you still pay for the N extra SELECT statements. It is a dilemma.

In the second scenario, PojoA has one-to-many associations with many other POJOs, say PojoB and PojoC. Eager inner or outer join fetching repeats PojoA many times in the result set. When PojoA has many non-null attributes, you end up loading a great deal of data into the persistence layer. That load takes time and network bandwidth, and if the Hibernate session is stateful there is also session caching overhead (memory consumption and GC pauses).

The situation is similar if you have a long one-to-many association chain, such as PojoA to PojoB to PojoC.

You might want to eliminate the duplicates with the DISTINCT keyword in HQL, the distinct function in Criteria, or a Java Set. But all of these de-duplicate inside Hibernate (in the persistence layer), not in the database.
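For example, both of the following remove the duplicated parents in the persistence layer (a sketch reusing the PojoA/PojoB names from above):

// HQL: distinct root entities, de-duplicated by Hibernate in memory.
List viaHql = session
        .createQuery("select distinct a from PojoA a left join fetch a.pojoBs")
        .list();

// Criteria: the equivalent via a result transformer.
List viaCriteria = session.createCriteria(PojoA.class)
        .setFetchMode("pojoBs", FetchMode.JOIN)
        .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY)
        .list();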

If tests on your network and memory configuration show that N+1 performs better, you can tune further with batch fetching, subselect fetching, or the second-level cache.

Example 8

The following HBM file fragment uses batch fetching:

<class name="PojoA" table="pojoa">
    ...
    <set name="pojoBs" fetch="select" batch-size="10">
        <key column="pojoa_id"/>
        ...
    </set>
</class>

Here is the SQL generated when fetching multiple PojoBs:

select ... from pojob where pojoa_id in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);

The number of question marks equals the batch-size value, so the N additional SELECT statements for PojoB are reduced to N/10.

If you replace fetch="select" with fetch="subselect", the SQL generated for PojoB looks like this:

select ... from pojob where pojoa_id in (select id from pojoa where ...);

Although the N extra SELECTs are reduced to one, this only pays off when re-running the PojoA query is cheap.

If the PojoB set in PojoA is stable, or PojoB has a many-to-one association to PojoA and PojoA is read-only reference data, you can use the second-level cache to cache PojoA and eliminate the N+1 problem (section 4.8.1 has an example).

4.7.3 Lazy Property Fetching

Unless you have a legacy table with many fields your application doesn't need, you shouldn't use this fetch strategy, because its lazy property groups bring extra SQL.

During business analysis and design, you should place different data retrieval or change groupings into different domain object entities rather than rely on this fetch strategy.

If you cannot redesign the legacy table, you can use the projection capability of HQL or Criteria to retrieve only the data you need.
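A sketch of such a projection (LegacyDeal and its properties are hypothetical names):

// Only the projected columns travel over the network; each row comes back
// as an Object[] instead of a fully populated entity.
List rows = session
        .createQuery("select d.id, d.status from LegacyDeal d")
        .list();
for (Object row : rows) {
    Object[] cols = (Object[]) row;   // cols[0] = id, cols[1] = status
}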

4.8 Second-Level Cache Tuning

The description in HRD section 20.2, "The Second Level Cache", is too brief for most developers to make an informed choice. Versions 3.3 and later deprecate the "CacheProvider"-based cache in favor of a "RegionFactory"-based cache, which adds to the confusion; even the latest 3.5 reference documentation does not explain how to use the new caching approach.

For the following reasons, we will continue to focus on the old approach: of the popular Hibernate second-level cache providers, only JBoss Cache 2, Infinispan 4, and Ehcache 2 support the new approach, while OSCache, SwarmCache, Coherence, and GigaSpaces XAP Data Grid support only the old one. Both approaches share the same <cache> configuration; for example, both use the same usage attribute values, "transactional|read-write|nonstrict-read-write|read-only". Moreover, several cache region adapters still have built-in support for the old approach, and understanding it helps you quickly grasp the new one.

4.8.1 The CacheProvider-Based Caching Mechanism

Understanding this mechanism is the key to making a reasonable choice. The key classes/interfaces are CacheConcurrencyStrategy, its four implementations for the different cache usages, and EntityUpdateAction, EntityDeleteAction, and EntityInsertAction.

For concurrent cache access there are three implementation patterns:

Read-only, for usage "read-only". Neither locks nor transactions matter, because the cache never changes once the data has been loaded from the database.

Non-transaction-aware read-write, for usages "read-write" and "nonstrict-read-write". Updates to the cache happen after the database transaction completes. The cache needs to support locks.

Transaction-aware read-write, for usage "transactional". Updates to the cache and to the database are wrapped in the same JTA transaction, so the cache is always in sync with the database. Both the database and the cache must support JTA. Although cache transactions internally rely on cache locks, Hibernate never explicitly calls any cache lock function.

Take a database update as an example. EntityUpdateAction has the following call sequences for transaction-aware read-write, non-transaction-aware "read-write", and non-transaction-aware "nonstrict-read-write", respectively:

In a JTA transaction, update the database; in the same transaction, update the cache.

Soft-lock the cache; in one transaction, update the database; after the transaction completes successfully, update the cache; otherwise, release the soft lock. A soft lock is just a way to invalidate a cached value so that other transactions cannot read or write that entry before it receives the new database value; those transactions read the database directly instead. The cache must support locks, but transaction support is not required. If the cache is clustered, the "update the cache" call pushes the new value to all replicas, which is often called a "push" update policy.

In one transaction, update the database; before the transaction completes, invalidate the cache; and, for safety, invalidate the cache again after the transaction completes, whether or not it succeeded. Neither cache locks nor cache transactions are required. If the cache is clustered, the "invalidate the cache" call invalidates all replicas, which is often called a "pull" update policy.

The call sequences are similar for entity deletion and insertion, and for collection changes.

In fact, the last two asynchronous call sequences still guarantee consistency between database and cache (essentially "read committed" isolation). This is thanks to the soft lock and the update-the-cache-after-updating-the-database ordering in the second sequence, and to the pessimistic "invalidate the cache" in the last sequence.

Based on the above analysis, our advice is: if your data is read-only, such as reference data, always use the "read-only" strategy; it is the simplest, best-performing, and cluster-safe strategy. Unless you really need to put cache updates and database updates in a single JTA transaction, do not use the "transactional" strategy, because JTA requires a lengthy two-phase commit, which makes it essentially the worst-performing strategy.

In the author's view, the second-level cache is not a primary data source, so using JTA for it may not be justified. In fact, the last two call sequences are a good alternative in most scenarios thanks to their consistency guarantees. If your data is read a great deal and rarely accessed or updated concurrently, use the "nonstrict-read-write" strategy; thanks to its lightweight "pull" update policy, it is usually the second-best performer. If your data is both read and written, use the "read-write" strategy; it is usually the second-worst performer, because it requires cache locks and, in a cluster, the heavyweight "push" update policy.
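As a concrete illustration, if you map with Hibernate Annotations rather than hbm.xml, the strategy choice is a single attribute on the entity (a sketch using the IsoChargeType entity from Example 9 below; the same choice is made in hbm.xml with the <cache> element):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Read-only reference data: the simplest, fastest, cluster-safe strategy.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class IsoChargeType {
    @Id
    private Long id;
    // ...
}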

Example 9

The following is an HBM file fragment for an ISO charge type:

<class name="IsoChargeType">
    <property name="isoId" column="iso_id" not-null="true"/>
    <many-to-one name="estimateMethod" fetch="join" lazy="false"/>
    <many-to-one name="allocationMethod" fetch="join" lazy="false"/>
    <many-to-one name="chargeTypeCategory" fetch="join" lazy="false"/>
</class>

Some use cases need only the ISO charge type itself; others need the ISO charge type plus its three associated objects. For simplicity, the developers eagerly loaded all three associated objects everywhere. This is common when nobody on the project is in charge of Hibernate tuning.

The better approach was described in section 4.7.1. Because all the associated objects are read-only reference data, another option is to use lazy fetching and turn on the second-level cache for those objects, avoiding the N+1 problem. In fact, the former approach can also benefit from the reference-data cache.

Because most projects have a lot of read-only reference data referenced by other data, both approaches can improve overall system performance.

4.8.2 RegionFactory

The following table maps the main classes/interfaces of the new approach to those of the old one:

New Approach                      Old Approach
RegionFactory                     CacheProvider
Region                            Cache
EntityRegionAccessStrategy        CacheConcurrencyStrategy
CollectionRegionAccessStrategy    CacheConcurrencyStrategy

The first improvement is that a RegionFactory builds specific regions, such as an EntityRegion and a TransactionRegion, rather than a general-purpose region. The second improvement is that, for a given cache's "usage" attribute value, a region builds its own access strategy, instead of every region always using all four implementations of CacheConcurrencyStrategy.

To use the new approach, set the factory_class configuration property instead of provider_class. Using Ehcache 2.0 as an example:

<property name="hibernate.cache.region.factory_class">
    net.sf.ehcache.hibernate.EhCacheRegionFactory
</property>

The other Hibernate cache configurations are the same as in the old approach.

The new approach is also backward-compatible with the legacy one. If you configure only a CacheProvider, the new approach implicitly invokes the old interfaces/classes through the following self-explanatory adapters and bridges:

RegionFactoryCacheProviderBridge, EntityRegionAdapter, CollectionRegionAdapter, QueryResultsRegionAdapter, EntityAccessStrategyAdapter, and CollectionAccessStrategyAdapter.

4.8.3 Query Caching

The second-level cache can also cache query results. This helps when an expensive query runs repeatedly.
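A sketch of caching a query's results, assuming hibernate.cache.use_query_cache is set to true and the entity's cache region is enabled:

// setCacheable stores the result ids in the query cache; the entities
// themselves are served from the second-level cache.
List types = session
        .createQuery("from IsoChargeType")
        .setCacheable(true)
        .list();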

4.9 Batch Processing Tuning

Most Hibernate features suit OLTP systems well, where each transaction typically handles only a small amount of data. If you have a data warehouse, or transactions that must process large amounts of data, it is a different story.

4.9.1 Non-DML-Style Batching with a Stateful Session

If you are already using regular sessions, this is the most natural approach. You need to do three things. First, configure the following three properties to turn on batching:

hibernate.jdbc.batch_size 30
hibernate.jdbc.batch_versioned_data true
hibernate.cache.use_second_level_cache false

Setting batch_size to a positive value turns on JDBC2 batch updates; Hibernate recommends values between 5 and 30. In our tests, both extremely low and extremely high values performed poorly; anywhere within a reasonable range, the difference is only a few seconds, and on a fast enough network that is the certain outcome.

Second, setting batch_versioned_data to true requires the JDBC driver to return the correct row counts from executeBatch(). Oracle users cannot set it to true for batch updates; read "Update Counts in the Oracle Implementation of Standard Batching" in Oracle's JDBC Developer's Guide and Reference for details. Because it is still safe for batch inserts, you can create a separate dedicated data source for them. The last configuration entry is optional, because you can also explicitly disable the second-level cache on the session. Third, periodically flush and clear the primary session cache, as in the following example:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(...);
    // if your hibernate.cache.use_second_level_cache is true, call:
    session.setCacheMode(CacheMode.IGNORE);
    session.save(customer);
    if (i % 50 == 0) { // 50, same as the JDBC batch size
        // flush a batch of inserts and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();

Batch processing usually needs no data caching; otherwise you will run out of memory and increase GC overhead. This is obvious when memory is limited. Always nest bulk inserts inside a transaction.

Fewer objects modified per transaction means more database commits, and each commit carries disk-related overhead, as noted in section 4.5.

On the other hand, more objects modified per transaction means locks are held longer, and the database needs a larger redo log.

4.9.2 Non-DML-Style Batching with a Stateless Session

A stateless session performs better than the previous approach because it is just a thin wrapper over JDBC and sidesteps much of what a regular session must do. For example, it requires no session cache and never interacts with any second-level or query cache.
However, it is not as easy to use. In particular, its operations do not cascade to associated instances; you must handle those yourself.
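A minimal sketch of the stateless variant of the earlier insert loop (Customer is the same illustrative entity):

StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer();
    // goes straight through to JDBC: there is no session cache to flush
    // or clear, and nothing cascades to associated instances
    session.insert(customer);
}
tx.commit();
session.close();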

4.9.3 DML-Style Batching

With DML-style insert, update, or delete statements you manipulate the data directly in the database, unlike the previous two approaches, which manipulate it in Hibernate.

Because one DML-style update or delete is equivalent to many individual updates or deletes in the previous approaches, DML-style operations save network overhead and should perform better, provided the WHERE clause of the update or delete hits an appropriate database index.

It is strongly recommended to combine DML-style operations with stateless sessions. If you use a stateful session, do not forget to clear the cache before executing DML; otherwise Hibernate will update or invalidate the related caches (see Example 10 below).
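A sketch of a DML-style bulk update (the property names and bind values are illustrative):

// One UPDATE statement in the database replaces N individual updates.
int updated = session
        .createQuery("update IsoDeal d set d.status = :status " +
                     "where d.tradeDate = :tradeDate")
        .setParameter("status", "InProcess")
        .setParameter("tradeDate", tradeDate)
        .executeUpdate();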

4.9.4 Bulk Loading

If your HQL or Criteria returns a lot of data, pay attention to two things. First, turn on batch fetching with the following configuration:

hibernate.jdbc.fetch_size 10

Setting fetch_size to a positive value turns on JDBC batch fetching. It matters more on a slow network than on a fast one. Oracle's recommended value from experience is 10; you should test in your own environment. Second, turn off caching when using either loading method, because bulk loading is usually a one-off task. With limited memory, loading a large amount of data into the cache usually just means it will be evicted soon afterwards, which increases GC overhead.

Example 10

We have a background task that loads a large amount of IsoDeal data, chunk by chunk, for subsequent processing. We also update each chunk to an "in process" status before handing it to the downstream system. The largest chunk has 500,000 rows. The following is an excerpt from the original code:

Query query = session.createQuery("FROM IsoDeal d WHERE chunk-clause");
query.setLockMode("d", LockMode.UPGRADE); // for the InProcess status update
List<IsoDeal> isoDeals = query.list();
for (IsoDeal isoDeal : isoDeals) { // update status to InProcess
    isoDeal.setStatus("InProcess");
}
return isoDeals;

The method containing this code carried a Spring 2.5 declarative transaction annotation. Loading and updating the 500,000 rows took about 10 minutes. We identified the following problems: the system frequently ran out of memory because of the session cache and the second-level cache; even without memory overflow, GC overhead was significant while memory consumption was high; fetch_size had never been set; and even with batch_size set, the for loop still created too many UPDATE SQL statements.

Unfortunately, Spring 2.5 does not support Hibernate's stateless sessions, so all we could do was turn off the second-level cache, set fetch_size, and replace the for loop with a DML-style update.

Even so, execution still took 6 minutes. After switching Hibernate's log level to trace, we found it was updating the session cache that caused the delay. By clearing the session cache before the DML update, we cut the time to 4 minutes, virtually all of it spent loading the data into the session cache.
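A sketch of the final shape of the fix ("chunk-clause" is the same placeholder used in the original code):

// Clear the session cache first so the DML-style update does not have
// to reconcile 500,000 cached entities.
session.clear();
int updated = session
        .createQuery("update IsoDeal d set d.status = 'InProcess' where chunk-clause")
        .executeUpdate();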

4.10 SQL Generation Tuning

This section shows how to reduce the number of SQL statements Hibernate generates.

4.10.1 The N+1 Fetching Problem

The select fetching strategy can cause N+1 problems. If join fetching is right for you, always use it to avoid them.

However, if join fetching does not perform well, as in section 4.7.2, you can reduce the number of extra SQL statements with subselect fetching, batch fetching, or lazy collection fetching.

4.10.2 The insert+update Problem

Example 11

Our ElectricityDeal has a unidirectional one-to-many association with DealCharge, as in the following HBM file fragment:

<class name="ElectricityDeal"
       select-before-update="true" dynamic-update="true"
       dynamic-insert="true">
    <id name="key" column="id">
        <generator class="sequence">
            <param name="sequence">seq_electricity_deals</param>
        </generator>
    </id>
    ...
    <set name="dealCharges" cascade="all-delete-orphan">
        <key column="deal_key" not-null="false" update="true"
             on-delete="noaction"/>
        <one-to-many class="DealCharge"/>
    </set>
</class>

In the "key" element, the default value for "Not-null" and "Update" is false and true, and the code above is written to clarify these values.

If you create one ElectricityDeal and 10 DealCharges, Hibernate generates the following SQL statements: 1 INSERT for ElectricityDeal; 10 INSERTs for DealCharge that do not include the foreign key "deal_key"; and 10 UPDATEs for the DealCharge "deal_key" field.

To eliminate the 10 extra UPDATE statements, have "deal_key" included in the 10 DealCharge INSERT statements by changing "not-null" and "update" to true and false respectively.

Another approach is to use a bidirectional or many-to-one association and let DealCharge manage the association.

4.10.3 Select Before Update

In Example 11 we added select-before-update to ElectricityDeal, which produces an extra SELECT for transient or detached objects but avoids unnecessary database updates.

This is a trade-off: if the object has few properties and you don't need to prevent unnecessary updates, skip the feature, because your limited data produces little network overhead and no costly extra updates.

If the object has many properties, such as in a large legacy table, you should turn the feature on and combine it with "dynamic-update" to avoid costly database updates.

4.10.4 Cascading Deletes

In Example 11, if you delete one ElectricityDeal and its 100 DealCharges, Hibernate issues 100 DELETE statements for DealCharge.

If you change "on-delete" to "cascade", Hibernate performs no DealCharge delete actions at all; the database deletes the 100 DealCharges automatically via its ON DELETE CASCADE constraint. That said, you need your DBA to enable the ON DELETE CASCADE constraint, and most DBAs are unwilling to, because they want to avoid an accidental deletion of a parent object cascading to its dependents. Also note that this feature bypasses Hibernate's usual optimistic locking strategy for versioned data.

4.10.5 The Enhanced Sequence Identifier Generator

Example 11 uses an Oracle sequence as the identifier generator. Suppose we save 100 ElectricityDeals; Hibernate executes the following SQL statement 100 times to fetch the next available identifier:

select seq_electricity_deals.nextval from dual;

If the network is not fast, this clearly hurts efficiency. Versions 3.2.3 and later add an enhanced generator, "SequenceStyleGenerator", with two optimizers: hilo and pooled. Although chapter 5 of the HRD, "Basic O/R Mapping", mentions the two optimizers, the coverage is limited. Both use the HiLo algorithm: an identifier equals the hi value plus the lo value, where the hi value represents a group number, the lo value cycles from 1 up to the maximum group size, and the group number advances by 1 each time the lo value wraps back to 1.

Assuming the group size is 5 (represented by either the max_lo or the increment_size parameter), here is an example:

group number (sequence value)    hi value (group number × 5)    identifier values (hi + lo)
1                                5                              6, 7, 8, 9, 10
2                                10                             11, 12, 13, 14, 15
3                                15                             16, 17, 18, 19, 20

With the hilo optimizer, the group number is taken from the next available value of the database sequence, and the hi value is defined by Hibernate as the group number multiplied by the increment_size parameter value.

With the pooled optimizer, the hi value is taken directly from the next available value of the database sequence, and the increment of the database sequence must be set to the increment_size parameter value.

Neither optimizer touches the database until the values in the in-memory group are exhausted; in the example above, the database is accessed once per 5 identifier values. With the hilo optimizer, your sequence can no longer be used by other applications unless they apply the same logic as Hibernate. With the pooled optimizer, it is entirely safe for other applications to use the same sequence.

Both optimizers share one problem: if Hibernate crashes, some identifier values in the current group are lost. Most applications, however, do not require consecutive identifier values (and if your database, say Oracle, caches sequence values, you already lose values on a crash).

Using the pooled optimizer in Example 11, the new id configuration looks like this:

<id name="key" column="id">
    <generator class="org.hibernate.id.enhanced.SequenceStyleGenerator">
        <param name="sequence_name">seq_electricity_deals</param>
        <param name="initial_value">0</param>
        <param name="increment_size">100</param>
        <param name="optimizer">pooled</param>
    </generator>
</id>
5 Summary

This article has covered most of the tuning techniques you will find useful for Hibernate applications, devoting most of its time to tuning topics that work well but are poorly documented, such as inheritance mapping, the second-level cache, and the enhanced sequence identifier generator.

It also mentioned the database knowledge that some Hibernate tuning requires, and several examples contain practical solutions to problems you may encounter.

In addition, it is worth mentioning that Hibernate can work with in-memory data grids (IMDG), such as Oracle Coherence or the GigaSpaces IMDG, which can bring your application down to millisecond-level latency.

6 Resources

[1] Latest Hibernate Reference Documentation on jboss.com

[2] Oracle 9i Performance Tuning Guide and Reference

[3] Performance Engineering on Wikipedia

[4] Program Optimization on Wikipedia

[5] Pareto Principle (the 80/20 rule) on Wikipedia

[6] Premature Optimization on acm.org

[7] Java Performance Tuning by Jack Shirazi

[8] The Law of Leaky Abstractions by Joel Spolsky

[9] Hibernate's StatisticsService MBean Configuration with Spring

[10] JProbe by Quest Software

[11] Java VisualVM

[12] Column-Oriented DBMS on Wikipedia

[13] Apache DBCP BasicDataSource

[14] JDBC Connection Pool by Oracle

[15] Connection Failover by Oracle

[16] Last Resource Commit Optimization (LRCO)

[17] GigaSpaces for Hibernate ORM Users

About the Authors

Yongjun Jiao is a technical director at SunGard Consulting Services. He has been a professional software developer for the past 10 years; his expertise covers Java SE, Java EE, Oracle, and application tuning. His recent focus is on high-performance computing, including in-memory data grids, parallel computing, and grid computing.

Stewart Clark is a principal at SunGard Consulting Services. He has been a professional software developer and project manager for the past 15 years; his expertise covers core Java programming, Oracle, and energy trading.

Excerpted from: InfoQ
Original English article: http://www.infoq.com/articles/hibernate_tuning
