Comparison between Hibernate and Ibatis

Source: Internet
Author: User

Hibernate and Ibatis Comparison

Hibernate is currently the most popular O/R mapping framework. It originated on SourceForge (sf.net) and is now part of JBoss.
iBatis is another excellent O/R mapping framework and is now an Apache sub-project.
Where Hibernate does full "O/R" mapping, iBatis is a "SQL mapping" implementation.
Hibernate provides a more complete encapsulation of the database structure: its O/R mapping handles the mapping between POJOs and database tables, as well as the automatic generation and execution of SQL. Programmers usually only need to define the mapping between POJOs and tables, and can then use the methods Hibernate provides to perform persistence-layer operations. Programmers do not even need to be proficient in SQL; Hibernate (and similar tools such as OJB) automatically generates the appropriate SQL according to the configured persistence logic and executes it through the JDBC interface.
The focus of iBatis is the mapping between POJOs and SQL. In other words, iBatis does not generate SQL for you at run time. The concrete SQL is written by the programmer and then, through a mapping configuration file, bound to a specified POJO along with the parameters the SQL needs and the result fields it returns.
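As a sketch of what such a mapping looks like, here is a hypothetical iBatis SQL map fragment (the statement id, parameter class, table, and result class are invented for illustration):

```xml
<!-- Hypothetical SQL map fragment: the programmer writes the SQL by hand,
     and iBatis binds #id# from the parameter object and maps the result
     columns onto the fields of the example.User POJO. -->
<select id="getUserById" parameterClass="long" resultClass="example.User">
  SELECT id, name, email FROM users WHERE id = #id#
</select>
```

The framework handles parameter binding and result mapping; the SQL itself stays fully under the programmer's control.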
With the ORM mechanism iBatis provides, the business logic layer still works with pure Java objects.
In that respect it is consistent with a full ORM implementation; the difference is that Hibernate automatically generates the SQL for each data operation, while iBatis requires developers to write the SQL themselves. Compared with Hibernate, iBatis trades more SQL development work and some database portability for greater freedom in system design.
Hibernate vs. Ibatis:

1. iBatis is simple and easy to learn; Hibernate is more complex and has a higher barrier to entry.
2. Both are high-quality open source products.
3. When the system is a secondary development effort and you cannot control or modify the database structure, the flexibility of iBatis makes it a better fit than Hibernate.
4. Huge data volumes and extremely strict performance requirements often mean that performance targets can only be met through highly optimized SQL statements (or stored procedures). In such cases, iBatis offers better controllability and performance.
5. iBatis requires SQL statements (or at least part of them) to be written by hand; Hibernate generates them automatically, with only occasional HQL. For the same requirement, iBatis takes considerably more work than Hibernate. Likewise, if database fields change, Hibernate needs few modifications, while iBatis requires every affected SQL mapping to be changed one by one.
6. An iBatis PO, mapped field by field to database columns, is flat; a Hibernate PO can express object inheritance, aggregation, and other object-oriented relationships. This difference directly affects the design of the whole software system.
7. Hibernate is now the mainstream O/R mapping framework, and it is stronger than iBatis in richness of documentation, product completeness, and speed of version development.

Analysis of the principles of iBatis

In fact, if you use iBatis, iBatis runs essentially the same JDBC code behind the scenes: it obtains a database connection, sets the parameters of the SQL statement, executes it, reads the results, and finally closes all resources. The amount of code you have to write yourself, however, is greatly reduced. Code Listing 2-3 shows the code you need to write to run the same SQL statement through iBatis.

2.1 Why iBATIS works for small, simple systems

Small applications typically involve only a single database, a fairly simple user interface, and a simple domain model. Their business logic layer is thin; simple applications that only perform CRUD (create, read, update, delete) operations sometimes have no real business logic at all. iBatis is a great fit for small applications, for three reasons.

First, iBatis itself is small and simple. It does not require a server or any other kind of middleware, and no additional infrastructure is needed. iBatis has no third-party dependencies: the simplest installation requires just two JAR files totaling about 375 KB. Beyond writing your SQL mapping files, iBatis requires no installation steps, so you can have a working persistence layer in a few minutes.

Second, iBatis imposes nothing on the design of an existing application or its database structure. So even if you have a small system that is partially implemented or already released, you can still refactor its persistence layer to use iBatis, and doing so is simple. Because iBatis is simple, it does not make your application's architecture any more complex. Object/relational mapping tools and code generators, by contrast, make assumptions about the application and the database design, so they are unlikely to leave the application's architecture untouched.

Finally, anyone with software development experience knows that almost every small piece of software eventually grows into a big one; all successful software tends to keep growing. That is a good thing, because, as we discuss next, iBatis is also well suited to large systems and can even scale to meet the needs of enterprise-class applications.

2.2 Why iBatis works for large, enterprise-class systems

iBatis was designed for enterprise-class applications and, most importantly, has a number of advantages over other solutions in this area. Its original creators drew on experience developing large, enterprise-class application systems. Such systems usually involve more than one database, and not all of those databases are under your control. In the first chapter we discussed the various kinds of databases involved, including enterprise databases, proprietary databases, and legacy databases; targeting such databases was one important reason the authors created the iBatis framework. As a result, iBatis has many features that make it ideal for enterprise application environments.

In fact, we have already stated the first reason iBatis suits large systems, but it is important enough to stress again: iBatis makes no assumptions about your database model or your object model. No matter how mismatched those two models are, iBatis can still be applied. Further, iBatis makes no assumptions about your enterprise architecture. Whether you partition your databases vertically by business function or horizontally, iBatis lets you process the data efficiently and integrate it into your object-oriented application.

Second, several iBatis features make it possible to process very large data sets efficiently. The row handler that iBatis supports lets it process a very large record set one record at a time. iBatis also supports fetching only a range of results, so you retrieve only the data you need right now: if a query matches 10,000 records but you only need records 500 through 600, you can fetch just those. iBatis supports driver hints that make such operations very efficient.
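The "range of results" idea can be sketched in plain Java. This is an illustration of the skip/max concept only, not iBatis API code, and the class and method names are invented:

```java
import java.util.List;

public class RangeFetch {
    // Plain-Java sketch of skip/max result limiting: return only the
    // window of rows actually needed (e.g. rows 500-599 of 10,000).
    public static <T> List<T> fetchRange(List<T> all, int skip, int max) {
        int from = Math.min(skip, all.size());
        int to = Math.min(from + max, all.size());
        return all.subList(from, to);
    }
}
```

iBatis exposes the same idea through query methods that take skip/max style arguments, and can push the limit down to the JDBC driver where the driver supports it.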

Finally, iBatis lets you build the object-to-database mapping in several different ways. It is rare for an enterprise-class system to work in only one mode; many perform transactional work during the day and batch work at night. iBatis lets you map the same class in multiple ways to ensure each job runs in the most efficient manner. It also supports several data-fetching strategies: for example, you can lazy-load some data, or load a complex object graph with a single join query to avoid severe performance problems.

This may sound like a sales pitch. Since we are in that mode anyway, why not go straight into the reasons you might want to use iBatis? We do that in section 2.3. And in fairness, in section 2.4 we discuss some situations where you should not use iBatis.

-------- Hibernate caching (from: http://blog.csdn.net/woshichenxu/article/details/586361)

1. Hibernate caching: 1.1. Basic caching principles

The Hibernate cache has two levels. The first level lives inside the Session and is called the first-level cache; it is enabled by default and cannot be turned off.

The second level is the process-level cache controlled by the SessionFactory. It is a globally shared cache, and any query method that uses the second-level cache benefits from it, but only if the second-level cache is configured correctly. You must also use the appropriate methods to fetch data from it when running conditional queries, such as Query.iterate(), load(), and get(). Note that Session.find() always fetches data from the database and never from the second-level cache, even when the cache holds the data it needs.

The lookup order at query time is: first check the first-level cache for the required data; if it is not there, check the second-level cache; if it is not there either, query the database. Note that these three paths are progressively slower.
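The lookup order described above can be sketched in plain Java. This is an illustration only, not Hibernate internals, and the class and field names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the query path: first-level (session) cache, then
// second-level (SessionFactory) cache, then the database.
public class CacheLookup {
    final Map<Long, String> firstLevel  = new HashMap<>(); // session scope
    final Map<Long, String> secondLevel = new HashMap<>(); // process scope
    final Map<Long, String> database    = new HashMap<>();
    int dbHits = 0; // counts real database queries

    public String load(Long id) {
        String v = firstLevel.get(id);
        if (v != null) return v;          // fastest path: session cache
        v = secondLevel.get(id);
        if (v == null) {                  // slowest path: hit the database
            v = database.get(id);
            dbHits++;
            secondLevel.put(id, v);       // populate the shared cache
        }
        firstLevel.put(id, v);            // keep session state consistent
        return v;
    }
}
```

A second lookup for the same id is served entirely from the caches, which is the whole point of configuring the second level.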

Because a Session's lifetime is usually very short, so is the lifetime of the first-level cache inside it; its hit rate is therefore low, and its contribution to system performance is limited. The main purpose of this internal session cache is to keep the session's data state consistent, not to deliver a significant performance boost.

To improve Hibernate's performance, in addition to the usual techniques that need attention, such as lazy loading, eager outer-join fetching, and query filtering, you also need to configure Hibernate's second-level cache. Its effect on the overall performance of the system is often immediate!

(In my experience from previous projects, it generally brings a performance improvement of 100% or more.)

1.2. The n+1 query problem

When you run a conditional query, the iterate() method suffers from the well-known "n+1" problem: on the first query, iterate executes one additional statement per matching result (n+1 statements in total). The problem only exists on the first query, however; later executions of the same query are much faster thanks to the cache. This makes the method suitable for querying business data even in large volumes.
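The statement counts behind this can be simulated in plain Java. This is a sketch of the pattern, not Hibernate code, and the names are invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simulation of the "n+1" pattern behind Query.iterate(): one query
// for the matching IDs, then one additional load per ID that misses
// the cache. On a warm cache only the ID query remains.
public class IterateSim {
    final Map<Long, String> cache    = new HashMap<>();
    final Map<Long, String> database = new LinkedHashMap<>();
    int statements = 0; // SQL statements "sent" to the database

    public List<String> iterate() {
        statements++;                      // the "1": SELECT id WHERE ...
        List<String> out = new ArrayList<>();
        for (Long id : database.keySet()) {
            String v = cache.get(id);
            if (v == null) {
                statements++;              // the "n": one SELECT per cache miss
                v = database.get(id);
                cache.put(id, v);
            }
            out.add(v);
        }
        return out;
    }
}
```

For n matching rows, a cold cache costs n+1 statements; a warm cache costs just 1.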

Note, however: when the data volume is especially large (pipeline/flow data, for example), you need to configure a specific cache policy for that persistent object, such as the maximum number of records in the cache and the cache expiry time, to prevent the system from loading huge amounts of data into memory and quickly exhausting memory resources, which would lower system performance instead of raising it!

1.3. Additional considerations for the Hibernate second-level cache: 1.3.1. Data validity

In addition, Hibernate itself maintains the data in the second-level cache to keep the cached data consistent with the real data in the database! Whenever you pass an object to save(), update(), or saveOrUpdate(), or retrieve an object with load(), get(), list(), iterate(), or scroll(), that object is added to the Session's internal cache. When flush() is subsequently called, the object's state is synchronized with the database.

This means the cache is updated whenever data is deleted, updated, or inserted, and that includes the second-level cache!

As long as you perform database work through the Hibernate API, Hibernate automatically guarantees the validity of your cached data!

However, if you bypass Hibernate and operate on the database directly through JDBC, Hibernate cannot perceive the changes made to the database and can no longer guarantee the validity of the data in the cache!

This is a problem common to all ORM products. Fortunately, Hibernate exposes cache-eviction methods, giving us the opportunity to guarantee data validity manually!

Both the first-level and the second-level cache have corresponding cleanup methods.

The cleanup methods provided for the second-level cache can:

empty the cache by object class;

empty the cache by object class and the object's primary key ID;

empty the cached data of an object collection; and so on.

1.3.2. Suitable use case

Not every situation is suitable for the second-level cache; it has to be judged case by case. You can also configure a specific caching policy for each persistent object.

Data suitable for the second-level cache:

1. Data that will not be modified by third parties.

In general, data that is modified outside Hibernate is best left without a second-level cache, to avoid inconsistent data. However, if such data must be cached for performance reasons and may be modified by third parties, for example through raw SQL, you can still configure a second-level cache for it; you just have to call the cache's eviction method manually after the external SQL runs, to keep the data consistent.

2. Data whose size is within an acceptable range.

If a table holds an especially large amount of data, it is not suitable for the second-level cache: caching too much data strains memory resources and degrades performance instead.

If the table is very large but only a relatively recent portion of it is used frequently, you can still configure a second-level cache for it. However, you must configure the persistent class's caching policy separately, setting parameters such as the maximum cached record count and the cache expiry time to a reasonable range (too high strains memory; too low makes the cache pointless).

3. Data that is updated infrequently.

For data updated too frequently, the cost of constantly synchronizing the cache can equal or exceed the benefit gained from querying it, so caching has little value.

4. Non-critical data (not financial data, etc.).

Financial and similar data are critically important; invalid data absolutely must not appear or be used, so for safety it is best not to use the second-level cache for them.

At that point, "correctness" matters far more than "high performance".

1.4. Using the Hibernate cache in the current system: the current situation

In a typical system there are three scenarios that bypass Hibernate to perform database operations:

1. Multiple application systems accessing the same database at the same time.

In this case, using the Hibernate second-level cache will inevitably cause inconsistent data, so careful design is needed: for example, avoiding simultaneous writes to the same data table, or using the database's various locking mechanisms.

2. Dynamic tables.

A "dynamic table" is a data table created automatically at run time based on the user's actions in the system.

For example, "custom forms" and similar user-customizable extension features create their data tables at run time, so Hibernate cannot map them; the only option is to bypass Hibernate and manipulate the database directly through JDBC.

If the data in a dynamic table is not cached, no data-inconsistency problem arises.

If you design your own caching mechanism for it, call your own cache synchronization methods.

3. Using SQL to bulk-delete rows from tables mapped to Hibernate persistent objects.

After a bulk delete, the deleted data is still present in the cache.

Analysis:

After step 3 (a SQL bulk delete) has run, a subsequent query can only take one of the following three forms:

A. The Session.find() method:

As summarized earlier, find() does not read the second-level cache; it queries the database directly.

So there is no data-validity problem.

B. Calling the iterate() method for a conditional query:

Because of how iterate() works, it queries the database for the ID values that satisfy the condition on every call, and then fetches the data for each ID from the cache, executing a database query only for IDs missing from the cache.

If a record has been deleted directly via SQL, the ID query run by iterate() will no longer return that ID. So even if the cache still holds the record, it is never handed to the client, and there is no inconsistency. (This case has been tested and verified.)

C. Querying by ID with get() or load():

Objectively, stale data can be returned here. But SQL bulk deletes are generally performed on intermediate association tables, and intermediate association tables are usually read through conditional queries; the probability of querying an association by ID is very low, so in practice this problem does not arise either.

If some value object really does need to be queried by ID, and SQL bulk deletion is used because of the data volume, then when both conditions hold you can guarantee correct ID-based queries by manually clearing that object's data from the second-level cache!

(This situation is rare.)

1.5. Recommendations

1. It is recommended not to use raw SQL to update the data of persistent objects, though bulk deletes are acceptable. (The system has few places that require batch updates anyway.)

2. If you must use SQL to perform updates, you must empty that object's cached data by calling

SessionFactory.evict(Class)

SessionFactory.evict(Class, id)

and similar methods.

3. If the volume of data to bulk-delete is small, you can simply use Hibernate's own batch deletion; nothing bypasses Hibernate, so no cache consistency problem arises from executing raw SQL.

4. It is not recommended to use Hibernate's batch deletion to delete large volumes of records.

The reason is that Hibernate's batch delete executes one query statement plus n DELETE statements for the rows satisfying the condition, rather than a single conditional DELETE statement!

When there is a lot of data to delete, this becomes a serious performance bottleneck! If the volume of deleted data is large, say more than 50 records, it can be deleted directly through JDBC; the benefit is that only one SQL DELETE statement is executed, which greatly improves performance. For the resulting cache synchronization problem, you can use Hibernate's methods for clearing the related data from the second-level cache:

call SessionFactory.evict(Class), SessionFactory.evict(Class, id), and so on.

So, for ordinary application-system development (not involving clustering, distributed data synchronization, and the like), raw SQL is only invoked for bulk deletion on intermediate association tables, and intermediate association tables are generally read through conditional queries, with ID-based queries being unlikely. You can therefore execute the SQL delete directly, without even calling the cache cleanup methods; doing so will not cause data-validity problems if a second-level cache is configured later.

And even if a method that queries an intermediate-table object by ID is called after all, the problem can be resolved by calling the cache-clearing methods.
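The statement counts behind recommendation 4 can be sketched as a simple cost model (an illustration of the behavior described above, not code that measures Hibernate itself):

```java
// Cost sketch for the two bulk-delete strategies described above.
public class DeleteCost {
    // Hibernate 2.x-style batch delete: one query to find the matching
    // rows, plus one DELETE statement per matching row.
    public static int hibernateStatements(int matchingRows) {
        return 1 + matchingRows;
    }

    // Direct JDBC: a single conditional DELETE statement, regardless
    // of how many rows match.
    public static int jdbcStatements(int matchingRows) {
        return 1;
    }
}
```

Deleting 50 rows thus costs 51 statements through batch deletion but only 1 through a direct conditional DELETE, which is why JDBC plus a manual cache evict wins for large deletes.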

2. Specific configuration method

Among the many Hibernate users I know, people superstitiously believe that "Hibernate will handle performance for us" or "Hibernate will automatically use the cache for all our operations." The reality is that Hibernate provides a good caching mechanism and support for pluggable cache frameworks, but they must be called correctly before they can do anything! Many performance problems in Hibernate-based systems are therefore not Hibernate's fault, but the result of users not understanding how to use it correctly. Conversely, with the correct configuration, Hibernate's performance can give you a pleasant surprise. Below I explain the specific configuration.

Hibernate provides a second-level cache interface,
net.sf.hibernate.cache.CacheProvider,
along with a default implementation, net.sf.hibernate.cache.HashtableCacheProvider;
other implementations such as EhCache and JBossCache can also be configured.

The configuration goes in the hibernate.cfg.xml file:
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.provider_class">net.sf.hibernate.cache.HashtableCacheProvider</property>

Many Hibernate users configure up to this step and think they are done.
Note: this configuration alone does not actually make Hibernate use the second-level cache! And because most users close the Session immediately after use, the first-level cache plays no role either. The result is that no cache is used at all, and every Hibernate operation goes straight to the database. The performance is easy to imagine.

The correct approach is, in addition to the configuration above, to configure each persistent object's (VO's) specific cache policy in its mapping file. For example:

<class name="com.sobey.sbm.model.entitySystem.vo.DataTypeVO" table="Dcm_datatype">
  <cache usage="read-write"/>
  <id name="id" column="TYPEID" type="java.lang.Long">
    <generator class="sequence"/>
  </id>

  <property name="name" column="name" type="java.lang.String"/>
  <property name="DbType" column="DbType" type="java.lang.String"/>
</class>


The key is the <cache usage="read-write"/> element, which takes several options:
read-only, read-write, transactional, etc.
Then pay attention when executing queries: for a conditional query, or a query returning all results, the Session.find() method does not read the cache; cached data is only used when the Query.iterate() method is called.

The get() and load() methods both query the data in the cache.

Different cache frameworks have different configuration methods, but in general the configuration looks like the above.

(Support for transactional caches, and configuration in clustered environments, I will try to cover in subsequent articles.)

3. Summary

In short, configure and use Hibernate effectively according to the business and project situation, playing to its strengths and avoiding its weaknesses. There is no "omnipotent" solution that fits every situation.

These conclusions and recommendations are based on my own test results with Hibernate 2.1.2 and on previous project experience. If there are any mistakes, please point them out :)

On the differences between Hibernate and iBatis, and which performs better

A: 1. Hibernate leans toward operating on objects to accomplish database operations, while iBatis leans toward optimizing the SQL statements themselves.

2. Hibernate queries use its own HQL, while iBatis uses standard SQL statements.

3. Hibernate is relatively complex and harder to learn; iBatis is close to plain SQL and easy to learn.

On performance:

1. If the system processes huge volumes of data with extremely demanding performance requirements, you often need to hand-write high-performance SQL statements or stored procedures. Here iBatis offers better controllability, so its performance is better than Hibernate's.

2. For the same requirement, because Hibernate can generate the statements automatically while iBatis requires hand-written SQL, development with Hibernate is more efficient than with iBatis.
