Hibernate is an open-source object-relational mapping framework. It wraps JDBC in a very lightweight object encapsulation that maps POJOs to database tables, and as a fully automatic ORM framework it generates and executes the SQL statements itself, so that Java programmers can manipulate the database freely in an object-oriented style. Hibernate can be used in any application that would otherwise use JDBC, both in standalone Java client programs and in servlet/JSP web applications; most notably, in a Java EE architecture it can take over the data-persistence duties of CMP entity beans in EJB applications.
Core interfaces and classes

Hibernate has six core classes and interfaces: Session, SessionFactory, Transaction, Query, Criteria, and Configuration. These six are used in any Hibernate development: through them you can not only access persistent objects but also control transactions. They are described in turn below.

Session: the Session interface performs CRUD operations on persistent objects (the CRUD work amounts to communicating with the database and covers many common SQL statements). Note, however, that a Session object is not thread-safe. Also, Hibernate's Session is not the HttpSession of a JSP application: in this article "session" means the Hibernate Session, while the HttpSession object is referred to as the user session.

SessionFactory: the SessionFactory interface is responsible for initializing Hibernate. It acts as a proxy for the data store and is responsible for creating Session objects, following the factory pattern. Note that a SessionFactory is not lightweight: in general a project needs only one, and when several databases must be manipulated, one SessionFactory can be configured per database.

Transaction: the Transaction interface is an optional API; an application may skip it and instead write its own underlying transaction code. Transaction is an abstraction over the actual transaction implementation, which may be a JDBC transaction, a JTA UserTransaction, or even a CORBA transaction. This design lets developers program against a single transactional interface, so their projects can be ported easily between different environments and containers.
Query: the Query interface makes it easy to query the database and the persistent objects; a query can be expressed in HQL or in the native SQL of the underlying database. A Query is typically used to bind query parameters and limit the number of returned records before finally executing the query.

Criteria: the Criteria interface is very similar to Query and lets you create and execute object-oriented, criteria-style queries. Note that a Criteria object is also lightweight and cannot be used outside of its Session.

Configuration: the Configuration class configures Hibernate and starts it up. During startup an instance of Configuration first locates the mapping documents, reads the configuration, and then creates the SessionFactory object. Although Configuration plays only a small role in the overall Hibernate project, it is the first object encountered when Hibernate starts.
Primary key generators

assigned: the primary key value is generated by the user and must be specified before save() is called, otherwise an exception is thrown. Characteristics: the key value is entirely up to the user and independent of the underlying database; the user must maintain the key and assign it before calling Session.save().

hilo: the key is produced by the hi/lo algorithm, which combines a high value and a low value into a value that is unique within the database. The hilo strategy needs an extra table and column to supply the high values; by default the table is hibernate_unique_key and the column is next_hi, and next_hi must contain a record or an error occurs. Characteristics: it requires extra database-table support; it guarantees key uniqueness within one database but not across multiple databases. The hi/lo values are maintained by Hibernate, so the strategy is independent of the underlying database, but the values in the hi/lo table must not be modified by hand, or duplicate-key exceptions will result.

increment: Hibernate generates a new key by incrementing the previous value, which requires that the key column of the underlying database be numeric (long, int, and so on). Keys increase in numeric order with a step of 1. Characteristics: maintained by Hibernate itself and therefore suitable for all databases, but only for a single process accessing the database; it is not safe when several processes update the database concurrently and cannot be used in a clustered environment.
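The hi/lo combination described above can be sketched with a small in-memory model. This is only an illustration under stated assumptions: the class and method names are invented, and a plain counter stands in for the real hibernate_unique_key table that Hibernate reads next_hi from.

```java
// Toy model of the hi/lo algorithm: one "database" round trip buys a whole
// block of (maxLo + 1) keys that are then handed out purely in memory.
public class HiLoDemo {
    private long hi = -1;            // current high value, fetched from the "database"
    private long lo = 0;             // low counter, cycles from 0 to maxLo
    private final int maxLo;         // size of each allocated block
    private long databaseNextHi = 1; // stands in for the next_hi column

    public HiLoDemo(int maxLo) { this.maxLo = maxLo; }

    private long fetchNextHiFromDatabase() { return databaseNextHi++; }

    public long nextKey() {
        if (hi < 0 || lo > maxLo) {  // block exhausted: fetch a new hi value
            hi = fetchNextHiFromDatabase();
            lo = 0;
        }
        return hi * (maxLo + 1) + lo++;
    }
}
```

With maxLo = 9, the first fetched hi value of 1 yields the keys 10 through 19 without touching the database again; only the eleventh key forces a second fetch.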
identity: relies on the underlying database's auto-increment support; different databases implement the growth differently. Characteristics: tied to the underlying database, which must support identity columns, for example auto_increment in MySQL and identity in SQL Server; supported databases include MySQL, SQL Server, DB2, Sybase, and HypersonicSQL. identity needs no intervention from Hibernate or the user and is easy to use, but it makes migrating a program between databases harder.

sequence: requires the underlying database to support sequences, as Oracle does; other databases with sequence support include DB2, PostgreSQL, and SAP DB. When porting a program between databases, especially from one that supports sequences to one that does not, the configuration file must be changed.

native: chooses among the identity, sequence, and hilo strategies automatically according to the underlying database. Because Hibernate picks the mapping based on the database, programs are easy to migrate; this strategy is the choice when a project must run against several databases.

uuid: generates the key with a 128-bit UUID algorithm, which guarantees uniqueness in a networked environment and therefore across different databases and servers. Characteristics: it guarantees key uniqueness in the database, but the generated keys take more storage space.

foreign: used in one-to-one relationships, where the key value is taken from the associated object.
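The uuid strategy's trade-off, uniqueness with no database round trip but a wider key, can be seen with the standard library. Note one assumption: java.util.UUID.randomUUID() produces a random type-4 UUID, whereas Hibernate's own "uuid" generator builds its value from the IP address, JVM startup time, and so on; the principle is the same.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// A 128-bit key rendered as 32 hex digits: practically collision-free,
// but it occupies more storage than a numeric key.
public class UuidKeyDemo {
    public static String newKey() {
        return UUID.randomUUID().toString().replace("-", "");
    }

    public static void main(String[] args) {
        Set<String> keys = new HashSet<>();
        for (int i = 0; i < 10_000; i++) keys.add(newKey());
        System.out.println("distinct keys: " + keys.size());
    }
}
```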
guid: the GUID strategy uses a special algorithm that guarantees the uniqueness of the generated key; it supports SQL Server and MySQL.

How Hibernate works:
1. new Configuration().configure() reads and parses the hibernate.cfg.xml configuration file.
2. The <mapping resource="com/xx/user.hbm.xml"/> entries in hibernate.cfg.xml point to the mapping files, which are read and parsed.
3. config.buildSessionFactory() obtains the SessionFactory.
4. sessionFactory.openSession() obtains a Session.
5. session.beginTransaction() opens a transaction.
6. The persistent operations are performed.
7. session.getTransaction().commit() commits the transaction.
8. The Session is closed.
9. The SessionFactory is closed.

Advantages of Hibernate:
1. It encapsulates JDBC and removes a great deal of repetitive code.
2. It simplifies the coding of the DAO layer and makes development more object-oriented.
3. It is portable and supports many databases: switching databases only requires changing the configuration file, not the Hibernate code.
4. It supports transparent persistence: Hibernate operates on plain Java classes (POJOs) that implement no special interface, so it is non-intrusive and therefore a lightweight framework.

Hibernate lazy loading: get() does not support lazy loading, load() does. Hibernate 2 supports deferred loading of entity objects and collections; Hibernate 3 adds deferred loading of properties. Lazy loading means that when Session.load(User.class, 1) or Session.createQuery() refers to an object or property, that object or property is not brought into memory; it is loaded only when the program actually operates on the data. This saves memory and thereby improves server performance.

Hibernate cache mechanism. First-level cache: the session-level cache, also called the transaction-level cache, caches entities only; its lifecycle coincides with that of the Session. It cannot be managed and needs no explicit call.
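The start-up steps above begin from a hibernate.cfg.xml file; a minimal sketch of one is given below. Everything in it is a placeholder assumption except the mapping path, which follows the file named in the text: the driver, URL, credentials, and dialect would vary by project.

```xml
<!-- Hypothetical hibernate.cfg.xml matching the start-up steps above. -->
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/test</property>
    <property name="hibernate.connection.username">root</property>
    <property name="hibernate.connection.password">secret</property>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
    <!-- step 2: the mapping file that Configuration reads and parses -->
    <mapping resource="com/xx/user.hbm.xml"/>
  </session-factory>
</hibernate-configuration>
```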
Second-level cache: the SessionFactory-level cache, also called the process-level cache, is implemented with a third-party plugin. It also caches entities, its lifecycle coincides with that of the SessionFactory, and it can be managed. First configure the plugin (here EhCache) by adding <property name="hibernate.cache.use_second_level_cache">true</property> to hibernate.cfg.xml, then enable it explicitly in the mapping file with <cache usage="read-only"/>. Query cache: a cache, built on the second-level cache, for ordinary properties; its entries expire as soon as an associated table is modified. The query cache must be enabled by hand in the program with query.setCacheable(true).

Optimizing Hibernate:
1. Use bidirectional one-to-many associations and maintain them from the many side.
2. Avoid one-to-one associations; prefer many-to-one.
3. Configure the object cache rather than the collection cache.
4. Keep tables narrow; do not fear many table associations, since the second-level cache backs them up.

Relationships between classes: association and inheritance. Hibernate offers three inheritance-mapping strategies: one table for an entire class hierarchy, one table per subclass, and one table per concrete class.

Cache management

Hibernate provides two levels of caching. The first level is the Session-level cache, a transaction-scoped cache managed by Hibernate that normally needs no intervention. The second level is the SessionFactory-level cache, a process-scoped or cluster-scoped cache that can be configured and changed and can be loaded and unloaded dynamically. Hibernate also provides a query cache for query results, which depends on the second-level cache.
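The two configuration steps just described, switching the second-level cache on and opting a class into it, can be sketched as the following fragments. The class name, table name, and file layout are illustrative assumptions; the property and element names are Hibernate 3's.

```xml
<!-- hibernate.cfg.xml: enable the second-level cache and name the provider -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>

<!-- user.hbm.xml: opt a single class in, with a concurrency strategy -->
<class name="com.xx.User" table="users">
  <cache usage="read-only"/>
  <id name="id"><generator class="native"/></id>
</class>
```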
The two cache levels compare as follows:

| | First-level cache | Second-level cache |
|---|---|---|
| Cached data | Bulk data of persistent objects | Bulk data of persistent objects |
| Scope | Transaction scope: each transaction has its own first-level cache | Process or cluster scope: the cache is shared by all transactions in the same process or cluster |
| Concurrent access policy | Not needed: since each transaction owns its cache, no concurrency problem arises | Required: several transactions may access the same cached data simultaneously, so an appropriate policy must guarantee the chosen transaction isolation level |
| Data expiration policy | None: objects in the first-level cache never expire unless the application explicitly empties the cache or evicts specific objects | Required: for example the maximum number of objects in the memory cache, the maximum time an object may stay in the cache, and the maximum idle time allowed |
| Physical medium | Memory | Memory and hard disk: objects are first stored in the memory-based cache, and when their number reaches the limit set by the expiration policy the overflow is written to the disk-based cache |
| Software implementation | Included in Hibernate's Session implementation | Provided by third parties; Hibernate supplies only a cache adapter (CacheProvider) for integrating a specific cache plugin |
How caching is enabled: whenever the application saves, updates, deletes, loads, or queries database data through the Session interface, Hibernate enables the first-level cache and copies the database data into it as objects. For bulk updates and bulk deletes, if you do not want the first-level cache involved, you can bypass the Hibernate API and issue the statements directly through the JDBC API. The second-level cache is configured by the user at the granularity of a single class or collection. If instances of a class are read frequently but seldom modified, a second-level cache is worth considering; only when a class or collection has a second-level cache configured will Hibernate add its instances to it at run time.

How the user manages the cache: the physical medium of the first-level cache is memory, and since memory is limited, the number of loaded objects must be restricted by suitable retrieval strategies and retrieval methods. The Session's evict() method can explicitly remove specific objects from the cache, but this method is not recommended. The second-level cache can use both memory and disk, so it can hold large volumes of data, and the maxElementsInMemory property of the expiration policy controls how many objects stay in memory. Managing the second-level cache has two main aspects: selecting the persistent classes that should use it and setting an appropriate concurrent access policy for each; and selecting the cache adapter and setting an appropriate data expiration policy.

First-level cache: when the application calls the Session's save(), update(), saveOrUpdate(), get(), or load() methods, or calls the Query interface's list(), iterate(), or filter() methods, Hibernate adds the resulting objects to the first-level cache if they are not already there.
When the cache is flushed, Hibernate synchronizes the database with the state of the objects in the cache. The Session provides two methods for the application to manage the cache: evict(Object obj) removes the persistent object given as the argument from the cache, and clear() empties all persistent objects from the cache.

Second-level cache. 3.1. Hibernate's second-level caching strategy generally works like this:
1) A conditional query always issues a statement of the form select * from table_name where ... (selecting all fields), querying the database and obtaining all the data objects at once.
2) All the obtained data objects are put into the second-level cache, keyed by id.
3) When Hibernate later accesses a data object by id, it first checks the Session cache; if a second-level cache is configured it checks there next; only if neither holds the object does it query the database, after which the result is placed in the cache by id.
4) The cache is updated whenever data is deleted, updated, or added.
Hibernate's second-level cache is thus a caching strategy for id-based queries and has no effect on conditional queries; for those, Hibernate provides the query cache.

3.2. What kind of data suits the second-level cache?
1) Data that is rarely modified.
2) Data that is not critical, where an occasional concurrency anomaly is acceptable.
3) Data that will not be accessed concurrently.
4) Reference data, i.e. constant data supplied for reference: it has a limited number of instances, its instances are referenced by instances of many other classes, and the instances are rarely or never modified.

3.3. What kind of data does not suit the second-level cache?
1) Data that is modified frequently.
2) Financial data, where concurrency anomalies are absolutely forbidden.
3) Data shared with other applications.
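The evict()/clear() semantics and the look-in-cache-first rule described above can be modelled with a plain map. This is a toy illustration, not Hibernate's implementation: all names are invented, and a counter of simulated SELECTs stands in for real database round trips.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the session-level (first-level) cache: entities are keyed
// by id, a repeated load() of the same id never reaches the "database",
// and evict()/clear() drop objects as the Session methods above do.
public class FirstLevelCacheDemo {
    private final Map<Long, Object> cache = new HashMap<>();
    public int databaseHits = 0; // counts simulated SELECT statements

    public Object load(long id) {
        return cache.computeIfAbsent(id, k -> {
            databaseHits++;           // only a cache miss hits the database
            return "entity#" + k;     // stands in for a row mapped to a POJO
        });
    }

    public void evict(long id) { cache.remove(id); } // like Session.evict(obj)
    public void clear() { cache.clear(); }           // like Session.clear()
}
```

After an evict() or clear(), the next load() of the same id must go back to the database, which is exactly why clearing matters in the bulk-operation scenarios discussed later.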
3.4. Common cache plugins. Hibernate's second-level cache is a plugin; the following are a few common ones:
- EhCache: usable as a process-scoped cache; the physical medium can be memory or disk; it supports Hibernate's query cache.
- OSCache: usable as a process-scoped cache; the physical medium can be memory or disk; it provides a rich set of expiration policies and supports Hibernate's query cache.
- SwarmCache: usable as a cluster-scoped cache, but it does not support Hibernate's query cache.
- JBossCache: usable as a cluster-scoped cache; it supports the transactional concurrency strategy and supports Hibernate's query cache.
A comparison of these four plugins is shown in Table 9-3.

Table 9-3 Comparison of the four cache plugins
| Cache plugin | read-only | nonstrict-read-write | read-write | transactional |
|---|---|---|---|---|
| EhCache | Yes | Yes | Yes | |
| OSCache | Yes | Yes | Yes | |
| SwarmCache | Yes | Yes | | |
| JBossCache | Yes | | | Yes |
Their provider classes are listed in Table 9-4.

Table 9-4 Cache providers
| Cache plugin | Provider (CacheProvider) |
|---|---|
| Hashtable (for testing only) | org.hibernate.cache.HashtableCacheProvider |
| EhCache | org.hibernate.cache.EhCacheProvider |
| OSCache | org.hibernate.cache.OSCacheProvider |
By default, Hibernate uses EhCache as its JVM-level cache. Users can specify another cache strategy by setting the hibernate.cache.provider_class property in the Hibernate configuration file; the configured class must implement the org.hibernate.cache.CacheProvider interface.

3.5. The main steps for configuring the second-level cache:
1) Select the persistent classes that need a second-level cache and set an appropriate concurrent access policy for each. This is the step that deserves the most careful thought.
2) Select an appropriate cache plugin and edit that plugin's configuration file.

Lazy loading

Hibernate's object-relational mapping offers both deferred and immediate object initialization. Immediate (non-lazy) loading reads an object together with all the objects related to it. This can result in hundreds, if not thousands, of SELECT statements being executed when one object is read; when bidirectional relationships are used, the problem can even cause the entire database to be read during initialization. Of course, one could painstakingly examine the relationship of each object to every other object and remove the most expensive ones by hand, but in the end that would cost us the convenience we hoped to gain from an ORM tool.

The obvious remedy is the lazy-loading mechanism Hibernate provides. Under this initialization policy, the objects on the other side of a one-to-many or many-to-many relationship are read only when one of them is actually called for. The process is transparent to the developer, and only a few database requests are issued, so it yields a significant performance gain. One drawback of the technique is that lazy loading requires the Hibernate session to remain open while the object is in use, which becomes a major problem when the persistence layer is abstracted behind the DAO pattern.
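The load-only-when-called-for idea behind lazy loading can be shown with a hand-written holder. This is only a sketch of the principle: Hibernate does it with runtime-generated proxies, and every name below is invented, with a counter standing in for real SELECTs.

```java
import java.util.function.Supplier;

// Minimal illustration of lazy loading: the related data is not fetched
// until somebody actually asks for it.
public class LazyDemo {
    public static int selectCount = 0; // simulated SELECT statements issued

    public static class LazyRef<T> {
        private final Supplier<T> loader;
        private T value;               // stays null until first access
        LazyRef(Supplier<T> loader) { this.loader = loader; }
        public T get() {
            if (value == null) value = loader.get(); // load on first use only
            return value;
        }
    }

    public static LazyRef<String> ordersOf(String customer) {
        // building the reference issues no query at all
        return new LazyRef<>(() -> { selectCount++; return "orders of " + customer; });
    }
}
```

Note that the loader must still be able to run when get() is finally called, which is the plain-Java analogue of the drawback above: the Hibernate session must still be open when the lazy object is first touched.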
To abstract the persistence mechanism completely, all database logic, including opening and closing sessions, must be kept out of the application layer. Most commonly, DAO implementation classes realizing simple interfaces encapsulate the database logic entirely. A quick but clumsy solution is to abandon the DAO pattern and move the database connection logic up into the application layer. That may work for small applications, but in large systems it is a serious design flaw that hinders the scalability of the system.

Lazy loading in the web tier. Fortunately, the Spring Framework offers a convenient way to combine Hibernate lazy loading with the DAO pattern. For a web application, Spring provides OpenSessionInViewFilter and OpenSessionInViewInterceptor; we are free to choose either class, as both implement the same functionality. The only difference is that the interceptor runs inside the Spring container and is configured in the web application context, while the filter runs in front of Spring and is configured in web.xml. Whichever is used, it opens a Hibernate session when a request comes in and binds that session to the current (database) thread. Once bound to the thread, the open session can be used transparently in the DAO implementation classes, and it stays open while the view lazily loads value objects from the database. Once this logical view is complete, the session is closed in the filter's doFilter method or the interceptor's postHandle method. To implement this, add the following to web.xml: <filter><filter-name>hibernateFilter</filter-name><filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class></filter> <filter-mapping><filter-name>hibernateFilter</filter-name><url-pattern>*.do</url-pattern></filter-mapping>

Performance optimization

Anyone who has used Hibernate has probably run into performance problems: for the same functionality, a tenfold performance gap between Hibernate and plain JDBC is normal, and if it is not addressed early it may well affect the overall progress of the project. In general, Hibernate performance tuning considers the following: database design tuning; HQL optimization; correct use of the API (for example choosing different collection and query APIs for different kinds of business); the main configuration parameters (logging, query cache, fetch_size, batch_size, and so on); mapping-file optimization (id generation strategy, second-level cache, lazy loading, association tuning); management of the first-level cache; the many strategies peculiar to the second-level cache; and the transaction control strategy.

Database design: a) reduce the complexity of associations; b) do not use composite primary keys; c) choose the id generation mechanism carefully, since different databases offer somewhat different mechanisms; d) allow appropriate redundancy and do not pursue high normal forms excessively.

HQL optimization: setting aside Hibernate's own caching mechanisms, HQL optimization techniques are the same as ordinary SQL optimization techniques, and experience with them is easy to find on the web.

Main configuration: a) the query cache, unlike the caches discussed below, caches HQL statements, so that cached data can be reused when exactly the same statement is executed again. In a transactional system, however, where data changes frequently and the odds of repeating an identical query are small, it can be counterproductive: it consumes considerable system resources yet rarely comes into play.
b) fetch_size is similar to the corresponding JDBC parameter; larger is not necessarily better, and it should be set according to the characteristics of the business. c) batch_size: the same applies. d) In a production system, remember to turn off SQL statement printing.

Caching: a) Database-level caching: this level is the most efficient and the safest, but different databases offer different degrees of control; in Oracle, for example, a table can be pinned in the cache when it is created. b) The Session cache: effective within one Hibernate Session. This level is mostly managed automatically by Hibernate, but it provides methods to clear its contents, which matter in bulk insert/update operations. For example, inserting 100,000 records at once in the usual way is likely to end in an OutOfMemoryError, and the cache must be cleared by hand with Session.evict and Session.clear. c) The application cache: effective within one SessionFactory, and therefore the priority target of optimization, with correspondingly many strategies to consider. Before data is put into this cache level, several preconditions should be checked: i. the data will not be modified by a third party (for example, is another application also changing it?); ii. the data is not too large; iii. the data is not updated frequently (otherwise caching may backfire); iv. the data is queried frequently; v. the data is not critical (such as money or security data).
Several forms of caching can be configured in the mapping file: read-only (suitable for static or historical data that rarely changes), nonstrict-read-write, read-write (the most general form, and efficient), and transactional (requires JTA and is supported by few cache products). d) Distributed caching: configured the same way as c), only the cache product differs, OSCache and JBoss Cache being the choices in most projects. Be conservative about using it in a cluster, especially in critical transactional systems; in a clustered environment, using only database-level caching is the safest.

Lazy loading: a) entity lazy loading, implemented with dynamic proxies; b) collection lazy loading, for which Hibernate provides its own Set/List implementations and good support; c) property lazy loading.

Method selection: a) For the same task Hibernate offers several choices, which can differ in performance and code. A typical case: returning 100,000 records at once as a List/Set/Bag/Map for display is likely to exhaust memory, while a cursor-based result set (ScrollableResults) or an Iterator has no such problem. b) Of the Session's load() and get() methods, the former uses the second-level cache while the latter does not. c) Query versus list()/iterator(): examined closely (with Spring, the find and iterate methods of HibernateTemplate), they reveal many interesting differences, the main ones being: i. list() can only exploit the query cache (which is of little use in a transactional system) and cannot exploit the second-level cache for single entities, although the objects it retrieves are written into the second-level cache; on the other hand, it usually generates fewer SQL statements, in many cases just one (when there are no associations).
ii. iterate() can exploit the second-level cache. For a query it first fetches from the database the ids of all qualifying records, then looks each one up in the cache, and issues a statement against the database for every record the cache lacks. It is therefore easy to see that when the cache holds no qualifying records, iterate() produces N+1 SQL statements (N being the number of matching records). iii. Combining iterate() with the cache-management API can solve the memory problem nicely when querying huge volumes of data, for example: while (it.hasNext()) { YouObject object = (YouObject) it.next(); session.evict(object); sessionFactory.evict(YouObject.class, object.getId()); }. With the list() method the same query would probably end in an OutOfMemoryError.

Collection selection: described in detail in section "19.5. Understanding Collection performance" of the Hibernate 3.1 documentation.

Transaction control: the main performance-relevant aspects of transactions are the choice of transaction mode, the transaction isolation level, and the choice of locks. a) Transaction mode: if no more than one transaction manager is involved, there is no need for JTA; JDBC transaction control suffices. b) Isolation level: see the standard SQL transaction isolation levels. c) Locks: pessimistic locking (usually implemented by the underlying transaction manager) is inefficient for long transactions but safe. Optimistic locking (usually implemented at the application level) can be realized in Hibernate with a version field; evidently, optimistic locking fails if several applications manipulate the same data without sharing the same optimistic locking mechanism.
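The version-field idea behind optimistic locking can be sketched in a few lines. This is a toy model under stated assumptions: the class is invented, and the boolean return stands in for Hibernate's behaviour of putting the version column in the UPDATE's WHERE clause and throwing StaleObjectStateException when no row matches.

```java
// Sketch of optimistic locking: an update succeeds only if the version
// the caller read is still current; a concurrent commit in between makes
// the read stale and the update is rejected.
public class OptimisticLockDemo {
    private int version = 0;
    private String data = "initial";

    public synchronized boolean update(int readVersion, String newData) {
        if (readVersion != version) return false; // someone committed in between
        data = newData;
        version++;
        return true;
    }

    public synchronized int currentVersion() { return version; }
    public synchronized String currentData() { return data; }
}
```

The failure mode named in the text follows directly: a second application that writes the row without checking (or incrementing) the version defeats the scheme, which is why all writers must share the same mechanism.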
Different data therefore calls for different strategies; as in many other cases, we are seeking a balance between efficiency and safety or accuracy. In the end, optimization is never a purely technical problem: you need sufficient knowledge of your application and of the characteristics of the business.

Batch operations: even with plain JDBC, there is a significant efficiency difference between batched and unbatched data updates, and setting batch_size makes Hibernate support bulk operations. Consider a bulk delete such as the statement "delete Account": Hibernate first fetches the ids of all the accounts and then deletes them one by one, mainly in order to keep the second-level cache consistent, so the efficiency is low. Later versions added bulk delete/update, but that still does not solve the cache-maintenance problem. In other words, because of second-level cache maintenance, Hibernate's batch performance remains less than satisfactory.
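The batch_size effect described above can be modelled without a database. This is purely illustrative: a counter of flushes stands in for JDBC's executeBatch() round trips, and in real Hibernate the grouping is driven by the hibernate.jdbc.batch_size setting rather than by hand-written code.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of batching: inserts are buffered and flushed to the "database"
// in groups, so 100 rows cost far fewer round trips than 100 single-row
// statements would.
public class BatchDemo {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();
    public int roundTrips = 0; // simulated executeBatch() calls

    public BatchDemo(int batchSize) { this.batchSize = batchSize; }

    public void insert(String row) {
        pending.add(row);
        if (pending.size() == batchSize) flush(); // full batch: send it
    }

    public void flush() { // send whatever is buffered, if anything
        if (!pending.isEmpty()) { roundTrips++; pending.clear(); }
    }
}
```

With a batch size of 20, inserting 100 rows costs 5 round trips instead of 100, which is the whole point of setting batch_size.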