Oracle JDBC Memory Management - Oracle White Paper, August 2009

Original: http://www.oracle.com/technetwork/database/enterprise-edition/memory.pdf

Introduction

The Oracle JDBC driver can use a large amount of memory. This is a conscious design choice that trades the use of large amounts of memory for improved performance, and in most cases it has proven to be a good choice for most users. Some users, however, have experienced problems with the large amount of memory the JDBC driver uses. This white paper is written for those users. If the performance of your application is acceptable, there is no reason to worry about memory use. If your application does not achieve the performance you expect and it uses more memory than you would expect, read on.

The Oracle JDBC drivers for 7.3.4, 8i, and 9i used as little memory as possible. Unfortunately, this resulted in an unacceptable performance penalty. In 10g the driver architecture was redesigned to improve performance, and one of the most important changes in that redesign was how the driver uses memory. The development team made a deliberate decision to trade memory for performance. As a result, the 10g driver is, on average, about 30% faster than the 9i driver. Of course, your measurements may vary.

Since memory is relatively cheap, and physical memory sizes had grown substantially by the time the 10g driver was released, most users benefited from this memory-for-performance trade-off. Some users, mostly those with very large-scale applications, have seen performance problems caused by excessive heap size, garbage-collector thrashing, and even OutOfMemoryExceptions. In subsequent releases the development team has worked to address these issues, both by improving the way the driver uses memory and by giving users additional control over memory use. This white paper describes how the driver uses memory, how application characteristics affect memory use, and what users can do to better manage memory use and improve application performance.

Note: In the rest of this paper, the word "driver" refers to the Oracle JDBC drivers for 10g and later releases. Where another release is meant, it is named explicitly.

Where Does It All Go?

The 10g Oracle JDBC driver has a larger and more complex class hierarchy than previous versions. Objects of these classes store more information and therefore require more memory. This does increase memory use, but it is not the real problem. The real problem is the buffers used to store query results. Each statement (including PreparedStatement and CallableStatement) holds two buffers: a byte[] (byte array) and a char[] (character array). The char[] stores the row data of all character-type columns, such as CHAR, VARCHAR2, and NCHAR. The byte[] stores the row data of all other types. These buffers are allocated when the SQL is parsed, usually the first time the statement is executed, and the statement holds them until it is closed.

Because the buffers are allocated when the SQL is parsed, their size depends not on the actual length of the row data returned by the query, but on the maximum possible length. When the SQL is parsed, the type of every column is known, and from that information the driver can compute the maximum amount of memory needed to store each column. The driver also has a fetchSize property, the number of rows retrieved per fetch. With the size of each column and the number of rows, the driver can compute the absolute maximum size of the data returned in a single fetch. That is the size of the allocated buffers.

Some large types, such as LONG and LONG RAW, are handled differently because they are too large to store directly in the buffers. If a query result contains a LONG or LONG RAW column, the driver sets fetchSize to 1, which largely avoids the memory problems described here. These types are not discussed further.

Character data is stored in the char[] buffer. Each character in Java occupies two bytes. A VARCHAR2(10) column can contain at most 10 characters, which is 10 Java chars, or 20 bytes per row. A VARCHAR2(4000) column occupies 8K bytes per row. What matters is the defined size of the column, not the size of the actual data: a VARCHAR2(4000) column that contains only NULLs still requires 8K bytes per row. The buffers are allocated before the driver sees any query results, so the driver must allocate enough memory for the largest possible row. A column defined as VARCHAR2(4000) can contain up to 4000 characters, so the buffer must be allocated to hold 4000 characters, even though the actual data may never be that large.

BFILE, BLOB, and CLOB values are stored as locators. A locator can be up to 4K bytes, so the byte[] buffer must reserve at least 4K bytes per row for each BFILE, BLOB, and CLOB column. RAW columns can contain up to 4K bytes. Other types need far fewer bytes; a reasonable approximation is to assume about 22 bytes per row for each column of any other type.

Example

CREATE TABLE TAB (ID NUMBER(10), NAME VARCHAR2(40), DOB DATE)

ResultSet r = stmt.executeQuery("SELECT * FROM TAB");

When the driver executes the executeQuery method, the database parses the SQL and returns a result set with three columns: a NUMBER(10), a VARCHAR2(40), and a DATE. The first column needs about 22 bytes per row. The second needs 40 characters per row. The third needs about 22 bytes per row. So one row needs 22 + (40 * 2) + 22 = 124 bytes; remember that each character takes two bytes. The default fetchSize is 10 rows, so the driver allocates a char[] of 10 * 40 = 400 characters (800 bytes) and a byte[] of 10 * (22 + 22) = 440 bytes, 1240 bytes in total. 1240 bytes is not going to cause a memory problem, but some query results are much bigger.

For something close to the worst case, consider a query that returns 255 VARCHAR2(4000) columns. Each column needs 8K bytes per row; multiplied by 255 columns, that is 2040K bytes, about 2MB, per row. If fetchSize is set to 1000 rows, the driver will try to allocate a 2GB char[]. That would be terrible.
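Both cases above can be checked with a short standalone calculation. This is a sketch using the sizing rules from this paper; the method names are ours for illustration, and the ~22-byte figure is the paper's approximation, not an exact driver constant.

```java
// Estimates the row-data buffers the driver allocates, per this paper's rules.
public class BufferEstimate {
    // char[] size in bytes: fetchSize rows * defined chars * 2 bytes per Java char.
    static long charBufferBytes(int fetchSize, int definedChars) {
        return (long) fetchSize * definedChars * 2;
    }

    // byte[] size in bytes: fetchSize rows * ~22 bytes per non-character column.
    static long byteBufferBytes(int fetchSize, int nonCharCols) {
        return (long) fetchSize * nonCharCols * 22;
    }

    public static void main(String[] args) {
        // SELECT * FROM TAB: NUMBER(10), VARCHAR2(40), DATE; default fetchSize 10.
        long small = charBufferBytes(10, 40) + byteBufferBytes(10, 2);
        System.out.println(small);   // 1240 bytes: no problem

        // Worst case: 255 VARCHAR2(4000) columns, fetchSize 1000.
        long worst = charBufferBytes(1000, 255 * 4000);
        System.out.println(worst);   // 2040000000 bytes, about 2GB
    }
}
```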

As a Java developer, the reader can no doubt think of other ways to allocate and use memory. Rest assured, the Oracle JDBC development team thought of them too. Several alternatives were tested before this one was chosen. Without going into the details, there are good reasons for this approach; the choice was made purely for driver performance.

Managing the Buffer Sizes

Users can manage the size of these buffers in three ways:

1. Define tables carefully

2. Code queries carefully

3. Set fetchSize carefully

Defining a column as VARCHAR2(4000) when VARCHAR2(20) would do makes a very large difference. A VARCHAR2(4000) column requires 8K bytes per row; a VARCHAR2(20) column requires only 40 bytes per row. If the column never actually holds more than 20 characters, defining it as VARCHAR2(4000) wastes the buffer space the driver allocates for it.

Using SELECT * when only a few columns are needed affects more than buffer size; it hurts performance generally. It takes longer to fetch the row content, convert it, send it over the network, and convert it to its Java representation. Returning dozens of columns when only a few are needed forces the driver to allocate large buffers to hold results the application never uses.

The main tool for controlling memory use is fetchSize. Although 2MB is fairly large, most Java environments can allocate a buffer of that size without trouble. Even the worst case above, 255 VARCHAR2(4000) columns, would not be a problem in most applications with fetchSize set to 1.

The first step in addressing a memory-use problem is to review the SQL. Estimate the size of one row for each query and look at the fetchSize. If the rows are very large, consider whether you could fetch fewer columns or modify the schema to store the data more compactly. Finally, set fetchSize to keep the buffers reasonably sized. What is "reasonable" depends on the details of the application. Oracle suggests a fetchSize of no more than 100; some applications may tolerate larger values, and for some queries even 100 may be too large.
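The steps above can be sketched as a small helper that picks a fetchSize from an estimated row size and a buffer budget, then caps it at the suggested 100. The helper is ours, for illustration only, not a driver API; in real code the result would be passed to PreparedStatement.setFetchSize.

```java
// Picks a fetchSize so the row buffers stay within a memory budget,
// capped at the fetchSize of 100 suggested in this paper.
public class FetchSizePlanner {
    // bytesPerRow: estimated max bytes per row (char cols * 2 + ~22 per other col).
    static int fetchSizeFor(int bytesPerRow, int bufferBudgetBytes) {
        int rows = Math.max(1, bufferBudgetBytes / bytesPerRow);
        return Math.min(100, rows);
    }

    public static void main(String[] args) {
        // A row with one VARCHAR2(4000) column costs ~8000 bytes.
        System.out.println(fetchSizeFor(8000, 100_000));  // 12 rows within a ~100KB budget
        // The small 124-byte row from the earlier example hits the cap of 100.
        System.out.println(fetchSizeFor(124, 100_000));   // 100
        // Then, in JDBC code: preparedStatement.setFetchSize(fetchSizeFor(...));
    }
}
```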

Note: Oracle provides the method OracleStatement.defineColumnType, which can be used to reduce the defined size of an overly large column. When a size argument is provided, it overrides the size defined for the column in the schema. This offers a way to address the problem when you cannot freely change the schema. With the Thin driver, you can call defineColumnType on only the problem column. With the OCI driver, you must call it on either all of a statement's columns or none of them. If you can fix the schema, that is of course the better choice.

A single statement is not the problem. Except in pathological cases like 255 VARCHAR2(4000) columns or setFetchSize(100000), a single statement is unlikely to cause memory problems. In practice, problems appear only when a system has hundreds or even thousands of Statement objects. A very large system may have hundreds of connections open at once, each with one or two open statements. Such a system will run on a machine with very large physical memory, and with a reasonable configuration, even a very large system with hundreds of open statements is unlikely to have serious memory problems. Yes, the driver uses a lot of memory, but the memory is there to be used. In practice, even very large systems can avoid memory problems from open statements alone.

Large systems often execute the same SQL many times. For performance, reusing a PreparedStatement rather than creating a new one for each execution helps substantially. A large system can therefore have many PreparedStatements, one (or more) for each distinct SQL string. Most large systems are built on a modular framework such as WebLogic. Independent components within the framework create the PreparedStatements they need, which works against reusing them. To resolve this, frameworks provide statement caches. With a statement cache, a single connection can easily hold a hundred or more PreparedStatements in memory. Multiply that by hundreds of connections and there is real potential for memory problems.

The Oracle JDBC drivers address this with a built-in statement cache, the Implicit Statement Cache. (There is also an Explicit Statement Cache, which is not discussed here.) The Implicit Statement Cache is transparent to the user. User code calls prepareStatement exactly as if creating a new object. If the driver can retrieve a matching PreparedStatement from the cache, it returns it; otherwise it creates a new one. User code cannot distinguish, semantically, between a newly created PreparedStatement and a reused one.

From a performance perspective, retrieving a statement from the cache is faster than creating a new one, and executing a cached statement is much faster, because the driver can reuse much of the state from the statement's previous executions.

Because the Implicit Statement Cache knows the internal structure of Oracle JDBC PreparedStatements, it can manage that structure for best performance. In particular, it can manage the char[] and byte[] buffers. Different driver versions manage the buffers in different ways, as Oracle's understanding of real application requirements has improved. Regardless of version, however, the driver can manage the buffers only when a PreparedStatement is returned to the Implicit Statement Cache by being closed. If a PreparedStatement is not closed, the driver has no way to know that the statement will not be reused immediately, and so can do nothing to manage its buffer use.

Statement Batching and Memory Use

The row-data buffers are not the only large buffers the Oracle JDBC drivers create. They also create large buffers for the PreparedStatement parameters sent to the database. Applications usually read much more data than they write, and they write in smaller chunks, so the parameter buffers are usually much smaller than the row-data buffers. However, (mis)using statement batching can force the driver to create large parameter buffers.

When an application calls PreparedStatement.setXXX to set parameter values, the driver must store the values. This takes very little memory: per value, a reference for array and object types (such as String), 8 bytes for long and double, and 4 bytes for other types. When the PreparedStatement is executed, the driver must send the values to the database as SQL data types rather than Java types. It creates a byte[] and a char[] buffer, converts the parameter data to its SQL representation, stores it in those buffers, and sends the bytes over the network. Because the driver knows the actual size of the data before allocating the buffers, it can create the smallest buffers possible. If a statement is executed repeatedly, the driver tries to reuse the buffers, allocating larger ones only when new values require them. Since the buffers are only as large as the data requires, executing a single statement will not cause memory exhaustion. With statement batching, though, things are different.

Both standard JDBC batching and Oracle-style batching execute a statement multiple times in one operation. To do this, the driver must send the parameters for all executions of the PreparedStatement at once, so it must convert the data for all of those parameter sets to SQL types and store them in the buffers. The number of executions in a batch, the batch size, plays the same role here that fetchSize plays in a query. While the parameter conversion for a single execution is unlikely to cause memory problems, a very large batch size can.
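One way to keep the batch-parameter buffers bounded is to flush with executeBatch every few hundred rows instead of accumulating the whole batch. The sketch below shows the chunking arithmetic; the helper and the 500-row threshold are ours for illustration, and the JDBC calls that would surround it appear in comments.

```java
// Bounds the parameter buffers by flushing a JDBC batch in fixed-size chunks.
public class BatchChunking {
    // Number of executeBatch() calls needed for totalRows at a given batch size.
    static int flushCount(int totalRows, int batchSize) {
        return (totalRows + batchSize - 1) / batchSize;  // ceiling division
    }

    public static void main(String[] args) {
        // In JDBC code, per row: ps.setXXX(...); ps.addBatch();
        // then every 500 rows:   ps.executeBatch();
        // and once at the end:   if any rows remain, ps.executeBatch();
        System.out.println(flushCount(10_000, 500));  // 20 flushes
        System.out.println(flushCount(10_001, 500));  // 21 (final partial batch)
    }
}
```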

In practice this is an uncommon problem, and it appears only when the batch size is set unreasonably large, in the tens of thousands. Calling executeBatch every few hundred rows resolves it.

Memory Management in Specific Versions

This section describes how different versions of the Oracle JDBC driver manage buffers and how users can adjust the driver for maximum performance.

Note: When discussing the details of memory management, it is easier to ignore the fact that there are two buffers, a byte[] and a char[]. In the rest of this section, "buffer" may refer to either or both of them.

Although Java's memory management is quite good, allocating large buffers is expensive. It is not the cost of the actual malloc; that is very fast. The problem is the Java language requirement that all buffers be filled with zeros. So not only must a large buffer be allocated, it must also be zero-filled. Zero-filling requires writing every byte of the buffer. Modern processors handle small buffers well thanks to their multi-level data caches, but zero-filling a large buffer overwhelms the data caches and proceeds at the speed of main memory, well below the processor's maximum speed. Performance testing has repeatedly shown that buffer allocation is a dominant performance cost in the driver. Balancing the time and memory costs of allocating buffers against reusing them is the crux of the problem.

Oracle Database 10g Oracle JDBC Driver

The original 10g driver used a simple approach to memory management, aimed at maximum performance: when a PreparedStatement was first executed, the necessary byte[] and char[] buffers were allocated, and that was that. The buffers were freed only when the PreparedStatement itself was freed. The Implicit Statement Cache did nothing to manage the buffers; every PreparedStatement in the cache held its allocated byte[] and char[] buffers, ready for immediate reuse. The only ways to tune memory use in this version were to set the fetchSize, design the schema carefully, and code the SQL queries carefully. The original 10g driver was quite fast, but it could have memory problems, including OutOfMemoryExceptions.

Oracle Database 10.2.0.4 Oracle JDBC Driver

The 10.2.0.4.0 driver added a connection property to address the memory problems that appeared with the original 10g driver. The property takes an all-or-nothing approach: if it is set, a PreparedStatement releases its buffers when it is returned to the Implicit Statement Cache, and the buffers are reallocated when the statement is retrieved from the cache. This simple approach greatly reduces memory use, but at a substantial performance cost because, as described above, allocating buffers is expensive.

This connection property is

oracle.jdbc.freeMemoryOnEnterImplicitCache

Its value is a boolean string, "true" or "false". If set to "true", the buffers are released when a PreparedStatement is returned to the cache. If set to "false" (the default), the buffers are retained, as in the original 10g driver. The property can be set as a System property via -D or as a connection property passed to getConnection. Note that setting freeMemoryOnEnterImplicitCache does not release the parameter-value buffers, only the row-data buffers.

Oracle Database 11.1.0.6.0 Oracle JDBC Driver

The JDBC development team recognized that the all-or-nothing approach of 10.2.0.4.0 was not ideal, so the 11.1.0.6.0 driver introduced a more sophisticated approach to memory management with two goals: minimize memory use and minimize the cost of allocating buffers. The driver keeps a buffer cache inside each connection. When a PreparedStatement is returned to the Implicit Statement Cache, its buffers are placed in the buffer cache; when a PreparedStatement is retrieved from the Implicit Statement Cache, its buffers are retrieved from the buffer cache at the same time. PreparedStatements sitting in the Implicit Statement Cache therefore no longer hold large buffers, and the buffers are reused many times rather than created many times. This substantially improved performance over the 10g driver, whether or not freeMemoryOnEnterImplicitCache was used.

As noted in the introduction, buffer sizes can vary widely, from near zero to tens or even hundreds of megabytes. The 11.1.0.6.0 buffer cache was very simple, and it turned out to be too simple: all buffers were the same size. Because any buffer might be handed to any PreparedStatement in the Implicit Statement Cache, every buffer had to be large enough for the PreparedStatement with the largest requirement. If only one statement was in use at a time, there was only one buffer, shared by all PreparedStatements. That buffer might be too large for some, even most, of the statements, but it was exactly the right size for at least one statement in the cache. As long as statement reuse was not too skewed, keeping one large buffer and reusing it performed well compared with repeatedly allocating right-sized buffers. But if multiple statements were open at once and one PreparedStatement needed a very large buffer, there was a potential memory problem.

Consider an application in which one PreparedStatement needs a 10MB buffer and the rest need much smaller ones. As long as only one PreparedStatement per connection is in use at a time and the big PreparedStatement is used regularly, there is no problem: each statement receives the single 10MB buffer while it is in use, and when a PreparedStatement is returned to the Implicit Statement Cache, the buffer goes back to the buffer cache. The single 10MB buffer is created once and reused by every PreparedStatement. Now consider two PreparedStatements open at the same time. Both need buffers, and because any PreparedStatement may be handed any buffer, all buffers must be the maximum size. With two statements open at once, both buffers must be 10MB: when the second PreparedStatement is opened, a second 10MB buffer is created even if a tiny one would do. A third simultaneous statement creates a third 10MB buffer, and so on. In a large system with hundreds of connections and hundreds of simultaneously open PreparedStatements, allocating a maximum-size buffer for every open statement can use far too much memory. This is obvious in hindsight, but the development team did not appreciate how big a problem it would be, and it did not show up in internal testing. It can be mitigated to some extent by good schema design, careful SQL, and correct fetchSize settings.

Note that the 11.1.0.6.0 driver does not support freeMemoryOnEnterImplicitCache; the buffers are always released, into the buffer cache, when a PreparedStatement is returned to the statement cache.

Oracle Database 11.1.0.7.0 Oracle JDBC Driver

The 11.1.0.7.0 driver introduced a connection property to address the large-buffer problem. The property limits the maximum size of buffer kept in the buffer cache. Oversized buffers are freed when a PreparedStatement is returned to the Implicit Statement Cache and recreated when a PreparedStatement that needs them is retrieved from the cache. If most PreparedStatements need moderately sized buffers, say under 100KB, but a few need larger ones, setting this property to 110KB lets the small buffers be reused heavily without the cost of always creating maximum-size buffers. Setting this property can improve performance and can even prevent OutOfMemoryExceptions.

This connection property is

oracle.jdbc.maxCachedBufferSize

Its value is an int string, such as "100000". The default is Integer.MAX_VALUE. It is the maximum size of buffer that will be stored in the buffer cache. The limit applies to the byte[] and char[] buffers separately: for a char[] buffer it is a number of characters, for a byte[] buffer a number of bytes. It is a maximum buffer size, not a predefined size: if maxCachedBufferSize is set to 100KB but the largest buffer needed is only 50KB, the buffers in the buffer cache will be 50KB. Changing the value of maxCachedBufferSize makes a difference only if it causes the char[] or byte[] buffers of some statement to be included in, or excluded from, the driver's internal buffer cache. A large change, even several megabytes, may make no difference at all, while a small change can make a huge one, if it moves some PreparedStatement's buffers into or out of the cache. The property can be set as a System property via -D or as a connection property passed to getConnection.
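The two ways of setting the property can be sketched as follows. This is a hedged illustration: the property name comes from this paper, but the URL, user, and password are placeholders, and the actual getConnection call (which needs the Oracle driver on the classpath) is shown in a comment.

```java
import java.util.Properties;

// Builds connection properties that cap cached buffers at ~110KB,
// matching the example in the text.
public class MaxBufferProps {
    static Properties props() {
        Properties p = new Properties();
        p.setProperty("user", "scott");            // placeholder credentials
        p.setProperty("password", "tiger");
        p.setProperty("oracle.jdbc.maxCachedBufferSize", "110000");
        return p;
    }

    public static void main(String[] args) {
        // Alternative: a JVM-wide System property, equivalent to
        // -Doracle.jdbc.maxCachedBufferSize=110000 on the command line.
        // With a live database:
        // Connection conn = DriverManager.getConnection(
        //     "jdbc:oracle:thin:@//dbhost:1521/SERVICE", props());
        System.out.println(props().getProperty("oracle.jdbc.maxCachedBufferSize"));
    }
}
```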

If you need to set maxCachedBufferSize, start by estimating the buffer sizes of the SQL queries that need the largest buffers. In the process you may well find that you can reduce the required buffer sizes by tuning the fetchSize of those queries. Considering execution frequency and buffer size, choose a value that lets most statements draw their buffers from the buffer cache, yet is small enough that the Java runtime can hold enough buffers to keep the rate of new-buffer creation low.

Some applications have a large number of idle connections relative to the number of threads. For example, an application may need to connect to any one of many databases but talk to only one at a time; even if nearly every thread is using a connection, there can still be many idle connections. Since, by default, the buffer cache hangs off each connection, idle connections mean buffer caches full of buffers that are not being used, which is memory used unnecessarily. This case is relatively rare, but not unknown.

The solution in this case is to set the connection property

oracle.jdbc.useThreadLocalBufferCache

The value of this property is a boolean string, "true" or "false"; the default is "false". When set to "true", the buffer cache is stored in a ThreadLocal rather than directly in the connection. If there are fewer threads than connections, this reduces memory use. The property can be set as a System property via -D or as a connection property passed to getConnection.
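A minimal sketch of opting in via System properties, which applies to all connections in the JVM. The property names come from this paper; the maxCachedBufferSize value is illustrative only, and note (per the next paragraph) that all connections sharing the ThreadLocal cache must agree on it.

```java
// Opts in to the ThreadLocal buffer cache, JVM-wide.
public class ThreadLocalCacheProps {
    static void configure() {
        // Equivalent to passing these on the command line:
        // -Doracle.jdbc.useThreadLocalBufferCache=true
        // -Doracle.jdbc.maxCachedBufferSize=100000
        System.setProperty("oracle.jdbc.useThreadLocalBufferCache", "true");
        System.setProperty("oracle.jdbc.maxCachedBufferSize", "100000");
    }

    public static void main(String[] args) {
        configure();
        System.out.println(System.getProperty("oracle.jdbc.useThreadLocalBufferCache"));
    }
}
```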

All connections with useThreadLocalBufferCache = "true" share the same static ThreadLocal field and thus the same set of buffer caches, so all such connections must use the same value of maxCachedBufferSize. If a thread uses first one connection and then another, the two connections will have some indirect influence on each other through the number and size of buffers in the cache. Normally all connections using the ThreadLocal buffer cache belong to the same application, so this is not a problem. If one thread creates a statement and another thread closes it, the buffers migrate from one thread's cache to the other's. That by itself is harmless, but if all statements are created by one thread and closed by another, no buffers will ever be reused. If your application works that way, do not set useThreadLocalBufferCache to "true". It is also possible to have some connections use the ThreadLocal buffer cache and others use the default per-connection cache.

Oracle Database 11.2 Oracle JDBC Driver

The 11.2 drivers have a more sophisticated buffer cache than 11.1.0.7.0. The buffers are cached in multiple buckets; all the buffers in a bucket are the same predetermined size. When a PreparedStatement is first executed, the driver gets a buffer from the bucket holding the smallest buffers that will hold the result. If that bucket is empty, the driver creates a new buffer of the bucket's predefined size. When a PreparedStatement is closed, its buffers are returned to the appropriate buckets. Because buffers of one size serve a range of requirements, a buffer is usually only somewhat larger than the minimum needed; the difference is bounded and has no effect in practice.

Compared with the 11.1 and 10.2.0.4.0 drivers, the 11.2 drivers always use the same amount of memory or less. This means it is possible to run the 11.2 drivers in a heap that is too small, with unacceptable performance as the result: just because the driver can run in less memory does not mean it should be deployed that way. It is not unusual for setting a larger -Xms value to greatly improve the performance of the 11.2 drivers. See "Controlling the Java Heap" below.

The 11.2 drivers support maxCachedBufferSize, but it is much less important than before. In 11.1, setting maxCachedBufferSize correctly could make the difference between an OutOfMemoryException and excellent performance. In 11.2, setting maxCachedBufferSize can sometimes improve performance in very large systems with very large SQL statements, very large statement caches, and widely varying buffer-size requirements. In 11.2 the value of maxCachedBufferSize is interpreted as the base-2 logarithm of the maximum buffer size: for example, if maxCachedBufferSize is set to 20, the maximum cached buffer size is 2^20 = 1048576. For backward compatibility, values larger than 30 are interpreted as an actual size rather than a log2 value, but using the log2 form is recommended.

Under normal circumstances there is no need to set maxCachedBufferSize. If you do need to set it, start with 18. If you find yourself setting it to less than 16, your application probably just needs more memory.
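The log2 interpretation above can be sketched as a tiny calculation. The helper method is ours, for illustration; it mirrors the rule stated in the text (values up to 30 are exponents, larger values are literal sizes).

```java
// Interprets an 11.2-style maxCachedBufferSize value per the rule in the text.
public class MaxCachedBufferSize {
    static long effectiveSize(int value) {
        // <= 30: base-2 logarithm of the size; > 30: the literal size (legacy).
        return value <= 30 ? 1L << value : value;
    }

    public static void main(String[] args) {
        System.out.println(effectiveSize(20));       // 1048576 (2^20)
        System.out.println(effectiveSize(18));       // 262144, the suggested start
        System.out.println(effectiveSize(100000));   // 100000, interpreted literally
    }
}
```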

In the 11.2 drivers, the parameter-data buffers use the same buffer cache and the same bucket scheme. When a PreparedStatement is placed in the Implicit Statement Cache, its parameter-data buffers are cached in the buffer cache; when a PreparedStatement is executed for the first time, or retrieved from the Implicit Statement Cache, it gets its parameter-data buffers from the buffer cache. Parameter-data buffers are generally much smaller than row-data buffers, but they can become very large with large batch sizes. The 11.2 drivers also use the buffer cache for the large byte[] and char[] buffers used in other operations, such as those on BFILEs, BLOBs, and CLOBs.

The 11.2 drivers also support useThreadLocalBufferCache. Its behavior, and the guidance on when and how to use it, are the same as in 11.1.0.7.0.

The 11.2 driver also added a new property to control the Implicit Statement Cache.

oracle.jdbc.implicitStatementCacheSize

The value of this property is an integer string, such as "100"; it is the initial size of the statement cache. Setting it to a positive value enables the Implicit Statement Cache. The default is "0". The property can be set as a System property via -D or as a connection property passed to getConnection. Calling OracleConnection.setStatementCacheSize and/or OracleConnection.setImplicitCachingEnabled overrides the value of implicitStatementCacheSize. This property makes it easier to enable the Implicit Statement Cache when you cannot change the code that creates connections.

Controlling the Java Heap

Tuning the Java runtime's memory use is something of a black art. The two most important parameters are -Xmx and -Xms; depending on the Java runtime version and the operating system, there are others. -Xmx sets the maximum heap size the Java runtime may use; -Xms sets the initial heap size. The defaults depend on the operating system and Java runtime version, but typically -Xmx defaults to 64MB and -Xms to 1MB. A 32-bit JVM supports a heap of at most 2GB; 64-bit JVMs support larger heaps. These parameters accept the suffixes "k", "m", and "g" for kilo-, mega-, and gigabytes, for example -Xmx1g.
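A hedged way to check the heap limits actually in effect is to query the Runtime API; run a class like this with the flags under discussion, e.g. java -Xms256m -Xmx256m HeapReport. The class name and exact reported values are illustrative (maxMemory reflects -Xmx, totalMemory starts near -Xms and can grow).

```java
// Reports the JVM heap limits in effect at runtime.
public class HeapReport {
    static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();    // upper bound set by -Xmx
    }

    public static void main(String[] args) {
        System.out.println("max heap (bytes):     " + maxHeapBytes());
        System.out.println("current heap (bytes): "
            + Runtime.getRuntime().totalMemory());  // starts near -Xms
    }
}
```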

The Oracle JDBC drivers perform best when there is ample heap. For most applications, increasing the heap up to a certain threshold improves performance substantially; beyond that threshold, more heap makes no difference. If the heap is so large that it exceeds the machine's physical memory, the heap will be paged out to secondary storage and performance will suffer severely. When sizing the heap, setting -Xmx alone is not enough. The 11.2 drivers in particular emphasize using as little memory as possible, and the JVM generally does not grow the heap much beyond the minimum needed to run. If you set only the maximum heap size with -Xmx, the driver may never actually use that much memory. If you also raise the minimum heap size with -Xms, the driver can use more memory and deliver better performance. It is important to run performance tests with a fixed heap size, that is, with -Xmx and -Xms set to the same value; applications in production also generally run with a fixed heap size. Setting the -server option makes the JVM spend less effort minimizing heap size and so provides some performance improvement, but setting -Xms appropriately often helps more than -server alone. Oracle recommends setting the -server, -Xms, and -Xmx options for server applications, usually with -Xms and -Xmx set to the same value.
