WebLogic Server Performance Tuning

Although WebLogic has since been upgraded to version 10.3 and is now packaged as Oracle Fusion Middleware 11g, relatively little new material is available, so this article shares a classic piece on performance tuning for WebLogic 8.1.

--------------------------------------------------------------------------------

Any successful product on the market performs well. While it takes many capabilities to become a widely used product like WebLogic Server, good performance is absolutely essential.

Good programming habits go a long way toward helping an application run well, but they are not enough. An application server must be portable across many hardware platforms and operating systems, and versatile enough to handle a wide range of application types. That is why the application server exposes a rich set of tuning "knobs"; by adjusting them, you can make the server better fit the environment and the application.

This article discusses some of these tuning parameters for WebLogic, though it does not list every tunable property. Before applying the methods recommended here to a production environment, it is advisable to try them in a test environment first.

Performance Monitoring and Bottleneck Discovery

The first step in performance tuning is to isolate the "danger zone". A performance bottleneck can exist in any part of the system: the network, the database, the client, or the application server. It is important to determine first which system component is causing the problem; tuning the wrong component may make the situation worse.

WebLogic Server gives system administrators two ways to monitor performance: the administration console and command-line tools. On the server side, a collection of MBeans gathers information such as thread usage, resource availability, and cache usage. Both the console and the command-line administration tool can retrieve this information from the server. The screenshot in Figure 1 shows cache usage and availability in the EJB container, one of the monitoring options the console provides.

A code profiler is also an effective tool for detecting performance bottlenecks in application code. Several good profilers are available, such as Wily Introscope, JProbe, and OptimizeIt.

EJB Container

The most expensive operations in the EJB container are, of course, the database calls that load and store entity beans. The container provides a variety of parameters for reducing the number of database accesses. However, except in special circumstances, at least one load operation and one store operation is needed per bean in each transaction. These special circumstances are:

1. The bean is read-only. In this case the bean is loaded only once, on first access, and never needs to be stored. Of course, if the read-timeout-seconds setting is exceeded, the bean is loaded again.

2. The bean has an exclusive or optimistic concurrency policy, and the parameter db-is-shared is set to false. This parameter was renamed cache-between-transactions in WebLogic Server 7.0; setting db-is-shared to false is equivalent to setting cache-between-transactions to true.

3. The bean was not modified in the transaction, in which case the container optimizes the store operation away.

If none of these cases applies, each entity bean in the code path is loaded and stored at least once per transaction. Several features can reduce the number of database calls or the cost of each call: caching, field grouping, concurrency policies, and eager relationship caching, some of which were added in WebLogic Server 7.0.

Cache: The size of the entity bean cache is defined by the max-beans-in-cache parameter in weblogic-ejb-jar.xml. The first time a bean is used in a transaction, the container loads it from the database and places it in the cache. If the cache is too small, some beans are evicted back to the database; they must then be reloaded from the database the next time they are needed, unless one of the first two special cases above applies. Retrieving a bean from the cache also means setEntityContext() does not have to be called again; if the bean's primary key is a compound or otherwise complex field, this saves the time needed to set it up.
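As a sketch, the cache size is set per bean in weblogic-ejb-jar.xml. The bean name below is hypothetical, and the element layout is from memory of the 7.0/8.1 DTD, so verify it against your release:

```xml
<!-- weblogic-ejb-jar.xml fragment; "AccountBean" is a hypothetical name -->
<weblogic-enterprise-bean>
  <ejb-name>AccountBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <max-beans-in-cache>1000</max-beans-in-cache>
      <idle-timeout-seconds>600</idle-timeout-seconds>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```

A cache sized below the bean's working set causes exactly the eviction-and-reload pattern described above.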

Field grouping: A field group specifies the set of fields a finder method loads from the database. If an entity bean contains a large BLOB field (for example, an image) that is rarely accessed, you can define a field group that excludes this field and associate the group with a finder method, so that the BLOB is not loaded when the finder runs. This feature applies only to EJB 2.0 beans.
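A sketch of what this looks like in the CMP descriptor (weblogic-cmp-rdbms-jar.xml); the bean, field, and finder names are invented, and the element names are from memory of the DTD, so check them against the one shipped with your release:

```xml
<weblogic-rdbms-bean>
  <ejb-name>ProductBean</ejb-name>
  <!-- table and field mappings omitted for brevity -->
  <field-group>
    <group-name>no-image</group-name>
    <cmp-field>sku</cmp-field>
    <cmp-field>description</cmp-field>
    <!-- the rarely used BLOB field "image" is deliberately left out -->
  </field-group>
  <weblogic-query>
    <query-method>
      <method-name>findBySku</method-name>
      <method-params>
        <method-param>java.lang.String</method-param>
      </method-params>
    </query-method>
    <!-- the finder loads only the fields in this group -->
    <group-name>no-image</group-name>
  </weblogic-query>
</weblogic-rdbms-bean>
```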

Concurrency policy: In WebLogic Server 7.0, the container provides four concurrency control mechanisms: exclusive, database, optimistic, and read-only. The concurrency policy is closely tied to the transaction's isolation level. Concurrency control is not really a performance measure; its primary purpose is to ensure the consistency of the data the entity bean represents, as chosen by the bean's deployer. Still, some mechanisms let the container process requests faster than others, at the expense of data consistency.

The strictest policy is exclusive: access to the bean is serialized by primary key, so only one transaction can access the bean at a time. While this provides strong concurrency control within the container, performance is limited. The approach is useful when caching between transactions is allowed, but it cannot be used in a clustered environment, where load operations are optimized, and it may cost parallelism.

The database concurrency policy defers concurrency control to the database. Entity beans are not locked in the container, so multiple transactions can operate on the same entity bean concurrently, which improves performance. Of course, this may require a higher isolation level to ensure data consistency.

The optimistic concurrency policy likewise leaves locking out of the container. The difference is that data consistency is checked when a set of updates is stored, rather than by holding a lock on the row from load time onward. If contention for the same bean within the application is not intense, this strategy is faster than the database policy while providing the same level of consistency protection. When a conflict does occur, however, the caller must retry the call. This feature also applies only to EJB 2.0.

The read-only policy can be used only for read-only beans. The bean is loaded on first access or when the read-timeout-seconds value expires, and it never needs to be stored. When the underlying data changes, the bean can also be notified using the read-mostly pattern, which triggers a reload.
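The strategy is declared per bean in weblogic-ejb-jar.xml; a minimal sketch, with the value names as documented for 7.0:

```xml
<entity-descriptor>
  <entity-cache>
    <!-- one of: Exclusive | Database | Optimistic | ReadOnly -->
    <concurrency-strategy>Optimistic</concurrency-strategy>
  </entity-cache>
</entity-descriptor>
```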

Eager relationship caching: If two entity beans, bean A and bean B, are related through a CMR (container-managed relationship) and both are accessed in the same transaction, they can be loaded by a single database call. This is called eager relationship caching. It is a new feature of WebLogic Server 7.0 and applies only to EJB 2.0.

In addition to the features above, which improve performance by optimizing the container's database access, there are other parameters for session beans and entity beans, unrelated to the database, that can help improve performance.

Free pools and caches are the main features the EJB container provides to improve the performance of session beans and entity beans, though they do not apply to every bean type. Their downside is higher memory requirements, although this is rarely a major problem. Free pools apply to stateless session beans (SLSB), message-driven beans (MDB), and entity beans. Once a pool size is set for SLSBs and MDBs, that many instances of these beans are created, placed in the pool, and have setSessionContext()/setMessageDrivenContext() called on them. The pool size for these beans need not exceed the configured number of execute threads (in fact, it can be smaller). If setSessionContext() does anything expensive, such as completing a JNDI lookup, method calls on pooled instances will be faster.

For entity beans, the pool holds anonymous instances (with no primary key) after setEntityContext() has been called. Finder methods can use these instances: a finder takes an instance from the pool, assigns it a primary key, and then loads the corresponding bean from the database.
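For example, a free pool for a stateless session bean might be declared as follows in weblogic-ejb-jar.xml (the bean name is hypothetical; note the guideline above that the pool need not exceed the execute thread count):

```xml
<weblogic-enterprise-bean>
  <ejb-name>OrderProcessorBean</ejb-name>
  <stateless-session-descriptor>
    <pool>
      <initial-beans-in-free-pool>10</initial-beans-in-free-pool>
      <max-beans-in-free-pool>15</max-beans-in-free-pool>
    </pool>
  </stateless-session-descriptor>
</weblogic-enterprise-bean>
```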

Caching applies to stateful session beans (SFSB) and entity beans. Entity beans were discussed earlier. For SFSBs, caching avoids serialization to disk, which is expensive and should definitely be avoided. The SFSB cache can be slightly larger than the number of concurrent clients connected to the server, because the container only starts passivating beans out of the cache after it is about 85% full. If the cache is larger than actually required, the container will not spend time passivating beans.

The EJB container provides two mechanisms for bean-to-bean and web-tier-to-bean calls: call-by-value and call-by-reference. If the caller and the bean are in the same application, call-by-reference is used by default, and it is faster than call-by-value. Call-by-reference should generally not be disabled unless there is a compelling reason. Another way to force call-by-reference semantics is to use local interfaces. A further feature introduced in WebLogic Server 7.0 is activation for stateful services. Although activation costs some performance, it greatly improves scalability because of its lower memory requirements. If scalability is not a concern, activation can be turned off by passing the noObjectAction parameter to ejbc.
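Call-by-reference is controlled per bean in weblogic-ejb-jar.xml; a sketch, with a hypothetical bean name:

```xml
<weblogic-enterprise-bean>
  <ejb-name>PricingBean</ejb-name>
  <!-- leave true for callers in the same application;
       setting it to false forces the slower call-by-value -->
  <enable-call-by-reference>true</enable-call-by-reference>
</weblogic-enterprise-bean>
```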

JDBC

For database access, tuning JDBC is just as important as tuning the EJB container. Consider, for example, the size of the connection pool: it should be large enough to satisfy the connection requirements of all threads. If all database access happens in the default execute queue, the number of connections should equal the number of threads in that queue minus the socket reader threads (the threads in the default execute queue that read incoming requests). To avoid creating and destroying connections at run time, set the connection pool's initial capacity equal to its maximum capacity. If possible, make sure TestConnectionsOnReserve is set to false (the default). If it is set to true, each connection is tested before being handed to a caller, which requires an extra round trip to the database.

Another important parameter is PreparedStatementCacheSize. Each connection keeps a static cache of prepared statements, whose size is specified in the JDBC connection pool configuration. The cache is static; keep this in mind at all times. It means that if the cache size is n, only the first n statements placed in the cache are cached. One way to ensure that expensive SQL statements benefit from the cache is to run them from a startup class so they enter the cache first. Although caching improves performance considerably, it cannot be used blindly: if the database schema changes, there is no way to invalidate or replace cached statements without restarting the server. Cached statements also hold cursors open in the database.
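Putting the pool advice together, a config.xml sketch might look like this; the pool name, user, and sizes are illustrative (the WebLogic jDriver class and URL are shown, but any JDBC driver works):

```xml
<!-- Initial capacity equals maximum capacity, so no connections are created
     or destroyed at run time; TestConnectionsOnReserve stays at its default. -->
<JDBCConnectionPool Name="oraclePool"
    DriverName="weblogic.jdbc.oci.Driver"
    URL="jdbc:weblogic:oracle"
    Properties="user=scott"
    InitialCapacity="25"
    MaxCapacity="25"
    TestConnectionsOnReserve="false"
    PreparedStatementCacheSize="20"
    Targets="myserver"/>
```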

As for WebLogic Server 7.0, improvements in jDriver performance have made it considerably faster than Oracle's thin driver, especially for applications that perform a large number of select operations. This is borne out by two ECperf results submitted by HP using a beta of WebLogic Server 7.0 (http://ecperf.theserverside.com/ecperf/index.jsp?page=results/top_ten_price_performance).

JMS

The JMS subsystem offers a number of tuning parameters. JMS messages are handled by a dedicated execute queue called JMSDispatcher. As a result, the JMS subsystem is neither starved by applications competing for resources in the default or other execute queues, nor does it starve those applications in turn. For JMS, most tuning parameters trade performance against quality of service. For example, disabling synchronous writes for file-persistent destinations (by setting the property -Dweblogic.JMSFileStore.SynchronousWritesEnabled=false) can dramatically increase performance, but at the risk of losing messages or receiving them more than once. Similarly, sending messages via multicast improves performance while risking messages being lost in transit.
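As a sketch, the property is passed on the server start command; the usual caveat applies, since this trades durability for throughput:

```shell
# Start the server with synchronous writes to the JMS file store disabled.
# After a crash, messages may be lost or duplicated -- use only where the
# application tolerates that.
java -Dweblogic.JMSFileStore.SynchronousWritesEnabled=false \
     weblogic.Server
```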

The message acknowledgement interval should not be set too small: the more frequently acknowledgements are sent, the slower message processing may become. But if it is set too large, messages may be lost or redelivered when the system fails.

In general, you should configure multiple JMS destinations on a single JMS server rather than spreading them across multiple JMS servers, unless scalability requires otherwise.

Turning off message paging can improve performance but hurts scalability. With paging turned on, extra I/O operations are needed to serialize messages to disk and read them back as necessary, but memory requirements are reduced.

Generally speaking, asynchronous processing performs better than synchronous processing and is easier to tune.

Web Container

The web tier is mostly used to generate presentation logic. A widely used architecture reads data from the application tier, typically composed of EJBs, and then generates dynamic content with servlets and JSPs. In this architecture, servlets and JSPs hold references to EJBs, or to data sources when they talk to the database directly. It is a good idea to cache these references: if the JSPs and servlets are not deployed in the same application server as the EJBs, JNDI lookups are expensive.

JSP cache tags can be used to cache data within a JSP page. These tags support caching both inputs and outputs: output caching covers the content generated by the code within the tag, and input caching covers the variable assignments made by that code. If the web tier does not change frequently, you can turn off auto-reloading by setting ServletReloadCheckSecs to -1. The server then no longer polls the web tier for changes; with a large number of JSPs and servlets, the effect is noticeable.
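In the WebLogic web descriptor (weblogic.xml) this looks roughly like the following; the element name is from memory of the 7.0/8.1 DTD, so verify it for your release:

```xml
<weblogic-web-app>
  <container-descriptor>
    <!-- -1 disables polling for changed JSPs and servlets -->
    <servlet-reload-check-secs>-1</servlet-reload-check-secs>
  </container-descriptor>
</weblogic-web-app>
```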

It is also advisable not to store too much information in the HTTP session. If the information is truly necessary, consider using a stateful session bean instead.

JVM Debugging

Most JVMs today tune themselves to a degree, detecting and optimizing the hot areas of the code. The main tuning parameters developers and deployers need to think about are the heap settings. There are no universal rules for these; the new (nursery) generation is typically set to between one third and one half of the total heap. The total heap cannot be made arbitrarily large without support for concurrent garbage collection (GC); in a setup with a large heap, the interval between collections should be a minute or longer. Finally, note that these settings depend heavily on the memory usage patterns of the applications deployed on the server. For more information on JVM tuning, see:
http://edocs.bea.com/wls/docs70/perform/JVMTuning.html
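For instance, a start script for a HotSpot JVM might fix the heap size and set the new generation to roughly a third of it, per the guideline above (the sizes are only illustrative):

```shell
# 512 MB fixed heap; new generation ~170 MB (about one third of the heap).
# Setting -Xms equal to -Xmx avoids heap resizing pauses at run time.
java -server -Xms512m -Xmx512m \
     -XX:NewSize=170m -XX:MaxNewSize=170m \
     weblogic.Server
```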

Server Debugging

In addition to the tuning parameters provided by each subsystem, there are server-wide parameters that can help improve performance, the most important being the configuration of thread counts and execute queues. Increasing the number of threads does not always work; consider it only when all of the following hold: the target throughput is not being met, the wait queue (requests not yet being processed) is too long, and spare CPU remains. Even then, it does not necessarily improve performance: low CPU usage may be caused by contention for other server resources, such as too few JDBC connections. Factors like these should be taken into account when changing the thread count.

WebLogic Server 7.0 provides the ability to configure multiple execute queues, and you can define execute queues that handle particular EJB or JSP/servlet requests. For an EJB, simply pass the flag -dispatchPolicy <queue name> to weblogic.ejbc. For a JSP or servlet, set the initialization parameter (init-param) wl-dispatch-policy for that servlet to the name of the execute queue. Sometimes certain beans or JSPs in an application have much longer response times than the rest; you can give them execute queues of their own. As for queue sizes, finding the best-performing values takes experimentation.
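A sketch of the two pieces: the queue is declared in config.xml inside the <Server> element, and a servlet is routed to it via the init-param. The queue, servlet, and class names are hypothetical:

```xml
<!-- config.xml: a dedicated queue for slow requests -->
<ExecuteQueue Name="SlowQueue" ThreadCount="5"/>

<!-- web.xml: route one servlet to that queue -->
<servlet>
  <servlet-name>ReportServlet</servlet-name>
  <servlet-class>com.example.ReportServlet</servlet-class>
  <init-param>
    <param-name>wl-dispatch-policy</param-name>
    <param-value>SlowQueue</param-value>
  </init-param>
</servlet>
```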

Another big question is deciding when the WebLogic performance pack (http://e-docs.bea.com/wls/docs70/perform/WLSTuning.html) should be used. If the number of sockets is small (each server has one socket per client JVM connected for remote method calls) and the sockets are almost always busy reading request data from clients, the performance pack brings no noticeable improvement. Depending on how the JVM implements network I/O, running without the performance pack may even give similar or better results.

Socket reader threads are taken from the default execute queue. With native I/O, there are two socket reader threads per CPU on Windows, and three on Solaris. For Java I/O, the number of reader threads is set in config.xml by the ThreadPoolPercentSocketReaders attribute; its default value is 33% and its upper limit is 50%. The limit is obvious: if every thread were reading requests, none would be left to process them. With Java I/O, keep the number of reader threads as close to the number of client connections as possible, because Java I/O blocks while waiting for a request. This is also why the number of threads cannot simply grow in step with the number of client connections.
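As a config.xml sketch (attribute names as remembered from the 8.1 Server element; verify them for your release):

```xml
<!-- Up to half of the 30 execute threads may act as socket readers -->
<Server Name="myserver"
    ThreadPoolSize="30"
    ThreadPoolPercentSocketReaders="50"/>
```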

Conclusions

We have discussed only some of the ways to tune the server. Remember that a poorly designed application will usually perform badly no matter how the server and its parameters are tuned. Performance should be a key consideration throughout the application development cycle, from design to deployment. All too often performance is placed after functionality, and problems discovered then are hard to fix. For more information on WebLogic Server performance tuning, see: http://e-docs.bea.com/wls/docs70/perform/index.html.
