Java EE: Catering to Web 2.0


Many successful enterprise applications are built on the Java EE platform. However, the principles behind Java EE were not designed to support Web 2.0 applications effectively. A deeper understanding of the disconnect between Java EE and Web 2.0 principles can help you make informed decisions and use a variety of methods and tools to bridge the gap, at least partially. This article explains why Web 2.0 and the standard Java EE platform make a poor combination, demonstrates why an event-driven, asynchronous architecture is better suited to Web 2.0 applications, and describes frameworks and APIs that make the Java platform more Web 2.0-ready by supporting asynchronous design.

Java EE principles and assumptions

The Java EE platform was created to support business-to-consumer (B2C) and business-to-business (B2B) applications. Enterprises discovered the value of the Internet and began using it to enhance existing business processes with partners and customers. These applications typically interact with an existing Enterprise Information System (EIS). The most common benchmarks for testing Java EE server performance and scalability (ECperf 1.1, SPECjbb2005, and SPECjAppServer2004) reflect this in use cases covering both B2B interactions and the EIS. Likewise, the standard Java Pet Store demo is a typical e-business application.

Many of the explicit and implicit assumptions about the scalability of the Java EE architecture are reflected in these benchmarks:

From the client's point of view, request throughput is the most important characteristic affecting performance.

Transaction duration is the most important performance factor: reducing the duration of each individual transaction improves the overall performance of the application.

Transactions are usually independent of each other.

Apart from a few long-running transactions, each transaction affects only a small number of business objects.

Transaction duration is bounded by the performance of the application server and the EIS, which are deployed in the same administrative domain.

The network-communication cost incurred when accessing local resources can be amortized through connection pooling.

The transaction duration can be shortened by optimizing the network configuration, hardware, and software.

The application owner controls the content and the data. Because no external services are relied upon, bandwidth is the most important factor limiting the delivery of content to users.

Performance and Scalability Issues

The Java EE platform was originally designed to manipulate resources and services deployed in a single administrative domain. It assumes that EIS transactions are short-lived and that requests are processed quickly, so that the platform can sustain a high transaction load.

Many emerging architectural approaches and patterns, such as peer-to-peer, service-oriented architecture (SOA), and the new class of Web applications known collectively (and informally) as Web 2.0, do not satisfy these assumptions. In their usage scenarios, request processing takes much longer. Therefore, when you develop a Web 2.0 application using the classic Java EE approach, you run into serious performance and scalability problems.
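The event-driven, asynchronous style mentioned in the introduction avoids holding a thread for the whole duration of a slow request. A minimal plain-JDK sketch (using `CompletableFuture`, not a Java EE API; the service name and payload are illustrative) of that style:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// A minimal sketch of the asynchronous, event-driven style the article
// advocates, using plain-JDK CompletableFuture: the caller registers a
// callback instead of blocking its own thread on the slow operation.
public class AsyncStyleDemo {

    // Simulates a slow external-service call without tying up the caller.
    static CompletableFuture<String> slowService() {
        return CompletableFuture.supplyAsync(() -> {
            try { TimeUnit.MILLISECONDS.sleep(100); } // stand-in for latency
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "payload";
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> done =
            slowService().thenApply(r -> "handled:" + r); // callback, no blocking
        // The main thread is free to do other work here while the call runs...
        System.out.println(done.join()); // join only to keep the demo simple
    }
}
```

The key point is that the callback runs when the result arrives, so long-running requests do not pin a scarce request-processing thread.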

These assumptions produced the following Java EE API design principles:

Synchronous APIs. Java EE mandates synchronous APIs almost everywhere (the heavyweight and cumbersome Java Message Service (JMS) API is essentially the only exception). This requirement stems more from ease of use than from performance: synchronous APIs are simple to use and have low overhead. However, they cause serious problems when massive multithreading is needed, which is why Java EE strictly restricts uncontrolled multithreaded processing.
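To make the cost concrete, here is a minimal sketch (the `fetchFromEis` method and its latency are simulated, not a real Java EE API) showing how a synchronous call holds the calling thread for the entire operation:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrates the synchronous-API principle: the calling thread can do
// nothing else until the "remote" operation returns.
public class SynchronousCallDemo {

    // Simulates a slow synchronous EIS call (the sleep stands in for
    // network and back-end latency).
    static String fetchFromEis() {
        try { Thread.sleep(200); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "result";
    }

    public static void main(String[] args) {
        Instant start = Instant.now();
        String result = fetchFromEis(); // blocks the current thread
        long elapsedMs = Duration.between(start, Instant.now()).toMillis();
        System.out.println("result=" + result
                + " blockedAtLeast200ms=" + (elapsedMs >= 200));
    }
}
```

With short operations this blocking is cheap; with the long-lived requests typical of Web 2.0, every in-flight request pins a thread.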

A limited thread pool. It was discovered early on that threads are an expensive resource and that application-server performance drops significantly once the number of threads exceeds a certain threshold. However, because each operation is short-lived, operations can be dispatched onto a limited set of threads while still maintaining high request throughput.
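This principle can be sketched with plain `java.util.concurrent` (the pool and task counts below are arbitrary demo values): many short requests are served by a small, fixed set of worker threads.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the "limited thread pool" principle: a fixed pool of workers
// handles many short tasks without unbounded thread creation.
public class BoundedPoolDemo {

    // Runs 'tasks' short jobs on a pool of 'poolSize' threads and returns
    // the number of distinct worker threads actually used.
    static int runShortTasks(int poolSize, int tasks) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        Set<String> workers = ConcurrentHashMap.newKeySet();
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> workers.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        try { pool.awaitTermination(10, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return workers.size();
    }

    public static void main(String[] args) {
        // 100 short requests are served by at most 4 threads.
        System.out.println("workersUsed<=4: " + (runShortTasks(4, 100) <= 4));
    }
}
```

The scheme works only as long as tasks are short; a long-running task occupies one of the few workers for its whole duration, which is exactly the Web 2.0 failure mode the article describes.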

A limited connection pool. It is difficult to achieve optimal database performance with only one database connection. Some database operations can run in parallel, so adding connections speeds up the application, but only up to a point: once the number of connections reaches a certain value, database performance begins to decline. Typically, the number of database connections is smaller than the number of threads in the servlet thread pool. Therefore, a connection pool is created so that a connection can be allocated to a server component (such as a servlet or an Enterprise JavaBean (EJB)) and returned to the pool later. If no connection is available, the component blocks its current thread while waiting for one. This delay is usually short, because other components hold a connection only briefly.
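The blocking behavior described above can be shown with a toy pool (illustrative only; real servers hand out pooled JDBC connections via a `DataSource`, and the `MiniConnectionPool` class and "conn-N" strings here are invented for the demo):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy connection pool: a caller that asks for a connection when none is
// free blocks until another caller returns one.
public class MiniConnectionPool {
    private final BlockingQueue<String> pool;

    MiniConnectionPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add("conn-" + i);
    }

    // Blocks the calling thread if the pool is empty.
    String acquire() {
        try { return pool.take(); }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    void release(String conn) { pool.add(conn); }

    public static void main(String[] args) throws Exception {
        MiniConnectionPool p = new MiniConnectionPool(1);
        String c = p.acquire();
        Thread t = new Thread(() -> {
            String c2 = p.acquire(); // blocks until main releases
            System.out.println("second caller got " + c2);
            p.release(c2);
        });
        t.start();
        Thread.sleep(100); // the second thread is now blocked in acquire()
        p.release(c);      // releasing the connection unblocks it
        t.join();
    }
}
```

As in the article's description, the wait is cheap only while holders release connections quickly; long-held connections turn the pool into a bottleneck.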

Fixed resource connections. Applications are assumed to use only a very small number of external resources. Connection factories for individual resources are obtained through the Java Naming and Directory Interface (JNDI) or, in EJB 3.0, through dependency injection. In fact, the enterprise Web services APIs are essentially the only mainstream Java EE APIs that support connecting to a variety of different EIS resources. Most other APIs assume that resources are fixed and that only additional data, such as user credentials, needs to be supplied to the connection-opening operation.

In the Web 1.0 world, these principles worked very well, and typical applications could be designed to conform to them. However, these principles do not effectively support Web 2.0.
