Java EE | Strategy | Performance
In Java EE development, decisions made in the architectural design phase have a profound impact on the performance and scalability of an application. When building an application today, we are increasingly aware of performance and scalability problems. A performance problem is often more severe than a functional problem: the former affects all users, while the latter affects only those who happen to use the broken feature.
As the person responsible for an application system, you are asked to "do more with less": accomplish more with less hardware, less network bandwidth, and in less time. Java EE helps by providing a component model and common middleware services, but to build a high-performance, scalable Java EE application you also need to follow some basic architectural strategies.
Caching
Simply put, a cache holds frequently accessed data in memory for the lifetime of the application, data that would otherwise have to be read from persistent storage. In a real-world environment, a typical arrangement in a distributed system is one cache instance per JVM, or a cache shared across multiple JVMs.
Caching improves performance by avoiding trips to persistent storage, which would otherwise cause excessive disk access and overly frequent network data transfer.
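The idea can be sketched with a minimal in-JVM cache. The names here (`SimpleCache`, the `loader` function standing in for a database lookup) are illustrative, not part of any Java EE API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal in-JVM cache sketch: each value is loaded from persistent
// storage at most once, then served from memory on later requests.
public class SimpleCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a database lookup

    public SimpleCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent invokes the loader only on a cache miss
        return store.computeIfAbsent(key, loader);
    }
}
```

A caller would construct it with the expensive lookup, e.g. `new SimpleCache<>(id -> loadFromDatabase(id))`; repeated `get` calls for the same key then never touch the database again.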
Replication
Replication increases overall throughput by creating multiple copies of a given application service on multiple physical machines. In theory, if a service is replicated into two instances, the system can handle twice as many requests. Replication improves performance by spreading requests across multiple instances of a single service, reducing the load on each one.
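The load-spreading idea can be sketched with a simple round-robin dispatcher. This is a toy illustration, not how a real deployment works (there, a load balancer or clustered container distributes requests), and the `Service` interface is hypothetical:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of replication: identical service instances share the
// incoming request load, so each handles roughly 1/N of the traffic.
public class RoundRobinDispatcher {
    public interface Service { String handle(String request); }

    private final List<Service> replicas;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinDispatcher(List<Service> replicas) {
        this.replicas = replicas;
    }

    public String dispatch(String request) {
        // Rotate through the replicas; floorMod keeps the index
        // valid even after the counter wraps around.
        int i = Math.floorMod(next.getAndIncrement(), replicas.size());
        return replicas.get(i).handle(request);
    }
}
```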
Parallel Processing
Parallel processing breaks a task into simpler subtasks that can be executed simultaneously on different threads.
It improves performance by exploiting the multithreading and multi-CPU capabilities of the Java EE execution model. By handling multiple subtasks in parallel, rather than running the whole task on one thread or CPU, the operating system can assign the subtasks to multiple threads or processors.
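As a concrete sketch of splitting a task into subtasks, the following sums a large array in parallel with the standard `ExecutorService`. The class name and slicing scheme are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of parallel processing: a large sum is split into slices
// that run on separate threads, then the partial results are combined.
public class ParallelSum {
    public static long sum(long[] data, int parts) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parts);
        try {
            int chunk = (data.length + parts - 1) / parts;
            List<Future<Long>> futures = new ArrayList<>();
            for (int p = 0; p < parts; p++) {
                int from = p * chunk;
                int to = Math.min(data.length, from + chunk);
                // Each subtask sums its own slice independently.
                futures.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get(); // combine partial sums
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```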
Asynchronous Processing
Application operations are usually designed to run synchronously, or serially. Asynchronous processing completes only the truly critical part of a task before returning control to the caller; the rest of the task is executed later.
It improves performance by shortening the work that must finish before control returns to the user. Although the same total work is done, the user does not have to wait for the whole process to complete before making the next request.
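A minimal sketch of this split, using `CompletableFuture` from the standard library: only a quick validation runs synchronously, while the slow remainder completes on another thread. The `AsyncOrder` class and its order-shipping scenario are hypothetical examples:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of asynchronous processing: the critical part of the task
// (validation) runs before control returns to the caller; the slow
// remainder (here, "shipping") executes later on a background thread.
public class AsyncOrder {
    static final AtomicBoolean shipped = new AtomicBoolean(false);

    public static CompletableFuture<Void> submit(String order) {
        // Critical part: validate synchronously, so the caller
        // learns about bad input immediately.
        if (order == null || order.isEmpty()) {
            throw new IllegalArgumentException("empty order");
        }
        // Control returns to the caller right away; the rest of the
        // task runs asynchronously.
        return CompletableFuture.runAsync(() -> {
            // ... slow work such as billing and shipping would go here ...
            shipped.set(true);
        });
    }
}
```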
Resource Pools
Resource pooling maintains a prepared set of resources that are shared by all requests, rather than keeping a 1:1 relationship between requests and resources. Pooling pays off only under certain conditions; weigh the costs of the following two approaches:
A, the cost of maintaining a set of resources shared by all requests
B, the cost of creating a new resource for each request
When the former is less than the latter, using a resource pool is efficient.
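A minimal pool can be sketched with a `BlockingQueue`: the creation cost is paid once for a fixed set of resources, which requests then borrow and return. The `ResourcePool` class and its `factory` parameter are illustrative (a real Java EE application would typically use a container-managed pool, e.g. a JDBC `DataSource`):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Minimal resource pool sketch: a fixed set of pre-created resources
// is shared by all requests instead of creating one per request.
public class ResourcePool<R> {
    private final BlockingQueue<R> idle;

    public ResourcePool(int size, Supplier<R> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pay the creation cost once, up front
        }
    }

    // Blocks until a resource is free, so at most `size` are ever in use.
    public R acquire() throws InterruptedException {
        return idle.take();
    }

    public void release(R resource) {
        idle.offer(resource); // return the resource for reuse
    }
}
```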