High-performance architecture for Web sites: Application Server performance optimization

Source: Internet
Author: User
Tags: website performance

The application server is the server that handles the website's business logic. The site's business code is deployed here, making it the most complex and most frequently changed part of the website. The main optimization techniques are caching, clustering, and asynchronous operation.

    • Distributed cache

Caches are ubiquitous: in the browser, on the application server, and in the database; they hold data, files, and page fragments alike.

The first law of website performance optimization: prioritize using caching to optimize performance.

    1. Fundamentals of Caching

A cache is a storage medium with relatively high access speed in which data is kept. On the one hand, cache access is fast and reduces data access time; on the other hand, if the cached data is the result of a computation, it can be used directly without recomputing, so the cache also saves computing time.

The essence of a cache is an in-memory hash table: cached data is stored in the hash table as key-value pairs. The cache is primarily used to store data that is read and written frequently by the application but changes rarely. When the application reads data, it first tries the cache; if the data is not found there or has expired, it accesses the database and writes the result back to the cache.
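The read-through pattern described above can be sketched as follows. This is a minimal illustration, not a production cache: the "database" is simulated by a plain dict, and names such as `load_from_db` are illustrative placeholders.

```python
import time

DB = {"user:1": "alice", "user:2": "bob"}  # stand-in for the real database


class Cache:
    """In-memory hash table with per-entry expiration."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expires_at)

    def get(self, key, load_from_db):
        entry = self.store.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.time() < expires_at:
                return value              # cache hit: no database access
        value = load_from_db(key)         # miss or expired: go to the database
        self.store[key] = (value, time.time() + self.ttl)
        return value


cache = Cache(ttl_seconds=60)
print(cache.get("user:1", DB.get))  # first read loads from the database
print(cache.get("user:1", DB.get))  # second read is served from the cache
```

The second `get` never touches `DB` because the entry is still within its time-to-live.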

2. Reasonable use of the cache

Using caching to improve system performance has many benefits, but unreasonable use of caching not only fails to improve the system but can become a burden and even a risk.

  Frequently modified data: if frequently modified data is stored in the cache, it may be invalidated before the application has a chance to read it. This increases the system's burden and may also lead to dirty reads.

  Data without hotspot access: the cache uses memory as storage; if the application's data access has no hotspots, so that most reads are not concentrated on a small subset of the data, caching is meaningless.

  Inconsistent data and dirty reads: cached data is generally given an expiration time; once it expires, it is reloaded from the database, so the application must tolerate data being inconsistent for a certain period. This delay is usually acceptable, but specific applications should still treat it with caution. Another strategy is to update the cache as soon as the data changes, but this incurs more overhead and raises transactional consistency issues.

  Cache availability: when the caching service crashes, the database takes on all the load and can go down under the pressure, making the entire website unavailable. This is known as a cache avalanche; when it occurs, even restarting the cache server and database server may not quickly restore site access. In practice, some websites increase cache availability with hot standbys: when a cache server goes down, cache access is switched to the standby server. With a distributed cache server cluster, cached data is spread across multiple servers, which improves availability to some extent: when one cache server goes down, only part of the cached data is lost, and reloading that portion from the database does not have a significant impact on the database.

3. Distributed Cache Architecture

Distributed cache refers to caches deployed in a cluster of multiple servers, providing caching as a cluster service. There are two main architectures: distributed caches that require synchronous updates among nodes, represented by JBoss Cache, and distributed caches whose nodes do not communicate with each other, represented by memcached.

Memcached's simple design, excellent performance, cluster of mutually non-communicating servers, and architecture scalable to massive data have made it a favorite of website architects.
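Because the cache servers never talk to each other, routing is done entirely on the client: each client hashes a key to decide which server owns it, and every client computes the same mapping. The sketch below shows the idea with simple modulo hashing; the server names are hypothetical placeholders, and real memcached clients typically use consistent hashing so that adding or removing a server remaps only a fraction of the keys.

```python
import hashlib

SERVERS = ["cache-a:11211", "cache-b:11211", "cache-c:11211"]


def server_for(key):
    # Stable hash of the key, reduced modulo the number of servers.
    # Every client derives the identical mapping, so the servers
    # themselves never need to communicate.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]


print(server_for("user:1"))
print(server_for("user:2"))
```

The weakness of plain modulo hashing is that changing the server count remaps almost every key, which is exactly what consistent hashing was designed to avoid.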

    • Asynchronous operation

Using message queues to make calls asynchronous improves the scalability of the site and can also improve the performance of the website system.

Without a message queue, the user's request data is written directly to the database; under high concurrency, the database comes under great pressure and response latency worsens. With a message queue, the user's request data is sent to the message queue and the response is returned immediately; the message queue's consumer process then fetches the data from the queue and writes it to the database asynchronously. Because the message queue server processes messages much faster than the database, the user's response latency can be effectively improved.
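The asynchronous-write pattern can be sketched with a producer/consumer pair. This is a single-process illustration using the standard library; `fake_db` stands in for the real database, and in production the queue would be a separate message broker rather than an in-process `queue.Queue`.

```python
import queue
import threading

fake_db = []          # stand-in for the real database
mq = queue.Queue()    # stand-in for the message queue server


def handle_request(data):
    mq.put(data)      # fast: enqueue and respond to the user immediately
    return "accepted"


def consumer():
    while True:
        item = mq.get()
        if item is None:          # sentinel: shut the consumer down
            break
        fake_db.append(item)      # the slow database write happens here
        mq.task_done()


t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    handle_request({"order": i})  # each call returns without waiting on the DB
mq.put(None)
t.join()
print(fake_db)
```

The request handler's latency is just the cost of an enqueue; the database write happens later, at whatever rate the database can sustain.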

Message queues also have a good peak-clipping effect: through asynchronous processing, transaction messages generated during a brief burst of high concurrency are stored in the message queue, flattening the peak of concurrent transactions.

    • Cluster

In high-concurrency scenarios, load balancing technology is used to build a cluster of multiple servers for an application. Concurrent access requests are distributed across the servers, avoiding the slow responses of a single server under excessive load and giving user requests better latency characteristics.
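A minimal sketch of one common distribution strategy, round-robin: requests simply cycle over the server list, so each server receives an equal share. The hostnames are hypothetical, and real load balancers offer further strategies (least-connections, weighted, IP hash) not shown here.

```python
import itertools

SERVERS = ["app-1", "app-2", "app-3"]
_rr = itertools.cycle(SERVERS)   # endless round-robin iterator


def dispatch(request):
    # Each request goes to the next server in turn.
    return next(_rr)


assigned = [dispatch({"id": i}) for i in range(6)]
print(assigned)  # each of the three servers receives two of the six requests
```

With three servers and six requests, the assignment is perfectly even; under real traffic, round-robin evens out load only if requests are roughly uniform in cost.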

    • Code optimization

1. Multithreading

The main ways to solve thread-safety problems are: designing objects as stateless objects (objects that hold no state information of their own), using local objects, and using locks when accessing shared resources concurrently.

2. Resource Reuse

While the system is running, minimize the creation and destruction of expensive system resources such as database connections, network connections, threads, and complex objects. There are two main patterns for resource reuse: singleton and object pool. The object pool pattern reduces object creation and resource consumption by reusing object instances.
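A minimal object-pool sketch, assuming a stand-in `Connection` class whose construction we pretend is expensive (the class names are illustrative, not from any particular library):

```python
import queue


class Connection:
    _created = 0

    def __init__(self):
        Connection._created += 1  # pretend this construction is expensive


class Pool:
    def __init__(self, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(Connection())  # pre-build the reusable instances

    def acquire(self):
        return self._free.get()   # blocks if the pool is exhausted

    def release(self, conn):
        self._free.put(conn)      # return the object for reuse


pool = Pool(size=2)
for _ in range(10):
    conn = pool.acquire()
    # ... use the connection ...
    pool.release(conn)
print(Connection._created)  # only 2 objects ever built for 10 uses
```

Ten acquire/release cycles reuse the same two objects; without the pool, ten `Connection` constructions would have occurred.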

3. Data structure

Using appropriate data structures in different scenarios, and flexibly combining data structures to improve the characteristics of data reads, writes, and computation, can greatly optimize program performance.
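A small illustration of why the choice matters: a membership test on a list scans the whole collection (O(n)), while the same test on a set is a hash lookup (O(1) on average). Same data, very different cost.

```python
items_list = list(range(100_000))
items_set = set(items_list)   # same elements, hash-table representation

# Both expressions give the same answer, but the list version walks up to
# 100,000 elements while the set version computes one hash and probes once.
found_in_list = 99_999 in items_list   # linear scan
found_in_set = 99_999 in items_set     # hash lookup

print(found_in_list, found_in_set)
```

In a hot code path executed millions of times, converting a frequently searched list into a set (or dict) is one of the cheapest optimizations available.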

4. Garbage collection
