10 suggestions for improving Web application performance by 10 times

Tags: nginx, server, website performance


Improving the performance of web applications has never been more important. The network economy's share of the whole keeps growing; more than 5% of the value of the global economy is now produced on the Internet. Our always-on, hyper-connected world also means that users' expectations are at an all-time high. If your site does not respond promptly, or your app does not work without delay, users quickly move on to your competitors.

For example, a study Amazon performed 10 years ago showed that, even then, every 100-millisecond decrease in page-loading time increased revenue by 1%. Another recent study highlighted the fact that more than half of the site owners surveyed admitted they had lost users because of application performance problems.

How fast does a website need to be? For every extra second a page takes to load, roughly 4% of users abandon it. Top e-commerce sites offer a time to first interaction of one to three seconds, the range that yields the best results. Clearly, the stakes for web application performance are high and still rising.

Recognizing the need for improved performance is easy; actually achieving it is harder. To help you on your journey, this article offers 10 tips that can improve website performance by as much as 10 times. It is the first in a series about how to improve application performance, covering thoroughly tested optimization techniques, with a little help from NGINX. The series also touches on related improvements to security.

 

Tip #1: Improve performance and security with a reverse proxy server

If your web application runs on a single machine, the solution to performance problems might seem obvious: just switch to a faster machine with a better processor, more memory, a faster disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, or other software faster than before. (If your application needs to access a database server, the solution might still seem simple: get two faster machines and a faster link between them.)

The problem is that machine speed might not be the problem. Web applications often run slowly because the computer is constantly switching among different kinds of tasks: interacting with users over thousands of connections, accessing files from disk, running application code, and so on. The application server may end up thrashing, for example running out of memory, swapping chunks of memory out to disk, or making many requests wait on a single task such as disk I/O.

Instead of upgrading your hardware, you can take an entirely different approach: add a reverse proxy server to offload some of these tasks. The reverse proxy server sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application server takes place over a fast internal network.

Using a reverse proxy server frees the application server from having to wait while users interact with the web application; instead, it can concentrate on building pages for the reverse proxy server to send across the Internet. Because the application server no longer has to wait for responses from clients, it can run at something close to its optimal speed.

Adding a reverse proxy server also adds flexibility to your web server setup. For example, if a server of a given type becomes overloaded, another server of the same type can easily be added; if a server goes down, it can easily be replaced.

Because of the flexibility it provides, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as:

  • Load balancing (see Tip #2) - a load balancer runs on a reverse proxy server to distribute traffic evenly across a group of application servers. With an appropriate load balancer, you can add application servers without changing your application at all.
  • Caching static files (see Tip #3) - files that are requested directly, such as image files or client code, can be stored on the reverse proxy server and sent straight to the client, which serves assets more quickly, offloads the application server, and allows the application to run faster.
  • Website security - the reverse proxy server can be configured for high security, helping to detect and respond to attacks quickly and keeping the application server protected.

NGINX software was specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven approach to processing requests, which is more efficient than that of traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application health checks, along with specialized request routing, advanced caching, and related support.
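As a minimal sketch of the idea (the domain, internal address, and port are placeholders), an NGINX reverse proxy that sits in front of a single application server might be configured like this:

```nginx
# Reverse proxy sketch: NGINX terminates client connections on port 80
# and relays requests to the application server over the internal network.
server {
    listen 80;
    server_name example.com;                  # placeholder domain

    location / {
        proxy_pass http://10.0.0.10:8080;     # placeholder internal app server
        # Pass the original request details through to the application
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `proxy_set_header` lines matter because, without them, the application sees every request as coming from the proxy itself rather than from the real client.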

NGINX Worker Process helps increase application performance

 

Tip #2: Add a load balancer

Adding a load balancer is a relatively simple change that can improve both performance and security. Instead of making a core web server bigger and more powerful, use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written or has trouble scaling, a load balancer can improve the user experience without any other changes.

A load balancer is, first of all, a reverse proxy server (see Tip #1): it receives traffic from the Internet and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using a choice of algorithms to split requests among them. The simplest load-balancing approach is round robin, in which each new request is sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus can also keep a given user's session on the same server, a capability known as session persistence.
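The algorithms above map directly onto NGINX configuration. A hedged sketch (the upstream name and server addresses are placeholders) of round-robin load balancing across three application servers, with the alternative methods shown as commented-out options:

```nginx
# Round robin is the default: requests rotate through the listed servers.
upstream app_backend {
    # least_conn;    # uncomment to send each request to the server
    #                # with the fewest active connections
    # ip_hash;       # uncomment for simple session persistence by client IP
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;   # traffic is spread across the group
    }
}
```

Adding capacity then becomes a one-line change: add another `server` entry to the `upstream` block, with no changes to the application itself.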

A load balancer can improve performance because it prevents one server from being overloaded while others sit idle with no traffic to handle. It also makes it easy to expand your web server capacity, since you can add relatively inexpensive servers and be confident they will be put to full use.

Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, FastCGI, SCGI, uwsgi, and memcached, plus several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web application to determine which of these you use and where performance is falling short.

The same server or server group that performs load balancing can also handle other tasks, such as SSL termination, support for HTTP/1.x and HTTP/2 from clients, and caching of static files.
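SSL termination, for instance, can be layered onto the same front end. A sketch under assumed names (the domain, certificate paths, and the `app_backend` upstream group are placeholders): NGINX handles HTTPS with HTTP/2 from clients while speaking plain HTTP to the backends over the internal network.

```nginx
# SSL/TLS termination: encryption ends at the proxy, so the
# application servers never have to do TLS work themselves.
server {
    listen 443 ssl http2;
    server_name example.com;                              # placeholder domain

    ssl_certificate     /etc/nginx/certs/example.com.crt; # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_pass http://app_backend;   # unencrypted hop on the internal network
    }
}
```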

NGINX is often used for load balancing. To learn more, you can download our ebook, Five Reasons to Choose a Software Load Balancer. Basic configuration instructions are in Load Balancing with NGINX and NGINX Plus, Part 1, and full documentation is in the NGINX Plus Admin Guide. Our commercial product, NGINX Plus, supports more specialized load-balancing features such as request routing based on server response time and load balancing of Microsoft's NTLM protocol.

 

Tip #3: Cache static and dynamic content

Caching improves web application performance by delivering content to clients faster. It can involve several strategies: preprocessing content so it is ready to deliver when needed, storing data on faster devices, storing data closer to the client, or a combination of these.

There are two different types of content caching to consider:

  • Caching of static content. Files that change infrequently, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
  • Caching of dynamic content. Many web applications generate fresh HTML for every page request. By briefly caching one copy of the generated HTML, you can dramatically reduce the number of pages that have to be generated while still keeping the content fresh enough to meet your requirements.

For example, if a page gets 10 views per second and you cache it for 1 second, 90% of requests for the page will be served straight from the cache. If you cache static content separately, even freshly generated versions of the page can be assembled largely from cached content.

There are three main techniques for caching content generated by web applications:

  • Moving content closer to users. Keeping a copy of content on a node closer to the user reduces its transmission time.
  • Moving content to faster machines. Content can be kept on a faster server to reduce the time needed to retrieve it.
  • Moving content off of overloaded machines. Machines often perform a given task more slowly than benchmarks suggest because they are busy with other tasks. Caching data on a different machine improves performance for both cached and uncached resources, because the host machine is no longer overloaded.

Caching for a web application can be built up starting at the application server. First, dynamic content is cached, to reduce the time the application server spends generating it. Next, static content (including temporary copies of otherwise dynamic content) is cached, further offloading the application server. Caching is then moved off of the application server onto machines that are faster and/or closer to the user, which both unburdens the application server and reduces retrieval and transmission times.

Improved caching can speed up applications tremendously. For most web pages, static data such as large image files makes up more than half of the content. Without caching, retrieving and transmitting such data might take several seconds; with caching, it can take a fraction of a second.

As an example of how caching is used in practice, NGINX and NGINX Plus set up caching with two directives: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. A third (and quite popular) directive, proxy_cache_use_stale, can even direct the cache to supply stale content when the server that provides fresh content is busy or down, so the client gets something rather than nothing. From the user's perspective, this can strongly improve your site's or application's perceived uptime.
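Putting those three directives together, a hedged sketch (cache path, zone name, timings, and backend address are all placeholder choices) of the 1-second "microcaching" scenario described above might look like:

```nginx
# Define an on-disk cache: 10 MB of keys in shared memory,
# up to 1 GB of cached responses, evicted after 60 minutes unused.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;          # microcache successful pages for 1 second
        # Serve stale content if the backend is erroring, timing out,
        # or a fresh copy is still being fetched.
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
        proxy_pass http://10.0.0.10:8080;  # placeholder application server
    }
}
```

With a 1-second validity, a page requested 10 times per second is generated once per second at most; the other nine requests come from the cache.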

NGINX Plus has advanced caching features, including support for cache purging and a dashboard that displays cache status information.
