One. Web Front-End Performance Optimization

In general, the web front end refers to everything before the site's business logic: browser loading, the site's view layer, image services, CDN services, and so on. The main optimization techniques are optimizing browser access, using a reverse proxy, using a CDN, and the like.

1. Browser Access Optimization

(1) Reduce HTTP requests. HTTP is a stateless application-layer protocol, which means every HTTP request must establish its own communication link for data transfer, and on the server side each request typically needs a separate thread to handle it. Both the communication and the service are expensive, so reducing the number of HTTP requests can noticeably improve access performance. The main techniques are merging CSS, merging JavaScript, and merging images: combine the JavaScript and CSS the browser needs into single files so the browser needs only one request for each, and merge multiple images into one. If each image has a different hyperlink, use CSS offsets to respond to mouse clicks and construct the different URLs.

(2) Use the browser cache. For a website, static resource files such as CSS, JavaScript, logos, and icons change relatively infrequently, yet they are needed on almost every HTTP request. Caching these files in the browser can greatly improve performance. Setting the Cache-Control and Expires attributes in the HTTP headers enables browser caching, with lifetimes of days or even months. Sometimes a change to a static resource needs to reach client browsers promptly; this can be done by changing the file name, for example by appending a version number to a JavaScript file so the browser fetches the modified file.

(3) Enable compression. Compressing files on the server side and decompressing them on the browser side effectively reduces the amount of data transmitted.
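The versioned-filename idea above can be sketched in a few lines. This is a hypothetical helper, not a standard API: it derives a short content hash to use as the "version", so assets can carry long-lived cache headers yet any change to the file immediately changes its URL.

```python
import hashlib

# Hypothetical helper: derive a cache-busting URL from the file's content.
# Any change to the bytes changes the hash, hence the URL the page references.
def versioned_url(path: str, content: bytes) -> str:
    digest = hashlib.md5(content).hexdigest()[:8]  # short content hash as "version"
    return f"{path}?v={digest}"

# Long-lived cache headers to send alongside the asset (illustrative values).
CACHE_HEADERS = {"Cache-Control": "public, max-age=2592000"}  # ~30 days

url_v1 = versioned_url("/static/app.js", b"console.log('v1');")
url_v2 = versioned_url("/static/app.js", b"console.log('v2');")
# Changed content -> changed URL -> the browser fetches the new file
# even though the old one is still cached.
```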
The compression ratio for text files can exceed 80%.

(4) Put CSS at the top of the page and JavaScript at the bottom. The browser renders the page only after downloading all of the CSS, so it is best to put CSS at the top of the page so the browser downloads it as early as possible. JavaScript is the opposite: the browser executes JavaScript immediately after loading it, which may block the rest of the page and slow its display, so JavaScript is best placed at the bottom of the page.

(5) Reduce cookie transmission. On the one hand, cookies are included in every request and response, and an oversized cookie seriously affects data transfer, so what data goes into cookies deserves careful consideration; keep the amount of data transmitted in cookies to a minimum. On the other hand, for static resources such as CSS and JavaScript, sending cookies is meaningless. Consider serving static resources from an independent domain, so that requests for them carry no cookies, further reducing cookie traffic.

2. CDN Acceleration

A CDN (Content Delivery Network) is still essentially a cache: data is cached at the location closest to the user, so the user gets it at the fastest possible speed, the so-called first hop of network access. A CDN generally caches static resources: images, files, CSS, scripts, static web pages, and so on. These files are accessed very frequently, and caching them on a CDN can greatly improve how quickly pages open.

3. Reverse Proxy

A traditional proxy server sits on the browser's side and sends HTTP requests to the Internet on the browser's behalf, whereas a reverse proxy server sits on the website's side and receives HTTP requests on behalf of the web servers.
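The text-compression figure is easy to illustrate with Python's standard-library gzip. The page below is deliberately repetitive, so the exact ratio is illustrative rather than a guarantee for real pages:

```python
import gzip

# A repetitive HTML fragment stands in for typical text content.
page = b"<div class='item'><span>product</span></div>\n" * 500

compressed = gzip.compress(page)          # what the server would send
saved = 1 - len(compressed) / len(page)   # fraction of bytes saved

# Round trip: browser-side decompression restores the original exactly.
assert gzip.decompress(compressed) == page
```

For repetitive markup like this, `saved` comfortably exceeds 0.8, matching the claim that text compresses by 80% or more.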
Like a traditional proxy, a reverse proxy server also helps secure the website: access requests from the Internet must pass through the proxy server, which in effect builds a barrier between the web servers and potential network attacks. Beyond security, the reverse proxy can also be configured with caching to accelerate web requests. When a user accesses some static content for the first time, that content is cached on the reverse proxy server; when other users later access the same content, it is returned directly from the reverse proxy, speeding up the response and reducing the load on the web servers.

Two. Application Server Performance Optimization

The application servers are the servers that handle the website's business; the site's business code is deployed here, and it is the most complex and most frequently changed part of the site. The main optimization techniques are caching, clustering, asynchrony, and so on.

1. Distributed Cache

The first solution that comes to mind when a website hits a performance bottleneck is to use a cache. Across a whole website application, caches are almost everywhere: in the browser as well as on the application servers and database servers; data, files, and even page fragments can all be cached. Used well, caching makes a significant difference to site performance.

The essence of a cache is an in-memory hash table, in which cached data is stored as key-value pairs. Caches are mainly used for data that is read far more often than it is written and rarely changes, such as product category information, search listings for popular keywords, and details of popular products. When the application reads data, it reads from the cache first; if the data is not there or has expired, it reads from the database and writes the result back to the cache.
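The read-from-cache-else-database flow just described can be sketched as a minimal read-through cache. The cache here is a plain in-process dict, and `load_from_db` is a hypothetical stand-in for a real database query:

```python
# Read-through cache sketch: the cache is an in-memory hash table (dict);
# `load_from_db` is a hypothetical stand-in for a real database query.
cache = {}
db_calls = 0  # counts how often we actually hit the "database"

def load_from_db(key):
    global db_calls
    db_calls += 1
    return f"row-for-{key}"  # placeholder result

def get(key):
    value = cache.get(key)
    if value is None:          # cache miss: fall back to the database...
        value = load_from_db(key)
        cache[key] = value     # ...and write back for subsequent reads
    return value

get("sku-42")  # first read: goes to the database
get("sku-42")  # second read: served from the cache
```

A production distributed cache (e.g. Memcached or Redis) replaces the dict with a networked server cluster, but the read-through pattern is the same.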
Used well, caching greatly improves system performance, but used badly, a cache not only fails to improve the system but becomes a burden, and even a risk.

(1) Frequently modified data. If the cache stores data that is modified frequently, data will be written to the cache only to be invalidated before the application has a chance to read it, which merely increases the system's burden.

(2) Access without hotspots. A cache uses memory for storage, and memory is valuable and limited; it is impossible to cache all data, so only the most recently accessed data is kept while older data is evicted. If the application's data access has no hotspots and does not follow the 80/20 rule, the cache is meaningless.

(3) Data inconsistency and dirty reads. An expiration time is usually set on cached data; once it elapses, the data is reloaded from the database. The application must therefore tolerate data being inconsistent for a certain period: if a seller edits a product's attributes, it takes a while before buyers see the change. An alternative strategy is to update the cache as soon as the data is updated, but this brings more overhead and transactional consistency problems.

(4) Cache availability. In principle the cache exists only to improve performance, so even if cached data is lost or the cache is unavailable, the availability of the system should not be affected. But as the business grows, the cache comes to bear most of the data-access load, and the database grows used to having the cache in front of it; when the cache service crashes, the database cannot withstand such pressure and goes down, making the whole site unavailable. This situation is known as a cache avalanche. In practice, some websites improve cache availability through cache hot standbys.
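The expiration-time strategy from (3) can be sketched as entries that carry a TTL. The tiny TTL value is purely for demonstration; until it elapses, readers may see stale data, which is exactly the tolerated inconsistency window described above:

```python
import time

# TTL sketch: each entry stores (value, expiry time). Until the TTL elapses,
# readers may see stale data -- the tolerated inconsistency window.
cache = {}
TTL = 0.05  # seconds; an artificially tiny window for demonstration

def put(key, value):
    cache[key] = (value, time.monotonic() + TTL)

def get(key):
    entry = cache.get(key)
    if entry is None or time.monotonic() >= entry[1]:
        return None  # miss or expired: the caller reloads from the database
    return entry[0]

put("price:42", 100)  # cached now; unreadable once the TTL has passed
```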
With a distributed cache server cluster, cached data is spread across multiple servers in the cluster, which improves cache availability to some extent.

(5) Cache warm-up. A cache stores hot data, and hot data is filtered out of the stream of accesses by the cache system using LRU (Least Recently Used) eviction, a process that takes considerable time. If a newly started cache system holds no data, system performance and database load suffer while the cached data is being rebuilt, so it is best to load the hot data when the cache system starts. This cache pre-loading is called cache warm-up.

(6) Cache penetration. If, through flawed business logic or a sustained high-concurrency malicious attack, requests keep arriving for data that does not exist, then because the cache holds no such data, every request falls through to the database, which puts great pressure on the database and can even crash it. A simple countermeasure is to cache the nonexistent data as well (with a null value).

2. Asynchrony

Using a message queue to make calls asynchronous improves the website's scalability; in fact, it can also improve the website's performance. Without a message queue, the user's request data is written directly to the database, and under high concurrency the database is under great pressure while response latency rises. With a message queue, the user's request data is sent to the message queue and returns immediately, and the message queue's consumer process fetches the data from the queue and writes it to the database asynchronously. Because the message queue server processes messages far faster than the database can write them, the user's perceived response latency is effectively improved.
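The producer/consumer flow above can be sketched in-process with the standard library, using `queue.Queue` as a stand-in for a real message queue server (e.g. RabbitMQ or Kafka) and a list as a stand-in for the database:

```python
import queue
import threading

# In-process stand-in for a message queue: requests are enqueued immediately,
# and a consumer thread drains them into the "database" (here, a list).
mq = queue.Queue()
database = []

def consumer():
    while True:
        item = mq.get()
        if item is None:       # sentinel: shut down after draining
            break
        database.append(item)  # stand-in for a (slow) database write
        mq.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()

# Producer side: a web request returns as soon as its message is queued,
# without waiting for the database write.
for i in range(3):
    mq.put({"order": i})

mq.put(None)   # ask the consumer to stop
worker.join()  # all queued writes have now been applied
```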
In addition, a message queue has a good peak-clipping effect: through asynchronous processing, the transaction messages generated by a short burst of high concurrency are held in the message queue, flattening the concurrency peak.

3. Clustering

In high-concurrency access scenarios, use load-balancing technology to build a cluster of multiple servers for an application and distribute access requests across the servers, so that no single server is slowed by heavy load and user requests enjoy better latency.

4. Code Optimization

(1) Multithreading. Mainstream web application servers use multiple threads to respond to concurrent user requests, so web development is naturally multithreaded programming. One question is how to set a reasonable number of threads for a multithreaded task. It depends mainly on the type of task: if the task is dominated by CPU computation (CPU-intensive), the number of threads should not exceed the number of CPU cores, since with more threads started the CPU cannot schedule them all anyway; but if the task must wait on disk operations or network responses (IO-intensive), starting more threads helps increase task concurrency, raises system throughput, and improves performance.

(2) Resource reuse. Common resource-reuse patterns are singleton objects and object pools, the latter including the familiar thread pool and database connection pool.
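The thread-count rule of thumb and the pooling idea can both be illustrated with `concurrent.futures`. The IO-bound multiplier of 4 is an illustrative guess, not a prescription, and should be tuned per workload:

```python
import os
from concurrent.futures import ThreadPoolExecutor

cpu_cores = os.cpu_count() or 1

# Rule-of-thumb sizing from the text: CPU-intensive pools should not exceed
# the core count; IO-intensive pools can be several times larger (the x4
# multiplier here is an illustrative assumption).
cpu_bound_workers = cpu_cores
io_bound_workers = cpu_cores * 4

# The executor is itself a resource-reuse example: a pool of threads is
# created once and reused across tasks instead of spawning one per task.
with ThreadPoolExecutor(max_workers=io_bound_workers) as pool:
    results = list(pool.map(lambda n: n * n, range(5)))
```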