https://github.com/Findow-team/Blog/issues/11
2017 Front-End Performance Tuning Checklist
Have you started using progressive booting yet? Have you adopted tree-shaking and code-splitting in React and Angular? Have you set up compression techniques such as Brotli, Zopfli and HPACK, or OCSP (the Online Certificate Status Protocol)? Do you know about resource hints, client hints and CSS containment? Do you understand IPv6, HTTP/2 and service workers?
Looking back, performance used to be an afterthought: people tended to think about it only once the product was finished, dragging performance work to the very end of a project and doing little beyond tweaking configs, concatenating files, minifying, and making small adjustments on the server. Since then, technology has changed dramatically.
A project's performance matters, and it is not just a technical concern: it has to be considered from the very start of the project's design, so that its many hidden performance requirements are woven into the project and advance along with it. Performance should be quantifiable, measurable and adjustable. The web is growing more complex, and monitoring it is getting harder, because measurements are heavily affected by devices, browsers, protocols, network types and other layers of technology (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers).
Below is a front-end performance tuning checklist for 2017, an overview of the issues we, as front-end developers, need to consider to keep response times fast and browsers compatible.
(You can also download the checklist as a PDF or check it in Apple Pages. Happy optimizing!)
Micro-optimizations are great for keeping performance on track, but it is critical to have clearly defined goals in mind, because measurable targets will influence every decision made throughout the project. There are several different models below; read them in whatever order suits you.
Get ready and set your goals!
1. Be 20% faster than your strongest competitor
According to psychological research, your site has to be at least 20% faster for users to feel that it is faster than a competitor's. The relevant speed is not the load time of the entire page, but metrics such as start-render time, first meaningful paint (the time the page needs to display its main content) and time to interactive (the moment when the main page or application has loaded and is ready to respond to user input).
Measure start-render time (with WebPageTest) and first meaningful paint (with Lighthouse) on a Moto G (or a mid-range Samsung device) and a Nexus 4 (more mainstream devices), ideally in an open device lab, over regular 3G, 4G and Wi-Fi connections.
Lighthouse, a new performance auditing tool developed by Google
Look at your analytics to see which devices and networks your users are actually on, and pick the 90th percentile of their experience for testing. Then collect the data, build a spreadsheet, shave off 20% and set your targets (i.e. performance budgets). Now you have measurable values to test against. If you keep the budget in mind and ship only the minimal script needed to achieve a quick time to interactive, you are on a reasonable path.
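As a tiny illustration of the arithmetic, here is a sketch of deriving a budget from collected timings; the sample numbers, the metric name and the 20% trim applied here are illustrative, not real data from the article:

```javascript
// Hypothetical time-to-interactive samples from analytics, in milliseconds.
const timings = [1800, 2100, 2400, 2600, 3000, 3200, 3500, 4100, 4800, 6500];

// 90th percentile: the experience that 90% of users get or beat.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[idx];
}

const p90 = percentile(timings, 90);  // test against this user, not the average
const budget = Math.round(p90 * 0.8); // shave off 20% to set the target
console.log(p90, budget);
```

The point is simply that budgets come from the slow tail of your real traffic, not from your own fast hardware.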
Performance analysis by Brad Frost
Share this checklist with your colleagues, and first make sure everyone on the team is familiar with it. Every decision in a project affects performance, and the project benefits enormously when front-end engineers are actively involved in its concept, UX and visual design decisions. Map design decisions against the performance culture; some will conflict with it, which is why the position of this item in the checklist is open to debate.
2. Respond within 100 milliseconds, render at 60 frames per second
The RAIL performance model gives you an excellent goal: do your best to provide feedback within 100 milliseconds of a user's initial input. To make that possible, the page has to yield control back to the main thread within every 50 milliseconds. For high-pressure points like animation, it is best to do nothing else where you can, and the absolute minimum where you can't.
Similarly, each frame of an animation has to be completed within 16 milliseconds to sustain 60 frames per second (1000 ms / 60 = 16.6 ms per frame), preferably within 10 milliseconds if possible, because the browser needs some of that time to paint the new frame to the screen and your code has to finish well within the 16.6 ms budget. Note that these targets apply to runtime performance, not loading performance.
3. First meaningful paint under 1.25 seconds, SpeedIndex under 1000
Although this goal is very hard to achieve, your ultimate target should be a start-render time under 1 second and a SpeedIndex below 1000 (on a fast connection). For first meaningful paint, count on an upper limit of about 1250 milliseconds. For mobile devices on 3G, a first meaningful paint under 3 seconds is acceptable. Slightly higher is fine, but don't go much beyond that.
Defining the environment
4. Choose and set up your build tools
Don't pay too much attention to whatever is trendiest at the moment; stick to tools that fit your development environment, be it Grunt, Gulp, webpack, PostCSS, or a combination of them. As long as the tooling runs fast enough and doesn't cause maintenance headaches, that's enough.
5. Progressive Enhancement (progressive enhancement)
Keep progressive enhancement as the guiding principle when building the front end. Design and build the core experience first, then enhance it with advanced features for capable browsers, creating a resilient experience. If your page runs fast on a slow machine with a poor screen on a slow network, it will only run faster on a fast machine on fiber.
6. Angular, React, Ember and co.
Favor frameworks that support server-side rendering. Before settling on a framework, first measure boot times on the server and the client, remember to test on mobile devices, and only then commit to it (because of the performance implications, switching frameworks later is very difficult). If you do use a JavaScript framework, make sure your choice is widely used and battle-tested. Different frameworks affect performance to different degrees and require different optimization strategies, so it pays to understand every aspect of the framework you rely on. When building a web app, look into the PRPL pattern and the application shell architecture.
This diagram depicts the PRPL pattern
An application shell is the minimal HTML, CSS and JavaScript that powers a user interface
7. AMP or Instant Articles?
Depending on your organization's overall priorities and strategy, you might consider using Google's AMP or Facebook's Instant Articles. You can achieve good performance without them, but AMP does provide a solid performance framework with a free content delivery network (CDN), and Instant Articles boosts your visibility and performance on Facebook. You could also build progressive web AMPs.
8. Choose the CDN that's right for you
Depending on how much dynamic data you have, you might be able to outsource part of the content to a static site generator, push it to a CDN and serve a static version from there, avoiding requests to the database. You could even choose a CDN-based static hosting platform and enrich your pages with interactive components (JAMstack).
Check whether the CDN handles (or offloads) dynamic content well; there is no need to restrict your CDN to static assets. Double-check that it performs compression and conversion of content, supports smart HTTP/2 delivery and edge-side includes (ESI), which assemble the static and dynamic parts of pages at the CDN's edge (the server closest to the user).
Build optimizations
9. Set your priorities straight
The first thing to do is figure out what you are dealing with. Take an inventory of all your assets (JavaScript, images, fonts, third-party scripts and expensive modules on the page, such as carousels, complex infographics and multimedia content) and group them.
Set up a spreadsheet. Define the basic core experience that every browser should get, which parts of the enhanced experience are reserved for capable browsers, and which assets are extras (those that aren't strictly necessary or can be lazy-loaded, such as web fonts, unnecessary styles, carousel scripts, players, social media widgets, large images). For more detail, see the article "Improving Smashing Magazine's Performance".
10. Use the "cutting the mustard" technique
Serve the core experience to legacy browsers and the enhanced experience to modern ones, with strict control over what gets loaded: ship the core experience first, push the enhancements on the DOMContentLoaded event, and send the extras on the load event.
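As a sketch, the three loading stages above might look like this in markup (file names are illustrative, not from the original article):

```html
<!-- Core experience: loaded and executed before anything else. -->
<script src="/js/core.js" defer></script>
<script>
  // Enhancements for capable browsers, kicked off on DOMContentLoaded.
  document.addEventListener('DOMContentLoaded', function () {
    var s = document.createElement('script');
    s.src = '/js/enhancements.js';
    document.head.appendChild(s);
  });
  // Extras (widgets, analytics) wait for the load event.
  window.addEventListener('load', function () {
    var s = document.createElement('script');
    s.src = '/js/extras.js';
    document.head.appendChild(s);
  });
</script>
```

The staging guarantees that nothing optional competes with the core content for bandwidth or main-thread time.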
We used to be able to infer a device's capability from the browser version, but that's no longer possible: many cheap Android phones on the market run a recent version of Chrome despite tight memory limits and weak CPUs. Be aware that, when we have no other signal, the technology we choose may also become our constraint.
11. Consider micro-optimization and progressive booting
Some applications need time to initialize before they can render the page. In that case, it is better to show a skeleton screen first rather than a progress bar or spinner. Use modules and techniques that speed up the initial render time (such as tree-shaking and code-splitting), since most performance issues stem from the initial parse time of the application's bootstrap code. You can also compile ahead of time on the server, relieving the client of part of the rendering work and getting results on screen faster. Finally, consider Optimize.js for faster initial loading; it works by wrapping high-priority functions so they are parsed eagerly (though it is hardly necessary any more).
Progressive booting means using server-side rendering to get a quick first meaningful paint, combined with a minimal amount of JavaScript, so that time to interactive stays as close as possible to first meaningful paint.
Client-side rendering or server-side rendering? Either way, the goal should be progressive booting: use server-side rendering to get a fast first meaningful paint, while also shipping a minimal subset of JavaScript so that time to interactive stays close to first meaningful paint. Then add the application's non-essential features on demand. Unfortunately, as Paul Lewis notes, frameworks basically have no concept of priority to surface to developers, so progressive booting is difficult to implement with most libraries and frameworks. If you have the time and means, consider using this strategy to optimize your performance.
12. Are the HTTP cache headers set properly?
Double-check that expires, cache-control, max-age and other HTTP cache headers are set correctly. In general, resources should be cacheable either for a very short time (if they change frequently) or indefinitely (if they are static); you can just bump the version in the URL when they do change.
If possible, use Cache-Control: immutable, designed for fingerprinted static resources, to avoid revalidation (as of December 2016, it is supported only by Firefox, and only over https://). You can use Heroku's primer on HTTP caching headers, Jake Archibald's "Caching Best Practices" and Ilya Grigorik's HTTP caching primer as guides.
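As a sketch, the policy above might translate into server configuration like this (nginx syntax; the paths and lifetimes are illustrative and should be adapted to your own assets):

```nginx
location /assets/ {
    # Fingerprinted files never change: cache for a year, skip revalidation
    # (immutable is ignored by browsers that don't support it).
    add_header Cache-Control "public, max-age=31536000, immutable";
}
location / {
    # HTML changes often: always revalidate with the server.
    add_header Cache-Control "no-cache";
}
```

The split works because the HTML always references the current fingerprinted asset URLs, so long-lived asset caching is safe.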
13. Limit third-party libraries, and load JavaScript asynchronously
When the user requests a page, the browser fetches the HTML and constructs the DOM, fetches the CSS and constructs the CSS object model, and then produces the render tree by matching the two. The browser will not start rendering the page until all JavaScript that needs to be processed has been resolved. As developers, we want to tell the browser explicitly not to wait and to start rendering right away, using the defer and async attributes in the HTML.
In practice, defer is usually the better choice (because with async, scripts are very likely to break for IE 9 and below users). At the same time, limit the use of third-party libraries and scripts, especially social sharing buttons and <iframe> embeds (such as maps). You can replace them with static sharing buttons (such as those in SSBG) and static links to interactive maps.
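A minimal sketch of the two attributes (file names are illustrative):

```html
<!-- defer: download in parallel, execute in document order
     after HTML parsing finishes. -->
<script src="/js/app.js" defer></script>

<!-- async: download in parallel, execute as soon as it arrives
     (order not guaranteed) — fine for independent scripts. -->
<script src="/js/analytics.js" async></script>
```

Neither attribute blocks HTML parsing, which is exactly what lets the browser start rendering early.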
14. Are images properly optimized?
Use responsive images with srcset, sizes and the <picture> element wherever possible. You can also use <picture> to serve the WebP format, with JPEG as a fallback (see Andreas Bovens's code snippet), or use content negotiation (via Accept headers). Sketch supports WebP natively, and WebP images can be exported straight from Photoshop with a WebP plugin. There are, of course, many other options as well.
The Responsive Image Breakpoints Generator automates image processing
You can also use client hints, which browsers are starting to support. When you have too few source images to generate responsive variants, use the Responsive Image Breakpoints Generator or a service such as Cloudinary to automate image optimization. In many cases, using srcset and sizes alone already brings significant benefits. On this site, we add an -opt suffix to optimized files, for example brotli-compression-opt.png, so that everyone on the team knows the image has been optimized.
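A sketch of the WebP-with-JPEG-fallback markup described above (file names and breakpoints are illustrative):

```html
<picture>
  <!-- Serve WebP where the browser advertises support... -->
  <source type="image/webp"
          srcset="hero-400.webp 400w, hero-800.webp 800w, hero-1600.webp 1600w"
          sizes="(max-width: 600px) 100vw, 50vw">
  <!-- ...and fall back to JPEG everywhere else. -->
  <img src="hero-800.jpg"
       srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
       sizes="(max-width: 600px) 100vw, 50vw"
       alt="Hero image">
</picture>
```

The browser picks the smallest candidate that satisfies the sizes hint, so it never downloads more pixels than the layout needs.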
15. Take image optimization one step further
When you are working on a landing page where a particular image has to load blazingly fast, make sure JPEGs are optimized and compressed with mozjpeg (which improves start-render time by manipulating scan levels), that PNGs go through Pingo, GIFs through Lossy GIF and SVGs through SVGOMG. Blur out the unimportant parts of an image (with a Gaussian blur filter) to reduce file size; you might even end up making the image black-and-white to shrink it further. For background images, exporting from Photoshop at 0 to 10% quality is absolutely acceptable.
If that's still not enough, you can use the multiple-background-images technique to improve your images' perceived performance.
16. Are Web fonts optimized?
Chances are the service you use for web fonts ships glyphs and extra features that go unused. If you are using open-source fonts, try to compress the file size by using a subset of the font library, or by subsetting it yourself (for example, keeping Latin plus a few special accented glyphs). WOFF2 is a very good choice, and you can use WOFF and OTF as fallbacks for browsers that don't support it. You can also pick an appropriate strategy from Zach Leatherman's "Comprehensive Guide to Font-Loading Strategies", and use a server cache for the fonts. For a quick start, Pixel Ambacht's tutorial and case study will get your fonts in order.
Zach Leatherman's "Comprehensive Guide to Font-Loading Strategies" provides a dozen options for better font delivery
If you use a third-party font host and cannot work on the fonts on the server, be sure to look at Web Font Loader. FOUT is better than FOIT: render the text immediately in a fallback font and load the web font asynchronously — you can also use loadCSS for this. You might even be able to rely on fonts installed in the local OS instead.
17. Push critical CSS quickly
To make sure the browser starts rendering your page as quickly as possible, collect the CSS required to render the first visible portion of the page (known as critical CSS, or above-the-fold CSS), inline it in the page, and thereby avoid extra round trips. Because of the limited size of packets exchanged during TCP's slow-start phase, your budget for critical CSS is around 14 KB; above that, the browser needs additional round trips to fetch more styles. Critical CSS is what keeps you under that limit. You may need to do this for every template. If possible, consider the conditional inlining approach used by the Filament Group.
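A sketch of the inlining pattern; the async-stylesheet trick is the media-switch approach popularized by loadCSS, and the file names are illustrative:

```html
<head>
  <!-- Critical, above-the-fold rules inlined; keep this under ~14 KB. -->
  <style>
    /* layout and typography needed for the first paint ... */
  </style>
  <!-- Full stylesheet loaded without blocking the first render:
       it applies to "print" until loaded, then switches to all media. -->
  <link rel="stylesheet" href="/css/site.css" media="print"
        onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>
</head>
```

The noscript fallback keeps the page styled when JavaScript is unavailable.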
With HTTP/2, critical CSS can be stored in a separate CSS file and delivered via server push, avoiding HTML bloat. Server push lacks consistent support, though, and there are caching gotchas (see slide 144 of Hooman Beheshti's presentation); in practice, it can even cause network buffers to bloat. Because of TCP's slow start, server push is more effective on warm connections, so you may want to build a cache-aware HTTP/2 server-push mechanism. Keep in mind, however, that the new cache-digest specification would negate the need for such manually built cache-aware push servers.
18. Reduce payloads with tree-shaking and code-splitting
Tree-shaking is a way to clean up your build by shipping only the code that is actually used in the project. You can use webpack 2 to eliminate unused exports, and UnCSS or Helium to remove unused styles from CSS. Likewise, think about how to write efficient CSS selectors and how to avoid bloated, expensive styles.
Code-splitting is another webpack feature that splits your code into chunks loaded on demand. Once you define split points in the code, webpack takes care of the dependencies and output files. It basically keeps your initial download small and lets the application request additional code as the need arises.
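A sketch of what a split point and the corresponding webpack configuration might look like; the file names and options here are illustrative, not a definitive setup:

```javascript
// In application code, a dynamic import() marks a split point; webpack
// emits a separate chunk that is fetched only when the call actually runs:
//
//   button.addEventListener('click', () => {
//     import('./charts.js').then((m) => m.drawCharts());
//   });
//
// webpack.config.js — splitChunks additionally extracts shared vendor code:
const config = {
  mode: 'production',
  entry: './src/index.js',
  optimization: {
    splitChunks: {
      chunks: 'all', // split vendor and shared modules into their own chunks
    },
  },
};
module.exports = config;
```

With this in place, a change to application code no longer invalidates the cached vendor chunk.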
Rollup's output tends to be much better than Browserify's. While we're at it, you might want to look at Rollupify, which converts ECMAScript 2015 modules into one large CommonJS module — because small modules can have a surprisingly high performance cost, depending on your choice of bundler and module system.
19. Improve Rendering Performance
Use CSS containment to isolate expensive components; it limits the scope of the browser's style, layout and paint work, which is useful for off-canvas content and third-party widgets. Make sure there is no lag when scrolling the page or animating elements, and don't forget the 60-frames-per-second principle mentioned earlier. If that's out of reach, at least keep the frame rate consistent somewhere in the 15-to-60 range. Use will-change in CSS to inform the browser which elements and properties are about to change.
Also remember to measure runtime rendering performance (for example, with DevTools). As a primer, check Paul Lewis's free Udacity course on browser-rendering optimization, and Sergey Chikuyonok's article on getting GPU animation right.
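A sketch of the two CSS properties in use (class names are illustrative):

```css
/* Fence off a costly, self-contained widget so its layout and paint
   work cannot spill into the rest of the page. */
.carousel {
  contain: layout paint;
}

/* Tell the browser ahead of time what an animation will change,
   so it can promote the element before the animation starts.
   Use sparingly: promoting everything wastes memory. */
.slide-in {
  will-change: transform, opacity;
}
```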
20. Warm up the network connection to speed up delivery
Use skeleton screens, and lazy-load expensive assets (fonts, JavaScript files, carousels, videos and iframes). Use resource hints to save time with dns-prefetch (performing a DNS lookup in the background), preconnect (asking the browser to perform the connection handshake — DNS, TCP, TLS — in the background), prerender (asking the browser to render a specific page in the background) and preload (fetching resources early without executing them). Depending on your browser support, use them as much as you can, favor preconnect over dns-prefetch, and be careful with prefetch and prerender — the latter should only be used if you are very confident about where the user is headed next, such as in a purchase funnel.
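The hints above might be declared like this (hostnames and paths are illustrative):

```html
<link rel="dns-prefetch" href="//cdn.example.com">        <!-- resolve DNS early -->
<link rel="preconnect" href="https://fonts.example.com">  <!-- DNS + TCP + TLS -->
<link rel="prefetch" href="/js/next-page.js">             <!-- likely needed later -->
<link rel="preload" href="/fonts/main.woff2"
      as="font" type="font/woff2" crossorigin>            <!-- needed on this page -->
<link rel="prerender" href="/checkout">                   <!-- only when you're sure -->
```

Note that preload requires an `as` attribute so the browser can apply the right priority and content security policy.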
HTTP/2
21. Get ready for HTTP/2
With Google moving toward a more secure web and eventually labeling all HTTP pages in Chrome as "not secure", you need to decide whether to stay on HTTP/1.1 or move to an HTTP/2 environment. The initial investment is significant, but HTTP/2 is clearly the way forward, and once you've mastered it you can use service workers and server push to improve performance further.
Google now plans to label all HTTP pages as not secure, and to set the HTTP security indicator to the same red triangle Chrome uses for broken HTTPS.
The downside of moving to HTTP/2 is that you have to migrate to HTTPS first and, depending on how large your HTTP/1.1 user base is (mainly users on legacy operating systems and browsers), you will have to maintain different build processes to ship different builds. Beware: both the migration and the new build process can be tricky and time-consuming.
22. Deploy HTTP/2 properly
Again, before adopting the HTTP/2 protocol, take a thorough look at how your assets have been delivered so far. You will need to find a balance between bundling modules and loading many small modules in parallel.
On the one hand, you might want to avoid concatenating everything, and instead break your whole interface into many small modules, compress them as part of the build process, give them addressable URLs and load them individually. That way, a change to one file no longer forces users to re-download the entire stylesheet or JavaScript bundle.
On the other hand, some bundling is still necessary, because sending too many small JavaScript files to the browser at once is problematic. First, compression suffers: thanks to dictionary reuse, compressing a large file achieves good ratios, while compressing many small files does not. There is work on solving this, but it is beyond the scope of this article. Second, browsers have not yet been optimized for such workflows. For example, Chrome triggers inter-process communications (IPCs) proportional to the number of resources, and hundreds of resources will consume significant browser execution time.
Chrome's Jake Archibald suggests loading CSS progressively to achieve the best results with HTTP/2
You could try loading CSS progressively. Obviously, doing so penalizes HTTP/1.1 users, so you might need to generate and serve different builds to different browsers as part of your deployment process, which makes things slightly more complicated. You could also avoid HTTP/2 connection coalescing, which lets you benefit from domain sharding while keeping HTTP/2's advantages, but it is hard to pull off.
So what should we do? If you are running over HTTP/2, sending around 10 or so bundles seems to be a decent compromise (and isn't too bad for legacy browsers either). From there, experiment to find the right balance for your site.
23. Ensure the server is safe and reliable
All browsers that support HTTP/2 require TLS, so this is the moment to get rid of security warnings and remove anything from your pages that won't work over a secure connection. Check that your security headers are set properly, eliminate known vulnerabilities, and check your certificate.
If you haven't migrated to HTTPS yet, start with the HTTPS-Only Standard guidelines. Make sure all external plugins and tracking scripts load properly over HTTPS, that no cross-site scripting is possible, and that both HTTP Strict Transport Security and Content Security Policy headers are set properly.
24. Does your server and CDN support HTTP/2?
Different servers and CDNs support HTTP/2 to different degrees. Check the "Is TLS Fast Yet?" article to review your options.
Is TLS Fast Yet? lets you check the servers and CDNs you can use for HTTP/2
25. Are you using Brotli or Zopfli compression?
In 2015, Google introduced Brotli, a new open-source lossless data format that is now well supported by Chrome, Firefox and Opera. In practice, Brotli is considerably more effective than gzip and Deflate. It may be slower to compress, depending on the settings, but slower compression ultimately means higher compression ratios, and it decompresses quickly. Since the algorithm comes from Google, it isn't surprising that browsers accept it only if the page is served over HTTPS. The catch is that Brotli doesn't come preinstalled on most servers today and is hard to deploy without building NGINX or Ubuntu yourself. You can, however, enable Brotli even on CDNs that don't support it yet (via a service worker).
Alternatively, look into the Zopfli compression algorithm, which encodes data in Deflate, gzip and zlib formats. Any regular gzip-compressed resource benefits from Zopfli's improved Deflate encoding: files are typically 3 to 8% smaller than with zlib at maximum compression. The catch is that compression takes roughly 80 times longer. That's why Zopfli is a good fit for resources that don't change much — files compressed once and downloaded many times.
26. Is OCSP stapling enabled?
Enabling OCSP stapling on your server can speed up your TLS handshakes. The Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL) protocol; both are used to check whether an SSL certificate has been revoked. With stapling, however, the browser doesn't have to spend time downloading and searching a list of certificate information, which reduces handshake time.
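A sketch of enabling stapling (nginx syntax; the certificate path and resolvers are illustrative):

```nginx
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/chain.pem;  # CA chain used to verify the stapled response
resolver 8.8.8.8 1.1.1.1 valid=300s;               # needed so nginx can reach the OCSP responder
```

With this in place, the server fetches and caches the OCSP response itself and attaches ("staples") it to the handshake, sparing each client the extra lookup.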
27. Have you adopted IPv6 yet?
Because we are running out of IPv4 addresses and mobile networks have largely adopted IPv6 (the US has already reached a 50% IPv6 adoption threshold), it's a good idea to update your DNS to IPv6 to stay future-proof. Just make sure dual-stack support is provided across the network — it needs to allow IPv6 and IPv4 to run side by side, since IPv6 is not backwards-compatible. Research also shows that, thanks to neighbor discovery (NDP) and route optimization, sites using IPv6 are 10 to 15% faster than their IPv4 counterparts.
28. Are you using HPACK?
If you are using HTTP/2, check that your server implements HPACK compression for HTTP response headers to reduce unnecessary overhead. Because HTTP/2 servers are relatively new, some parts of the specification, like HPACK, aren't fully supported yet. You can use h2spec to check whether HPACK is working.
Sample output from h2spec
29. Are service workers used for caching and network fallbacks?
No performance optimization over the network can be faster than a locally stored cache on the user's machine. If your site is running over HTTPS, follow the "Pragmatist's Guide to Service Workers", cache static assets in a service worker cache, and store offline fallbacks (even an offline page) on the user's machine for local retrieval — this beats repeated network round trips. You can also refer to Jake's Offline Cookbook and the free Udacity course "Offline Web Applications". Browser support is reasonably good, and where it exists, prefer the cache over the network everywhere.
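Strategy selection can be kept as a small pure function and called from the service worker's fetch handler; this is only a sketch, with illustrative URL patterns, and the fetch-handler wiring shown in the comment is the assumed integration point:

```javascript
// Inside a service worker, the routing helper would be used like:
//
//   self.addEventListener('fetch', (event) => {
//     if (strategyFor(event.request.url) === 'cache-first') { /* caches.match first */ }
//     else { /* fetch first, fall back to caches.match */ }
//   });
//
function strategyFor(url) {
  const { pathname } = new URL(url);
  // Fingerprinted static assets never change: serve them cache-first.
  if (/\.(js|css|woff2|png|jpg|svg)$/.test(pathname)) return 'cache-first';
  // API responses must be fresh: network-first, cache as fallback.
  if (pathname.startsWith('/api/')) return 'network-first';
  // Navigations: network-first, with an offline page as the last resort.
  return 'network-first';
}

console.log(strategyFor('https://example.com/assets/app.abc123.js'));
```

Keeping the decision logic pure makes it trivial to unit-test outside the service worker environment.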
Testing and monitoring
30. Monitor mixed-content warnings
If you have recently migrated from HTTP to HTTPS, monitor both active and passive mixed-content warnings, with a tool such as Report-URI.io. You can also use Mixed Content Scan to scan your HTTPS-enabled pages.
31. Is your development workflow optimized with DevTools?
Pick a debugging tool and get to know every one of its buttons. Make sure you understand how to analyze rendering performance and console output, how to debug JavaScript, and how to edit CSS styles. Umar Hansa recently gave a talk on debugging and profiling with DevTools, covering a number of less common tips and techniques.
32. Have you tested in proxy browsers and legacy browsers?
Testing in Chrome and Firefox alone is not enough. Look at how your pages behave in proxy browsers and legacy browsers, such as UC Browser and Opera Mini, whose share of the Asian market is significant (up to 35%). Measure using the average real-world connection speed of your target countries to avoid big surprises down the road. This applies not only to throttling the network, but also to emulating low-powered devices, and to testing on real hardware.
33. Is continuous monitoring in place?
For quick, unlimited testing, a private WebPageTest instance is best. Set up a performance-budget monitor with automatic alerts. Establish your own user-timing marks to measure and monitor business-specific metrics. Use SpeedCurve to monitor changes in performance over time, and New Relic for data WebPageTest cannot provide. SpeedTracker, Lighthouse and Calibre are all good options, too.
Quick wins
This checklist is quite comprehensive, covering almost all of the optimizations available. So, if you had only one hour to optimize, what should you do? Let's boil it down to the 10 most useful steps. Don't forget to record your results before and after optimizing, including start-render time and SpeedIndex on 3G and cable connections.
- On a cable connection, aim for a start-render time under 1 second; on 3G, aim to stay under 3 seconds with a SpeedIndex below 1000. Optimize for start-render time and time to interactive.
- Prepare the critical CSS for your main templates and inline it in the page (you have about 14 KB to spend).
- Defer and lazy-load as many of your own and third-party scripts as possible — especially social media buttons, players and expensive JavaScript files.
- Add resource hints to speed up delivery with dns-prefetch, preconnect, prefetch, preload and prerender.
- Subset web fonts and load them asynchronously (or just switch to system fonts instead).
- Optimize your images, and consider using WebP for critical pages (such as landing pages).
- Make sure HTTP cache headers and security headers are set properly.
- Use Brotli or Zopfli compression on the server. (If neither is supported, try gzip.)
- If HTTP/2 is available, use HPACK compression and monitor mixed-content warnings. If you are running over TLS, enable OCSP stapling as well.
- If possible, cache assets such as fonts, JavaScript files and images in a service worker cache — in fact, the more the better!
Conclusion
Some of the optimizations mentioned here may be beyond the scope of your work, some may be beyond your budget, and some may simply be overkill for the legacy code you have to live with. That's fine: this article is only a general outline (aiming to be comprehensive), and you should make your own list of issues that fits your environment. Most importantly, understand the problems in your own project before you start optimizing. Finally, here's to a fast 2017!