E-commerce Website HTTPS Practice Road (III): Performance Optimization


By analyzing the details of the TLS handshake, we find that HTTPS adds several round trips (RTTs) of network transfer time on top of HTTP, increases server-side overhead, and slows client response time. Performance optimization is therefore essential. Most articles focus on server-side optimization, but in the e-commerce industry the bulk of user traffic comes from the app, so client-side optimization must be paired with the server-side work to maximize the benefit.

1. The burden of HTTPS

There are two sides to everything.

1.1 Increased transmission delay

The overhead added by HTTPS transport is more than just the two round trips of the TLS handshake. To optimize performance, first know your enemy: understand where the losses occur so that countermeasures can be deployed where they matter.
With plain HTTP, a user's first request can begin transferring application data as soon as the TCP three-way handshake with the server completes.

With HTTPS, things are not that simple:

1. Users are accustomed to reaching your site over HTTP. To protect them, the server first forces a 301/302 redirect to HTTPS; this jump adds at least 1 RTT of delay.
2. The 302 redirect requires a new TCP handshake, adding another 1 RTT of delay.
3. The two-phase TLS handshake then begins; as the details show, it adds at least 2 more RTTs of delay.

- ClientHello: the client starts a new handshake and advertises its supported capabilities to the server;
- ServerHello: the server selects the connection parameters;
- Certificate*: the server sends its certificate chain;
- ServerKeyExchange*: the server sends the additional information (such as its public key) the client needs to generate the premaster secret;
- ServerHelloDone: the server signals that its part of the negotiation is complete;
- ClientKeyExchange: the client sends the encrypted premaster secret to the server;
- [ChangeCipherSpec]: the client notifies the server that it is switching to encrypted mode;
- Finished: the client is done;
- [ChangeCipherSpec]: the server notifies the client that it is switching to encrypted mode;
- Finished: the server is done.
4. In addition, if the client is obtaining the server's certificate chain for the first time, it must also verify the certificate's revocation status via OCSP, which costs at least 1 more RTT of delay.
5. Finally, the transfer of the application layer data begins.
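The steps above can be totted up with a quick back-of-the-envelope calculation. Assuming a hypothetical 50 ms round-trip time, the worst-case extra latency before the first byte of application data is:

```shell
RTT_MS=50   # hypothetical round-trip time to the server
# 1 RTT (301/302 redirect) + 1 (new TCP handshake) + 2 (full TLS) + 1 (OCSP)
echo "$((RTT_MS * (1 + 1 + 2 + 1))) ms extra"   # → 250 ms extra
```

On a mobile network with 100+ ms RTTs the penalty doubles, which is why each of the optimizations below targets one of these round trips.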

1.2 Additional overhead on the server side

During the TLS handshake, key exchange and encryption add computational overhead for the CPU, and different algorithms (for authentication, key exchange, and bulk encryption) cost different amounts. For example, 2048-bit RSA as the key exchange algorithm puts heavy pressure on the CPU, whereas ECDHE_RSA (elliptic-curve key exchange) is much cheaper, while RSA can still be kept for authentication.
Of course, no matter how well the algorithms are chosen, the overhead cannot be avoided entirely.

2. Service-side performance optimization

Server-side performance optimization mainly comes down to tuning the web server configuration. We take Nginx 1.11.0 as the example; of course you can also choose Apache, H2O, and so on.

2.1 Rational use of HSTS

HSTS (HTTP Strict Transport Security) signals that the website has implemented TLS and instructs the browser to rewrite the user's plaintext URLs to HTTPS, avoiding the recurring delay of forced 302 redirects.
HSTS works as follows: when the browser first requests the server over HTTP, the response carries a Strict-Transport-Security header telling the browser that, for a specified period, the site must be accessed over HTTPS. From then on, the browser rewrites any HTTP address for this site to HTTPS before sending the request.
The configuration is shown below: max-age is the HSTS cache lifetime in the browser, the includeSubDomains parameter enables HSTS on all subdomains, and the preload parameter is explained later.

add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
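As a quick sanity check on the max-age value in this header, 63072000 seconds is exactly two (non-leap) years:

```shell
# 2 years expressed in seconds, matching max-age above
echo $((60 * 60 * 24 * 365 * 2))   # → 63072000
```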

On caniuse.com we can check browser support for the HSTS mechanism.

There are still some notable issues when using HSTS:
1. HSTS treats all certificate errors as fatal. Once the primary domain uses HSTS, the browser drops connections to any site under that domain with an invalid certificate.
2. The first visit still uses HTTP before HSTS can take effect. How can that first visit be secured? It can be mitigated by preloading: browser vendors ship a built-in list of websites known to support HSTS. Google currently provides an online registration service at https://hstspreload.appspot.com/.
3. How is HSTS revoked? Setting Strict-Transport-Security: max-age=0 clears the cache, but it only takes effect once the browser visits the website again and sees the updated header.

2.2 Proper use of session resumption

Session resumption means that when a fully negotiated connection is torn down, the client and server keep the session's security parameters for a period of time. On a subsequent connection, both sides use an abbreviated handshake to resume the previously negotiated session, greatly reducing TLS handshake overhead.
There are two resumption mechanisms: session IDs and session tickets. With session IDs, the server assigns the session a unique identifier and caches the session state; during the first full negotiation it returns the session ID to the client in the ServerHello message. A client wishing to resume puts that ID into the ClientHello of its next handshake, and once the server acknowledges it, both sides encrypt using the previously negotiated master secret. With session tickets, all session state is kept on the client (similar to an HTTP cookie).
1. Configuring session tickets is simple:

ssl_session_tickets on;
ssl_session_ticket_key /usr/local/nginx/ssl_cert/session_ticket.key;

The key is generated via OpenSSL:

openssl rand -out session_ticket.key 48

Note that the key must be kept identical across the cluster. Also note that before using session tickets, you need to enable cipher suites that support forward secrecy.
2. Configuring session IDs. Since session state is stored on the server, how do we ensure a good session ID hit rate across a cluster? The simplest approach is to use ip_hash in the load balancer's distribution policy so that the same client always lands on the same node, but that is not flexible enough. A better option is distributed caching: store session state in a cluster-shared Redis.
For manipulating TLS session data in Nginx, refer to the OpenResty directive ssl_session_fetch_by_lua_block; see https://github.com/openresty/lua-nginx-module#ssl_session_store_by_lua_file for details.
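The ticket-key generation above can be sketched and verified in one step (Nginx expects the ssl_session_ticket_key file to be exactly 48 or 80 bytes; 48 here, matching the command in point 1):

```shell
# Generate 48 random bytes for the session ticket key, then confirm the size
openssl rand -out session_ticket.key 48
wc -c < session_ticket.key   # → 48
```

Distribute this exact file to every node (and rotate it periodically), since a ticket encrypted by one node must be decryptable by all the others.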

2.3 Rational use of OCSP stapling

OCSP (Online Certificate Status Protocol) is used to query a certificate's revocation status. Real-time OCSP queries add client-side overhead, so consider OCSP stapling: a protocol feature that allows revocation information to be included in the TLS handshake itself. With stapling enabled, the server performs the revocation check on the client's behalf and returns all the information during the handshake. This adds less than 1 KB to the handshake but eliminates the time the user agent would spend independently verifying revocation status.
OCSP stapling can be enabled in several ways. One is online validation, where the server itself contacts the certificate-validation server; Nginx issues this request on every restart, so network problems can make Nginx slow to start.

# Enable OCSP stapling
ssl_stapling on;
# valid caches the response for 5 minutes; resolver_timeout is the network timeout
resolver 8.8.8.8 8.8.4.4 223.5.5.5 valid=300s;
resolver_timeout 5s;
# Verify the OCSP response against the certificate it applies to
ssl_stapling_verify on;
ssl_trusted_certificate /usr/local/nginx/ssl_cert/trustchain.crt;

For better reliability, you can also maintain the response file manually, so that Nginx reads the OCSP response directly from disk instead of pulling it from the service provider:

# Enable OCSP stapling, reading the response from a local file
ssl_stapling on;
ssl_stapling_file /usr/local/nginx/oscp/stapling_file.ocsp;
# Verify the OCSP response against the certificate it applies to
ssl_stapling_verify on;
ssl_trusted_certificate /usr/local/nginx/ssl_cert/trustchain.crt;
2.4 Reasonable configuration of the TLS protocol

The first step is to specify the TLS protocol versions, discarding the unsafe SSLv2 and SSLv3:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Second, it is recommended to enable ssl_prefer_server_ciphers, which tells Nginx to prefer the server's cipher list during the TLS handshake, so that the server rather than the client chooses the algorithm:

ssl_prefer_server_ciphers on;

Then choose an optimal cipher suite list and order of preference; for specifics, refer to Mozilla's https://wiki.mozilla.org/Security/Server_Side_TLS. Preference goes to suites that support forward secrecy, ordered by performance:

"ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-ECDSA-AES128-GCM-SHA256         ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-GCM-SHA384 DHE-RSA-AES128-GCM-SHA256         DHE-RSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES128-SHA256 ECDHE-RSA-AES128-SHA256 ECDHE-ECDSA-AES128-SHA         ECDHE-RSA-AES256-SHA384 ECDHE-RSA-AES128-SHA ECDHE-ECDSA-AES256-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA         DHE-RSA-AES128-SHA256 DHE-RSA-AES128-SHA DHE-RSA-AES256-SHA256 DHE-RSA-AES256-SHA ECDHE-ECDSA-DES-CBC3-SHA         ECDHE-RSA-DES-CBC3-SHA EDH-RSA-DES-CBC3-SHA AES128-GCM-SHA256 AES256-GCM-SHA384 AES128-SHA256 AES256-SHA256         AES128-SHA AES256-SHA DES-CBC3-SHA !DSS";

Finally, if mutual (two-way) authentication is required, you can turn on Nginx client certificate verification. Nginx will then accept only requests carrying a valid client certificate; if the request carries no certificate, or the certificate check fails, Nginx returns a 400 error response.

# Require client certificate authentication
ssl_verify_client on;
# Maximum depth of the chain from the client certificate to a root certificate
ssl_verify_depth 3;
# CA certificates allowed to sign client certificates
ssl_client_certificate trustchain.crt;
# Additional CA certificates required in the full chain
ssl_trusted_certificate root-ca.crt;
# Certificate revocation list
ssl_crl revoked-certificates.crl;
2.5 Rational use of False Start

TLS False Start means the client sends application data (such as an HTTP request) along with its ChangeCipherSpec and Finished messages, and the server returns application data (such as the HTTP response) as soon as its side of the TLS handshake completes. The application data is thus sent without actually waiting for the handshake to finish, hence "False Start".

To use False Start, the server must meet two conditions:
1. It must support NPN (Next Protocol Negotiation, the predecessor of ALPN) or ALPN (Application-Layer Protocol Negotiation);
2. It must adopt cipher suites that support forward secrecy.
A note on what forward secrecy (perfect forward secrecy) means: each key can protect only the data it was negotiated for, the material used to generate keys changes per session, and no extra keys are derived from it, so cracking one key does not affect the security of any other key.

2.6 Rational use of SNI

SNI (Server Name Indication) lets the client include the requested hostname when it initiates the SSL handshake (in the ClientHello phase), allowing the server to switch to the correct virtual host and return the matching certificate. This solves the problem of serving multiple domains from one IP (virtual host).
Nginx supports SNI automatically. When a client does not support the feature, Nginx normally returns the default site's server certificate. In the example below, a client without SNI receives serversuning.pem, with no guarantee that the certificate matches the requested host, which causes avoidable trouble. Mobile development should therefore require the SNI extension to be enabled.

server {
    listen 443 ssl default_server;
    ssl_certificate /usr/local/nginx/cert/serversuning.pem;
    ssl_certificate_key /usr/local/nginx/cert/suning.key;
    ...
}
server {
    listen 443 ssl;
    server_name sit1.suning.com;
    ssl_certificate /usr/local/nginx/cert/serversuningcom.pem;
    ssl_certificate_key /usr/local/nginx/cert/suningcom.key;
    ...
}
server {
    listen 443 ssl;
    server_name sit1.suning.cn;
    ssl_certificate /usr/local/nginx/waf/serversuningcn.pem;
    ssl_certificate_key /usr/local/nginx/waf/suningcn.key;
    ...
}
2.7 Proper use of HTTP 2.0

The author has covered HTTP/2 in detail in "Detailed analysis of the principle of HTTP 2.0" and "Nginx implementation of HTTP/2: principle, practice and data analysis", so it is not expanded here. One caution: Nginx began supporting the HTTP/2 protocol in the 1.9.x series, but each release carries bug fixes, so enable it carefully; consult the Nginx changelog for specifics.

2.8 Rational use of an SSL hardware accelerator card

An SSL hardware accelerator card can take over from the CPU during TLS handshake computation. The Cavium accelerator card is recommended; its engine can be integrated as an Nginx module and supports both physical and virtual machine environments. Interestingly, our test results in virtual machine environments were better than on physical machines. Enabling Nginx's asynchronous request mode for the Cavium engine improves utilization further.
Below are the performance figures we measured at 20% CPU, using TLS 1.2 with the ECDHE-RSA-AES128-SHA256 suite over short-lived HTTPS connections. With Cavium, physical machine throughput improved by 325% and virtual machine throughput by 588%.

Environment                        Traffic type   TPS    Latency (s)
Virtual machine                    HTTPS          172    0.066
Virtual machine + Cavium card      HTTPS          1012   0.066
Physical machine                   HTTPS          832    0.066
Physical machine + Cavium card     HTTPS          2708   0.059
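The quoted improvement ratios follow directly from the TPS column; as a sanity check (integer arithmetic):

```shell
# TPS with Cavium divided by TPS without, expressed as a percentage
echo "physical: $((2708 * 100 / 832))%  virtual: $((1012 * 100 / 172))%"
# → physical: 325%  virtual: 588%
```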

That said, hardware acceleration is a matter of opinion. It certainly improves performance, but the equipment is expensive and hard to scale, a luxury for most Internet companies. To borrow Facebook's statement on hardware acceleration:
"We have found that current software-based TLS implementations already run fast enough on ordinary CPUs to handle large numbers of HTTPS requests without specialized cryptographic hardware. We serve all of our HTTPS traffic using software running on commodity hardware."

Finally, we summarize the server-side Nginx configuration in a template, for reference only:

server {
    listen 443 ssl http2 default_server;
    server_name site1.suning.com;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    ssl_certificate /usr/local/nginx/cert/serversuningcom.pem;
    ssl_certificate_key /usr/local/nginx/cert/suningcom.key;
    # Allocate 10MB of shared memory so worker processes share TLS session state
    ssl_session_cache shared:SSL:10m;
    # Session cache expiration: 24h
    ssl_session_timeout 1440m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-ECDSA-AES128-GCM-SHA256 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES256-GCM-SHA384 ECDHE-RSA-AES256-GCM-SHA384 DHE-RSA-AES128-GCM-SHA256 DHE-RSA-AES256-GCM-SHA384 ECDHE-ECDSA-AES128-SHA256 ECDHE-RSA-AES128-SHA256 ECDHE-ECDSA-AES128-SHA ECDHE-RSA-AES256-SHA384 ECDHE-RSA-AES128-SHA ECDHE-ECDSA-AES256-SHA384 ECDHE-ECDSA-AES256-SHA ECDHE-RSA-AES256-SHA DHE-RSA-AES128-SHA256 DHE-RSA-AES128-SHA DHE-RSA-AES256-SHA256 DHE-RSA-AES256-SHA ECDHE-ECDSA-DES-CBC3-SHA ECDHE-RSA-DES-CBC3-SHA EDH-RSA-DES-CBC3-SHA AES128-GCM-SHA256 AES256-GCM-SHA384 AES128-SHA256 AES256-SHA256 AES128-SHA AES256-SHA DES-CBC3-SHA !DSS";
    ssl_session_tickets on;
    ssl_session_ticket_key /usr/local/nginx/ssl_cert/session_ticket.key;
    # TLS access log format
    log_format ssl "$time_local $server_name $remote_addr $connection $connection_requests $ssl_protocol $ssl_cipher $ssl_session_id $ssl_session_reused";
    access_log /usr/local/nginx/logs/access.log ssl;
    ssl_stapling on;
    ssl_stapling_file /usr/local/nginx/oscp/stapling_file.ocsp;
    ssl_stapling_verify on;
    ssl_trusted_certificate /usr/local/nginx/ssl_cert/trustchain.crt;
    root html;
    index index.html index.htm;
    location / { ... }
    error_page 403 /403.html;
    location = /403.html { root /usr/local/nginx/waf/403/default; }
    error_page 500 502 503 504 /502.html;
    location = /502.html { root /usr/local/nginx/waf/403/default; }
}
3. Client Performance Optimization

For HTTPS requests in the app, we recommend designing an agent-layer SDK for the client. The agent layer serves two main purposes: (1) forwarding requests to the server over the HTTP/2 protocol, and (2) calling the server-side HTTPDNS interface to obtain accurate address resolution information.

3.1 Mobile HTTP/2 accelerator agent

For example, Android can use the OkHttp 3 component and iOS can use NSURLSession; both support the HTTP/2 protocol. (The author has also translated OkHttp's documentation, which describes it as an HTTP & HTTP/2 client for Android and Java applications.) Code-level development and usage details are not expanded in this article. Note that acceleration is only achieved when both client and server communicate over HTTP/2, and the data bears out HTTP/2's effect.

3.2 HTTPDNS to resolve DNS hijacking

DNS hijacking tampers with the user's resolution results, diverting user traffic to a third party for illicit profit. In addition, some carriers mirror sites on their internal networks to avoid inter-network settlement fees, then use DNS hijacking to send users straight to the mirror.

Once the whole site runs on HTTPS, a hijacker lacking the certificate and private key cannot achieve its illegal purpose against hijacked users, but it also cannot answer the user's request. This is a double-edged sword, because ordinary users will simply conclude that your site does not load.

So full-site HTTPS only limits the damage from DNS hijacking; solving it requires another remedy. Fundamentally, hijacking is possible because we cannot guarantee the carrier's LocalDNS has not been compromised (carriers being what they are), so can carrier resolution be bypassed? On the PC side we cannot, but in a mobile app we can adopt an HTTPDNS solution.
The HTTPDNS scheme: send resolution requests over the HTTPS protocol (addressed by IP, not domain name) to port 443 of the HTTPDNS cluster (the authoritative DNS), instead of traditional DNS protocol requests to port 53 of a DNS server. In other words, the client asks for resolution over HTTPS, the server returns the IP for the domain, and the client then issues its API requests directly to that IP rather than by domain name. As a fallback (when HTTPDNS resolution fails), it reverts to traditional LocalDNS resolution.
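The resolve-then-fall-back behavior can be sketched in a few lines of shell. resolve_httpdns and resolve_localdns are hypothetical stand-ins for the real resolution calls, stubbed here so that the HTTPDNS path fails and the LocalDNS fallback answers:

```shell
resolve_httpdns() { return 1; }              # stub: pretend the HTTPDNS call failed
resolve_localdns() { echo "203.0.113.10"; }  # stub: pretend LocalDNS answered
# Try HTTPDNS first; fall back to LocalDNS on failure
ip=$(resolve_httpdns "api.example.com") || ip=$(resolve_localdns "api.example.com")
echo "$ip"   # → 203.0.113.10
```

In a real SDK the first function would be an HTTPS GET to the HTTPDNS cluster's IP and the second a normal system resolver call; the control flow is the point here.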

The advantages of the HTTPDNS scheme are:
1. It prevents LocalDNS hijacking.
2. Average access latency drops: domain resolution time is saved because subsequent requests go directly to the IP, and algorithms can be used to pick the best-performing server IP (lowest ping latency) and cache it in a client-local address library.
3. User connection failure rates drop.
Vendors currently offering server-side HTTPDNS capabilities include DNSPod and others, generally in the form of an agreed interface the client calls to get resolution results.

Implementing the client is not as easy as one might imagine; replacing the domain name with an IP raises many issues, for example:
1. Certificate verification when connecting directly to an IP over HTTPS;
2. HTTPDNS behavior behind a proxy;
3. Cookie handling when accessing by IP.
These are not expanded here; interested readers can refer to:
"Android uses okhttp support Httpdns" http://blog.csdn.net/sbsujjbcy/article/details/50532797
"Android okhttp Best Practices for Httpdns (non-interceptors)" http://blog.csdn.net/sbsujjbcy/article/details/51612832
CNSRE/HTTPDNSLib https://github.com/CNSRE/HTTPDNSLib

That completes the performance optimization of the TLS layer. But is this the end? Does the performance gain stop there? Of course not. On the one hand, we should also push TCP-layer performance as far as possible to complement the TLS work, including initial congestion window tuning, preventing slow start after idle, keep-alive, and so on.
On the other hand, keep an eye on newer results and developments, such as TLS 1.3's pursuit of 0-RTT and the QUIC protocol.

