HTTP Knowledge Point Frontend

Source: Internet
Author: User
Tags: response, code, set, time

HTTP knowledge points that the front-end must understand

This article will not go into much detail on the HTTP message format. As front-end developers, we need to understand the back-and-forth between request and response, the relationship between request headers and response headers and the meaning of each field, the performance optimization points we can observe while static resources load, and how to work around common request errors, and, more importantly, how to handle these topics calmly in an interview.

1. Simple cross-domain solution

  

When a page makes a request to a different domain, the browser verifies the Origin request header against the response. If nothing is set, under a different domain name (or from a local page) the browser still sends the request to the server, and the server still sends the response back to the client, but for security the browser blocks it and reports an error about the missing header information. At this point the backend needs to add 'Access-Control-Allow-Origin': '*' to the response headers, telling the browser: I allow cross-domain requests, so raise no error and hand the value to the requester, and the data can be read safely. This also means any domain name can send requests; to allow cross-domain access only from specific domains, set 'Access-Control-Allow-Origin' to the specified domain name instead of '*'.
2. Complex cross-domain solutions
A cross-domain request counts as simple when: 1. the request method is one of HEAD, GET, POST; 2. the HTTP headers do not go beyond these fields: Accept, Accept-Language, Content-Language, Last-Event-ID, Content-Type (limited to the three values application/x-www-form-urlencoded, multipart/form-data, text/plain).

If those limits are not exceeded, the backend only needs to provide the allowed cross-domain origin. If the request method goes beyond the three above, the backend needs to add, for example, 'Access-Control-Allow-Methods': 'PUT'. Again for security, the browser forbids cross-domain requests using any method that has not been explicitly allowed.

Similarly, complex requests need backend settings that allow certain cross-domain behaviors. Typical cases:

    

    • Add a custom header
fetch('http://127.0.0.1:8887', {
  method: 'PUT',
  headers: {
    'x-header-f': '1234',
  }
})

Error message: Failed to load http://www.pilishou.com: Request header field x-header-f is not allowed by Access-Control-Allow-Headers in preflight response.

Solution: the server needs to allow those custom headers through for cross-domain use: 'Access-Control-Allow-Headers': 'X-header-f'.

    • Use a Content-Type outside the three allowed values
fetch('http://127.0.0.1:8887', {
  method: 'PUT',
  headers: {
    'x-header-f': '1234',
    'content-type': 'json'
  }
})

Error message: Failed to load http://www.pilishou.com: Request header field Content-Type is not allowed by Access-Control-Allow-Headers in preflight response.

Solution: the server also needs to allow this request header for cross-domain use: 'Access-Control-Allow-Headers': 'Content-Type'.
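Both fixes can be sketched as a single server-side allow-list (a hedged sketch; the header names come from the examples above, the helper name is illustrative):

```javascript
// Build the preflight (OPTIONS) response headers that let the custom
// 'x-header-f' header and the non-simple Content-Type pass validation.
function buildPreflightHeaders(allowedHeaders) {
  return {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Headers': allowedHeaders.join(', '),
  };
}

const preflightHeaders = buildPreflightHeaders(['X-header-f', 'Content-Type']);
```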

Complex cross-domain requests and the preflight request

For a non-same-origin complex request, the browser first sends an OPTIONS request, the so-called preflight. It is a tentative request: if the server's response shows that the corresponding request method or headers are allowed, the browser then sends the real request, so two requests in total reach the backend. The server also answers the OPTIONS request with data, but the browser keeps it from the page and reports the corresponding error if the required cross-domain settings are not detected.

During joint front-end and back-end debugging, reduce the number of preflight validations: every non-simple request otherwise triggers a preflight, and a preflight costs time and resources. It works like real-name verification: once verified, you do not need to verify again for a certain period. If the current request passes its first validation, no second validation is needed within a time window, which is controlled by returning 'Access-Control-Max-Age': '860000'. Within that time the browser sends the real request directly instead of probing first with an OPTIONS preflight.
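A minimal sketch of a preflight response that also caches its own result (the method and header values are taken from the examples above; the variable name is illustrative):

```javascript
// Preflight (OPTIONS) response headers: the browser may reuse this
// preflight result for the given number of seconds, skipping the
// extra OPTIONS round trip on subsequent non-simple requests.
const preflightResponseHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'PUT',
  'Access-Control-Allow-Headers': 'X-header-f',
  'Access-Control-Max-Age': '860000', // seconds the preflight result is cached
};
```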

Cache-Control usage scenarios and performance optimization

cache-control is the cache flag for static resources pulled from the server.

Front-end engineers also need to know the several modes that can be set for cache-control:

    1. max-age=10000 (in seconds, set according to demand)
    2. no-cache (validate with the server on each request; needs to work with ETag / Last-Modified)
    3. no-store (pull a fresh resource from the server on every request)
    4. private (private; intermediate proxies must not cache)
    5. public (shared; if the local copy has expired, a proxy cache that still holds an unexpired copy may serve it)
Max-age

When resources load, the browser automatically stores them in memory, but the in-browser expiry is controlled by the browser's own mechanism. When Nginx serves static resources and the page is refreshed, the browser asks the server again whether the resource has expired, so the cache time needs to be made explicit. Once you specify a strong cache with max-age, the browser simply reuses the copy from memory or disk when it loads the same resource file again.

To achieve this, the server needs to set 'Cache-Control': 'max-age=<time in seconds>' on the returned resource. When the page is refreshed again within the set time (without clearing the cache), the cached resources are pulled from memory.
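A small sketch of building that header (the helper name and the one-day lifetime are illustrative assumptions):

```javascript
// Build the strong-cache header for a static resource.
// 86400 seconds = one day; pick the lifetime per resource type.
function strongCacheHeader(seconds) {
  return { 'Cache-Control': `max-age=${seconds}` };
}

const oneDay = strongCacheHeader(86400);
```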

No-cache

no-cache literally reads as "do not cache", which easily confuses people. What it actually means is: on every request for the static resource, do an expiry validation with the server. Usually this validation needs to work together with ETag and Last-Modified (discussed below). If the validation shows the resource has not expired, the server returns a 304 status code, telling the browser to reuse its cached copy.

No-store

no-store means every request pulls the latest resource from the server. Even if you also set max-age, no-store has the highest priority: max-age does not take effect, and the latest resource is still fetched from the server every time.

Private vs public

A resource request is not always sent directly to the origin server; it may pass through proxy servers such as a CDN or Nginx. If the response is marked public, all of those proxy servers may cache it too. For example, s-maxage takes effect in proxy caches: if the local max-age has expired but the proxy cache has not, the proxy can tell the browser that the locally expired copy is still usable. With private, intermediate proxy servers do not cache, and validation goes directly from the browser to the origin server.

Cache validation: Last-Modified and ETag

Last-modified

Last-Modified is the file's last modification time; servers generally keep a record of when a file was modified. When Nginx serves static resources, it returns a Last-Modified header. On the next request the browser sends the corresponding If-Modified-Since (or If-Unmodified-Since) request header back to the server, telling it the modification time it received last time. But Last-Modified is only accurate to the second, which is not precise enough in some cases.

Etag

ETag is a more rigorous validation, based mainly on a data signature: each version of the data has its own unique signature, and once the data is modified, another unique signature is generated. The most typical practice is to hash the content. The browser sends the signature back to the server in If-Match or If-None-Match; after receiving it, the server compares its own signature with the one the browser sent. This makes up for Last-Modified only being accurate to the second.

Last-Modified and ETag with no-cache

Usually, when Cache-Control is no-cache, the browser still caches the resource, but the server performs an expiry validation on each request. If the server returns a 304 status code, the browser can reuse its cache; otherwise the server returns the fresh data.

Policy mechanism of cookies

A cookie works like an identity card shared between the server and the client. Once the backend sets a cookie in the response headers, it appears in the response's Set-Cookie data and in the browser's Application → Cookies panel. From then on, the cookie information for the current domain is carried in the request headers of every request sent.

Key-value pair setting

'Set-Cookie': 'id=1'

Set Expiration Time

Normally, when no expiration time is set, the cookie expires when the browser is closed. We can set a cookie's expiration time with max-age or expires.

Non-accessible cookies

If HttpOnly is not set, the cookie can be read through document.cookie. In some circumstances, for security, you can set HttpOnly so the cookie is not available via document.cookie.

Secure Cookie under HTTPS

If Secure is set, the cookie is only stored (visible under Application → Cookies) when served over HTTPS. The Set-Cookie field still appears in the response, but when the browser recognizes that the service is not HTTPS, it is simply ignored.
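The attributes discussed above can be sketched as one Set-Cookie builder (the function name and example values are illustrative assumptions):

```javascript
// Assemble a Set-Cookie header value with the attributes discussed above.
function cookieHeader(name, value, { maxAge, httpOnly, secure } = {}) {
  const parts = [`${name}=${value}`];
  if (maxAge !== undefined) parts.push(`Max-Age=${maxAge}`); // lifetime in seconds
  if (httpOnly) parts.push('HttpOnly'); // hidden from document.cookie
  if (secure) parts.push('Secure');     // only sent over HTTPS
  return parts.join('; ');
}

const value = cookieHeader('id', '1', { maxAge: 3600, httpOnly: true, secure: true });
```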

Passing cookies between second-level domains

Here is an example:

All of a company's internal systems go through one login system, often called SSO (single sign-on). Suppose the login lives at the second-level domain sso.pilishou.com while your own development environment is the localhost:9999 port. After a successful login, the cookie lives under the sso.pilishou.com domain; requests sent from local 127.0.0.1 simply cannot carry sso.pilishou.com's cookie information, so no cookie appears in the request headers. You can map 127.0.0.1 to web.pilishou.com.

But a problem remains: sso and web are both second-level domains, and under web you still cannot read sso's cookie. The solution is that, on successful SSO login, the backend must set the cookie's Domain to the main domain, pilishou.com.

Under the web second-level domain you can then read the cookie information set after the successful sso request; when HttpOnly is not set, document.cookie returns the cookie you want. But when sending requests, you will find the cookie is still not carried in the request headers: in a fetch request you need to set credentials: 'include', meaning cookies are allowed on cross-domain requests, and then you will find the cookie carried in the request headers.

After all these settings, during joint debugging the backend also needs to match your behavior: the backend engineer must add 'Access-Control-Allow-Credentials': 'true' to the response headers to allow cross-domain cookies.

But yet another problem appears (there really are a lot, and no step can be skipped): the browser now reports an error. When cross-domain cookies are enabled, the response header Origin must not be set to '*'; only a specified domain may be allowed through. So the backend engineer also needs to change '*' to your specified domain, web.pilishou.com.
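The whole matching pair of settings can be sketched like this (web.pilishou.com and sso.pilishou.com are the example domains from the text; the variable name is illustrative):

```javascript
// Server side: when credentials are allowed, the origin must be a
// specific domain, never '*'.
const credentialedCorsHeaders = {
  'Access-Control-Allow-Origin': 'http://web.pilishou.com',
  'Access-Control-Allow-Credentials': 'true',
};

// Client side, in the browser:
// fetch('http://sso.pilishou.com/api', { credentials: 'include' })
```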

If cookies come up in an interview, this whole flow should be enough to thoroughly satisfy the interviewer.

HTTP persistent connections and various architectural approaches to performance optimization

In the past, without packaging tools (or before they were applied), a large project would have a pile of JS files and a pile of CSS files, causing all sorts of problems: chaotic resource imports, slow resource loading, and pages that rendered but did not respond to clicks for a while. Understanding this requires looking at how HTTP requests resources over the TCP connection created after the three-way handshake.

Each browser's execution strategy differs, so let's talk only about Chrome. Open the developer tools, click Network, and right-click the Name column header to show the Connection ID. Chrome can create six concurrent connections at a time, but those six connections block subsequent resource requests: if the first six resource files are large, later requests queue and wait. On an unstable network, the HTML and CSS may have loaded and rendered while the JS is still last in the queue; if the speed suddenly drops, clicking the page at that moment gives no response at all, because the JS has not even loaded yet.

To verify this, open a site in Network with the speed throttled to 2G mode. You will find that only six connections appear at the beginning, and not all at once, because creating a TCP connection takes time for the three-way handshake. Once the six connections are created, later requests proceed serially: only when a connection finishes its request is it reused by the next request in the queue, rather than a new TCP connection being created. As for when a TCP connection closes, either the browser negotiates it with the server or a shutdown timeout is set, after which an idle connection is closed. Watching the Connection ID column, you will see only six IDs appear, with the rest of the requests reusing them; resources fetched from other sites open a new Connection ID.

Solution

So now, for SPA pages, the approach taken is to merge resources: the CSS and JS are combined. A Vue build usually produces four files: vendor.js, app.js, manifest.js, app.css.

This lets the browser download a project's main files in parallel over the TCP connections in one go, which helps both performance and the unresponsive-click problem. Let me briefly explain why the build is split into these four files.

1. vendor.js generally holds the node_modules code, which rarely changes, so the browser can cache it long-term.
2. app.js holds the business code. Business code iterates very frequently in a company, so when users pull resources they only need to pull the new app resources; app.js can also be split by module so that only the changed module is updated.
3. manifest.js is the runtime file; whenever app.js or vendor.js changes, the manifest file changes, so it is also updated separately.
4. app.css is a trade-off: even a small change pulls the whole file again, but merging saves request count. Whether to merge can be weighed per project.

File updates are actually triggered not by cache settings but by a file hash appended to each JS or CSS filename. The packaging tool generates this hash for us; once a file changes, a new hash is generated, and when the browser loads the resource and cannot find a matching cached file, it re-requests it from the server.
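A hedged sketch of how that hash-in-filename scheme is typically configured (assuming webpack 5 naming conventions; the variable name and hash length are illustrative):

```javascript
// Fragment of a webpack output config: [contenthash] embeds a hash of
// each file's content into its name, e.g. app.3f2a1c9d.js. Change the
// file and the name changes, so stale browser caches are bypassed.
const outputConfig = {
  filename: '[name].[contenthash:8].js',
  chunkFilename: '[name].[contenthash:8].js',
};
```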

Multi-project cache reuse, and choosing when to use it

We adopted the scheme above because of browser request-speed limits and TCP connection limits, but each scheme targets different scenarios and architectures. For back-office management projects, a company usually has unified engineering, so the projects share one (or a few) common setups with the same base files, upgraded per project as needed. For a company's internal systems, the best approach is to give up some first-load performance and use the cache for reuse across multiple projects.

Usually a Vue project's architecture integrates vue.js, vuex.js, router.js and some common internal JS files.

As an example,

A company's internal projects generally have three environments, plus your local debugging makes four. If you bundle these files all into vendor, then after every release, or after switching environments, the resource cannot be reused across the four environments, because the domain names differ and the browser cannot share the cache. Yet these files are the same across all projects and all environments; loaded once, they could share the cached resource in any environment and any project.

1. We can serve the files mentioned above from a CDN.
2. We can also put them in a common directory under one domain name and reuse them from there.

From memory cache and from disk cache

    1. from memory cache: cache pulled from memory
    2. from disk cache: cache pulled from disk

After resources are pulled (again, this explanation is for Chrome), the browser caches them on disk and in memory: CSS files are cached on disk, while HTML, JS, images and so on are cached in both memory and disk. When the page is refreshed, unless a resource's response headers specify cache-control: no-cache or no-store, the resource is pulled directly from the cache: the Size column shows "from memory cache", and CSS files show "from disk cache". With no-cache, the cache is only read after a validation against the origin server returns 304, confirming it has not expired.

Setting <meta http-equiv="Cache-Control" content="no-cache">

At this point the front-end and back-end colleagues have finished joint debugging and deployed to the test environment, asking the tester to test.

Tester: one word on your page is wrong, change it and redeploy so I can test again. The tester closes the browser and goes to scroll the feed for a bit.

After the front end hammers away for a while... an operation as fierce as a tiger.

Front end: all right, go test it, I have deployed. The front end also closes the browser, silently grumbling at the tester, and goes to scroll the feed first.

After the tester's operations... the browser is opened, the address is entered, Enter is pressed...

Tester: you did not change it, why is there no effect?

The front end also opens the browser at this point, enters the address, takes a look... what is going on?! And starts to doubt life... "I clearly changed it. It does not work." Then, fierce as a tiger again, opens the file to check, and redeploys.

Problem summary

The root cause, on analysis, is a caching problem. The browser automatically caches the HTML page, but on a normal refresh, with Nginx serving static resources, the browser re-validates the resource with the server; if nothing changed, a 304 is returned and the cache is reused.

When the browser process is closed, the in-memory resources are purged along with it; when the browser is opened again, the cache is read from disk. Without <meta http-equiv="Cache-Control" content="no-cache">, reopening the browser reproduces the problem: the first load of the HTML page reads from disk ("from disk cache"), so the old resource is used. That is the root of the problem. By instead validating the resource against the origin server every time, reopening the browser no longer shows resources that were not updated in time.

Redirect pitfalls

A redirect response carries a Location field, for example /list, which tells the browser to redirect to the /list page. The response code can be 302 or 301.

301 for permanent Redirection

A more common 301 scenario is a domain-name jump. For example, visiting http://www.baidu.com jumps to https://www.baidu.com: after the request is sent, a 301 status code is returned together with a Location header carrying the new address, and the browser then visits that new address.

302 for temporary redirection

The difference between 302 and 301: with 302, re-visiting pulls the resource from the server again before redirecting. With 301, if a cached file exists, the browser redirects directly using the Location from the cached response headers; if the server later changes the redirect Location, only clearing the cache makes the browser pull the new redirect. So 301 needs to be used with care.
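A minimal sketch of issuing either redirect on a Node-style response object (the function name is illustrative):

```javascript
// Issue a 301 (permanent, may be cached by the browser) or
// 302 (temporary, re-fetched from the server) redirect.
function redirect(res, location, permanent) {
  res.statusCode = permanent ? 301 : 302;
  res.setHeader('Location', location);
  res.end();
}
```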

Understanding CSP (Content-Security-Policy), the content security policy

CSP exists to make our web pages more secure:

1. Restrict where resources may be fetched from 2. Report resource fetches that go beyond the policy

You can set the policy for all resource types with default-src, or set the scope per resource type:

1. connect-src: the resources we connect to
2. style-src: style resource requests
3. script-src: script resource requests
...and so on

It is set via the 'Content-Security-Policy' response header.

In some cases XSS attacks inject code through inline scripts, and this can be disabled by setting 'Content-Security-Policy': 'default-src http: https:'. After setting it, the page reports the error: Refused to execute inline script because it violates the following Content Security Policy directive: "default-src http: https:". Either the 'unsafe-inline' keyword, a hash ('sha256-9aPvm9lN9y9aIzoIEagmHYsp/hUxgDFXV185413g/Zc='), or a nonce ('nonce-...') is required to enable inline execution. Note also that 'script-src' was not explicitly set, so 'default-src' is used as a fallback.

Disallowing references to external scripts:

You can set 'Content-Security-Policy': "default-src 'self'". Referencing an external resource will then report the error: Refused to load the script 'http://static.ymm56.com/common-lib/jquery/3.1.1/jquery.min.js' because it violates the following Content Security Policy directive: "default-src 'self'". Note that 'script-src' was not explicitly set, so 'default-src' is used as a fallback.

If an external address needs to be allowed, add the specified address to default-src.
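The three policy variants above can be sketched as header values (the CDN host is the one from the example error message; the variable names are illustrative):

```javascript
// Content-Security-Policy values for the three cases discussed above.
const blockInlineOnly = 'default-src http: https:';            // disables inline <script>
const sameOriginOnly = "default-src 'self'";                   // same-origin resources only
const sameOriginPlusCdn = "default-src 'self' http://static.ymm56.com"; // allow one external host

// e.g. res.setHeader('Content-Security-Policy', sameOriginOnly);
```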

The rest can be set according to the Content-Security-Policy documentation.

