Click on it, then click "Cookies" on the right to see the cookies sent with the request.
Cookie Analysis
In addition to the two cookies mentioned above, the other request header fields can simply be copied from the request headers captured while doing a manual transfer. The two cookies are kept as parameters because cookies have a lifetime and must be refreshed when they expire, and different accounts log in with different cookies.
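Keeping the two login cookies as function parameters makes it easy to refresh them on expiry or switch accounts. A minimal sketch (the cookie names BDUSS and STOKEN are an assumption about which two login cookies are meant; the other header values are placeholders for whatever your packet capture shows):

```python
def build_headers(bduss, stoken):
    """Build request headers with the two login cookies as parameters,
    so they can be refreshed when they expire or swapped for another account.
    BDUSS/STOKEN as the cookie names is an assumption."""
    return {
        # Static fields copied from a manually captured transfer request.
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Referer": "https://pan.baidu.com/",
        # The two dynamic cookies stay as parameters instead of being hard-coded.
        "Cookie": f"BDUSS={bduss}; STOKEN={stoken}",
    }
```

The rest of the headers rarely change, so only the cookies need to be supplied per run.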
Parameter Analysis
Next, analyze the parameters: click "Params", to the right of "Cookies", to see them. As follows:
Crawl shareid, from and filelist, and send the transfer request to the network disk
Take the resource link above as an example (it may have expired by the time you read this, but that doesn't matter: other share links have the same structure). First visit it manually in a browser and press F12 to open DevTools, then analyze the page source to find where the resource information we want lives. DevTools has a search function, so search for "shareid" directly.
The search finds four occurrences of "shareid". The first three are unrelated to this resource (they belong to other shared resources); the last one lands in the final script block of the HTML file. Double-click it to see the formatted JS code, and we find that all the information we want is inside. An excerpt follows:
You can see these two lines:
The yunData.FILEINFO structure is as follows; you can copy and paste it into json.cn to see it more clearly.
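Instead of pasting it into json.cn, you can also parse the FILEINFO value directly in Python, since it is a JSON array of file objects. The field names below (fs_id, path, server_filename) are assumptions based on typical share pages and may differ:

```python
import json

# A sample of what yunData.FILEINFO might look like (field names assumed).
fileinfo_raw = '[{"fs_id": 1000123, "path": "/movie/demo.mp4", "server_filename": "demo.mp4"}]'

# Parse the JSON array; each element describes one shared file.
files = json.loads(fileinfo_raw)
for f in files:
    print(f["fs_id"], f["server_filename"])
```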
Knowing where these three parameters live, we can extract them with regular expressions. The code is as follows:
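As a minimal sketch of such an extraction (the exact variable names yunData.SHAREID, yunData.SHARE_UK and yunData.FILEINFO, and the sample values, are assumptions about the page source, not verified against a live page):

```python
import re

# Sample page source fragment; real pages embed these in a <script> block.
html = '''
yunData.SHAREID = "3927175953";
yunData.SHARE_UK = "140959320";
yunData.FILEINFO = [{"fs_id": 1000123, "path": "/demo.mp4"}];
'''

def extract_params(page):
    """Pull shareid, from (the sharer's uk) and filelist out of the page source."""
    shareid = re.search(r'yunData\.SHAREID\s*=\s*"(\d+)"', page).group(1)
    from_uk = re.search(r'yunData\.SHARE_UK\s*=\s*"(\d+)"', page).group(1)
    # re.S lets the bracketed array span multiple lines if it has to.
    filelist = re.search(r'yunData\.FILEINFO\s*=\s*(\[.*?\]);', page, re.S).group(1)
    return shareid, from_uk, filelist

shareid, from_uk, filelist = extract_params(html)
```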
Once these three parameters are crawled, we can call the transfer method from earlier to save the resource to our own disk.
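The transfer call itself can be sketched like this; note that the /share/transfer endpoint and its query/form field names are assumptions based on what a packet capture of a manual transfer typically shows, not an official API:

```python
def build_transfer_request(shareid, from_uk, filelist, dest_path="/"):
    """Assemble the transfer request from the three crawled parameters.
    Endpoint and field names are assumptions from captured traffic."""
    url = "https://pan.baidu.com/share/transfer"
    params = {"shareid": shareid, "from": from_uk}   # query string
    data = {"filelist": filelist, "path": dest_path}  # form body
    return url, params, data

# Sending it requires the logged-in headers built earlier, e.g. with requests:
# url, params, data = build_transfer_request(shareid, from_uk, filelist)
# resp = requests.post(url, params=params, data=data, headers=headers)
```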