When developing a crawler in PHP, you usually just fetch a page's content directly. Some pages, however, can only be accessed after logging in; forums are the most common example. In that case we need to simulate the login with curl. The general idea is: send a first request to the login page to obtain and save the cookies, then send a second request with the saved cookies to fetch the page content. The code is below.
<?php
// Step 1: post the login credentials and save the returned cookies

// Login URL and cookie file location (both are examples; replace with your own)
$url = 'http://www.pythontab.com/login';
$cookieFile = tempnam(sys_get_temp_dir(), 'cookie');

// Login form fields (the field names are placeholders; use the names from the target form)
$data = array('UserName' => 'Pythontab', 'Password' => 'pythontab');

// curl initialization
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
// set it to a POST request
curl_setopt($ch, CURLOPT_POST, true);
// do not include header information in the output
curl_setopt($ch, CURLOPT_HEADER, 0);
// POST data
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
// file where curl stores the cookies when the handle is closed
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieFile);
// return the response as a string instead of outputting it directly
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// execute the request
$ret = curl_exec($ch);
// close the connection
curl_close($ch);

// Step 2: request the page that requires login, sending the saved cookies
$url = 'http://www.pythontab.com';

// curl initialization
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
// do not include header information in the output
curl_setopt($ch, CURLOPT_HEADER, 0);
// read cookies from the file saved in step 1
// (note: CURLOPT_COOKIEFILE reads cookies, while CURLOPT_COOKIEJAR in step 1 wrote them)
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile);
// return the response as a string instead of outputting it directly
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// execute the request
$ret = curl_exec($ch);
// close the connection
curl_close($ch);

// print the captured content
var_dump($ret);
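As an optional extension of the same two-step flow, the minimal sketch below checks for a curl-level error and then looks for the logged-in username in the returned HTML to confirm that the saved cookies were actually accepted. The URL, cookie file path, and the 'Pythontab' username it searches for are assumptions carried over from the example above, not part of the original code; adjust them for the site you are crawling.

<?php
// Minimal sketch (assumptions: URL, cookie file path, and username are placeholders)
$cookieFile = '/tmp/cookie.txt';   // same file written by CURLOPT_COOKIEJAR in step 1
$url = 'http://www.pythontab.com'; // page that requires login

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieFile);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$html = curl_exec($ch);

if ($html === false) {
    // network or curl-level failure
    echo 'curl error: ' . curl_error($ch) . PHP_EOL;
} elseif (strpos($html, 'Pythontab') !== false) {
    // the username appears in the page, so the login cookies were accepted
    echo 'Login looks successful, page length: ' . strlen($html) . PHP_EOL;
} else {
    // page loaded but the username is missing: cookies expired or login failed
    echo 'Page fetched, but we do not appear to be logged in.' . PHP_EOL;
}
curl_close($ch);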
In this way, we can capture the content of pages that require login. Note that the addresses above are only examples; replace them with the addresses of the pages you actually want to crawl. With this technique you can do a lot of things, but please never do anything bad!