I have been busy building a system recently, so I'm a little sorry that the blog was not updated in a timely manner. On to today's topic: I wanted to capture some search result data from Google, so I started with Python's traditional scraping methods, for example [Python beautifulsoup multi-thread analysis and capturing web pages] and [Python sgmlparser]. It wasn't long before Google automatically blocked my IP address. I tried changing the IP address and simulating a browser, but neither worked for long; Google's anti-crawling defenses are simply too strong, and I was out of ideas. By chance I found a method online: use the APIs and results that Google itself provides, which saves time and effort. Where I had previously written hundreds of lines of code, with the Python Google API it takes about 10 lines to crawl Google search results. It's pretty cool.
The code is as follows:
import urllib2, urllib
import simplejson

searchstr = 'auto'

for x in range(5):
    print "Page: %s" % (x + 1)
    page = x * 4
    # Build the Google AJAX Search API request URL.
    url = ('https://ajax.googleapis.com/ajax/services/search/web'
           '?v=1.0&q=%s&rsz=8&start=%s') % (urllib.quote(searchstr), page)
    try:
        request = urllib2.Request(
            url, None, {'Referer': 'http://www.sina.com'})
        response = urllib2.urlopen(request)
        # Parse the JSON response.
        results = simplejson.load(response)
        infoaaa = results['responseData']['results']
    except Exception, e:
        print e
    else:
        for minfo in infoaaa:
            print minfo['url']
In this way, you can extract the URLs of the search result list for a given search keyword. This API URL accepts many more parameters; if you want to know more, you can refer to the Google AJAX Search API documentation for Python.
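For readers on Python 3, where urllib2 and simplejson no longer exist, the same URL building and JSON parsing can be done with the standard library alone. Below is a minimal sketch, using an illustrative hard-coded payload (not real API output) that mirrors the responseData/results shape shown above, so it runs without a network call:

```python
import json
import urllib.parse

searchstr = 'auto'
page = 0

# Build the same request URL as the Python 2 version above.
url = ('https://ajax.googleapis.com/ajax/services/search/web'
       '?v=1.0&q=%s&rsz=8&start=%s') % (urllib.parse.quote(searchstr), page)

# Illustrative sample payload with the same shape as the API's JSON response.
sample = ('{"responseData": {"results": ['
          '{"url": "http://example.com/a"}, '
          '{"url": "http://example.com/b"}]}}')

# Parse the JSON and walk down to the list of results, as the loop above does.
results = json.loads(sample)['responseData']['results']
for minfo in results:
    print(minfo['url'])
```

In a real script you would replace the sample string with the body returned by `urllib.request.urlopen(url)`; the parsing logic is the same.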