Scraping Google Results with Python

Scraping Google Search Result Pages with Multiple Threads in Python

(1) Scraping Google search links with urllib2 + BeautifulSoup

Recently, a project I am working on needed to process Google search results, and I had previously studied Python tools for handling web pages. In practice I used urllib2 and BeautifulSoup to scrape pages, but when scraping Google search results I found that processing the raw source of a Google results page directly yields a lot of "dirty" links.

Take the results of a search for "titanic james":

In the results screenshot, the links marked in red are the unwanted ones, and the links marked in blue are the ones we actually want to scrape and process.
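
As a concrete illustration of this direct approach, here is a minimal sketch of fetching a result page with urllib2 and dumping every link with BeautifulSoup. The query URL, the User-agent string, and the BeautifulSoup 3 API are my assumptions, not the project's actual code; the point is that the output mixes real result URLs with exactly the kind of navigation, cache, and redirect links marked in red above.

#-*-coding:utf-8-*-
# Rough sketch of the direct-scraping approach (assumptions: BeautifulSoup 3
# and a hand-built query URL; Google's result-page markup changes often).
import urllib2, urllib
from BeautifulSoup import BeautifulSoup

query = urllib.quote('titanic james')
request = urllib2.Request('http://www.google.com/search?q=' + query,
                          None, {'User-agent': 'Mozilla/5.0'})
page = urllib2.urlopen(request).read()
soup = BeautifulSoup(page)
for a in soup.findAll('a', href=True):
  # prints real result URLs mixed with cache, translate, and navigation links
  print a['href']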

These "dirty links" could of course be filtered out with rules, but that would raise the program's complexity. Just as I was glumly writing filtering rules, a classmate reminded me that Google should provide a relevant API, and then it suddenly dawned on me.

(2) Google Web Search API + multithreading

The documentation gives an example of performing a search in Python:

import urllib2
import simplejson

# The request also includes the userip parameter which provides the end
# user's IP address. Doing so will help distinguish this legitimate
# server-side traffic from traffic which doesn't come from an end-user.
url = ('https://ajax.googleapis.com/ajax/services/search/web'
       '?v=1.0&q=Paris%20Hilton&userip=USERS-IP-ADDRESS')

request = urllib2.Request(
    url, None, {'Referer': 'http://example.com'})  # enter the URL of your site here
response = urllib2.urlopen(request)

# Process the JSON string.
results = simplejson.load(response)
# now have some fun with the results...
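
The parsed response is an ordinary Python dict. As a quick sketch of getting at the hits, based on the responseData → results nesting that the implementation in section (3) also relies on (only the 'url' field is assumed here):

for item in results['responseData']['results']:
  # each hit is a dict; 'url' is the field the rest of this article uses
  print item['url']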

In practice you may need to fetch many pages of Google results, so multithreading is used to divide up the scraping work. For a detailed reference on using the Google Web Search API, see here (which introduces the Standard URL Arguments). Also note in particular that the rsz parameter in the URL must be a value of 8 or below; anything greater than 8 causes an error!
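
For example, with rnum_perpage fixed at 8, the request for the second batch of "titanic james" results would be built like this (note that start is a result offset, not a page number):

url = ('https://ajax.googleapis.com/ajax/services/search/web'
       '?v=1.0&q=titanic%20james&rsz=8&start=8')  # rsz greater than 8 is rejected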

(3) Implementation

The implementation still has problems, but it does run; its robustness is poor and it needs further improvement. I hope the experts out there will point out my mistakes (I am new to Python); I would be very grateful.

#-*-coding:utf-8-*-
import urllib2, urllib
import simplejson
import os, time, threading

import common, html_filter   # the author's own helper modules

#input the keywords
keywords = raw_input('Enter the keywords: ')

#define rnum_perpage, pages
rnum_perpage = 8
pages = 8

#thread function: fetch one page of API results, then download each hit
def thread_scratch(url, rnum_perpage, page):
  url_set = []
  try:
    request = urllib2.Request(url, None, {'Referer': 'http://www.sina.com'})
    response = urllib2.urlopen(request)
    # Process the JSON string.
    results = simplejson.load(response)
    info = results['responseData']['results']
  except Exception, e:
    print 'error occured'
    print e
  else:
    for minfo in info:
      url_set.append(minfo['url'])
      print minfo['url']
  #process the links
  i = 0
  for u in url_set:
    try:
      request_url = urllib2.Request(u, None, {'Referer': 'http://www.sina.com'})
      request_url.add_header(
        'User-agent',
        'CSC'
      )
      response_data = urllib2.urlopen(request_url).read()
      #filter the file
      #content_data = html_filter.filter_tags(response_data)
      #write to file
      filenum = i + page
      filename = dir_name + '/related_html_' + str(filenum)
      print '  write start: related_html_' + str(filenum)
      f = open(filename, 'w+', -1)
      f.write(response_data)
      #print content_data
      f.close()
      print '  write down: related_html_' + str(filenum)
    except Exception, e:
      print 'error occured 2'
      print e
    i = i + 1
  return

#create the output folder
dir_name = 'related_html_' + urllib.quote(keywords)
if os.path.exists(dir_name):
  print 'exists file'
  common.delete_dir_or_file(dir_name)
os.makedirs(dir_name)

#scrape the web pages
print 'start to scratch web pages:'
for x in range(pages):
  print "page:%s" % (x + 1)
  page = x * rnum_perpage
  url = ('https://ajax.googleapis.com/ajax/services/search/web'
         '?v=1.0&q=%s&rsz=%s&start=%s') % (urllib.quote(keywords), rnum_perpage, page)
  print url
  t = threading.Thread(target=thread_scratch, args=(url, rnum_perpage, page))
  t.start()

#the main thread waits for the worker threads to finish
main_thread = threading.currentThread()
for t in threading.enumerate():
  if t is main_thread:
    continue
  t.join()
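
One small robustness improvement, offered as a suggestion rather than as part of the original code: urllib2.urlopen has accepted an optional timeout argument (in seconds) since Python 2.6, so routing all fetches through a helper like the hypothetical fetch below keeps a stalled server from hanging a worker thread indefinitely.

# Suggested tweak (not in the original code): bound every network call with a
# deadline; a hung host then raises an exception instead of blocking forever.
import urllib2

def fetch(url, referer='http://www.sina.com', timeout=10):
  request = urllib2.Request(url, None, {'Referer': referer})
  return urllib2.urlopen(request, timeout=timeout).read()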
