Fetching Web Page HTML Text with Python


Python crawler basics: 1. Fetching web page text

Use the urllib2 package to fetch a page's HTML text content from its URL and return it.

#coding:utf-8
import requests, json, time, re, os, sys
import urllib2

#switch the default encoding to utf-8
reload(sys)
sys.setdefaultencoding("utf-8")

def getHtml(url):
    response = urllib2.urlopen(url)
    html = response.read()
    #decode according to the page's encoding if needed
    #html = unicode(html, 'utf-8')
    return html

url = 'https://www.cnblogs.com/'
print getHtml(url)
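The snippet above imports the requests library but never uses it. As a minimal sketch (not the article's own approach, and the helper name get_html_requests is just illustrative), the same fetch written with requests could look roughly like this:

#a minimal requests-based sketch of the same fetch
import requests

def get_html_requests(url):
    #requests decodes the body itself; .text is a unicode string
    return requests.get(url).text

print get_html_requests('https://www.cnblogs.com/')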

Alternatively:

def getHtml(url):
    #instantiate urllib2.Request() with the URL to visit as its argument
    request = urllib2.Request(url)
    #pass the Request object to urlopen(); it is sent to the server and a
    #file-like response object is returned
    response = urllib2.urlopen(request)
    #the file-like object supports the usual file methods, so read() returns
    #the whole response body as a string
    html = response.read()
    #decode according to the page's encoding if needed
    #html = unicode(html, 'utf-8')
    return html

url = 'https://www.cnblogs.com/'
print getHtml(url)
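urlopen() raises an exception when the request fails, so in practice the call is often wrapped in a try/except. A rough sketch using urllib2's own HTTPError and URLError classes (the helper name is illustrative):

#a sketch of the same fetch with basic error handling
import urllib2

def getHtmlSafe(url):
    try:
        request = urllib2.Request(url)
        response = urllib2.urlopen(request)
        return response.read()
    except urllib2.HTTPError as e:
        #the server answered, but with an error status (404, 500, ...)
        print 'HTTP error:', e.code
    except urllib2.URLError as e:
        #the server could not be reached (DNS failure, refused connection, ...)
        print 'URL error:', e.reason
    return None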

Next, add a User-Agent (UA) and a timeout:

def getHtml(url):
    #build the User-Agent header
    ua_header = {"User-Agent": "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"}
    #build the Request from the URL together with the headers, so the request
    #carries an IE 9.0 User-Agent
    request = urllib2.Request(url, headers=ua_header)
    #set a timeout
    response = urllib2.urlopen(request, timeout=60)
    html = response.read()
    return html

url = 'https://www.cnblogs.com/'
print getHtml(url)
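When the timeout expires, the failure surfaces as an exception; depending on where it hits, Python 2 may raise socket.timeout directly or wrap it in a urllib2.URLError. A hedged sketch that catches both (the helper name is illustrative):

#a sketch of handling a timed-out request
import socket
import urllib2

def getHtmlWithTimeout(url, timeout=60):
    try:
        return urllib2.urlopen(urllib2.Request(url), timeout=timeout).read()
    except socket.timeout:
        print 'request timed out after', timeout, 'seconds'
    except urllib2.URLError as e:
        print 'request failed:', e.reason
    return None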

Adding header attributes:

def getHtml(url):
    ua = {"User-Agent": "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"}
    request = urllib2.Request(url, headers=ua)
    #a specific header can also be added or changed with Request.add_header()
    request.add_header("Connection", "keep-alive")
    response = urllib2.urlopen(request)
    html = response.read()
    #check the response code
    print 'Response code:', response.code
    #request headers can be read back with Request.get_header()
    print "Connection:", request.get_header("Connection")
    #or
    print request.get_header(header_name="Connection")
    #print html
    return html
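The response object also carries the server's headers. A sketch of inspecting them through response.info(), which in Python 2 returns a message object supporting getheader() and items() (treat the exact method names as an assumption to verify):

#a sketch of reading the response headers via response.info()
import urllib2

response = urllib2.urlopen(urllib2.Request('https://www.cnblogs.com/'))
headers = response.info()
#single header lookup (method name assumed from Python 2's rfc822.Message)
print 'Content-Type:', headers.getheader('Content-Type')
#iterate over all (name, value) pairs
for name, value in headers.items():
    print name, '=', value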

Adding a random User-Agent:

#coding:utf-8
import requests, json, time, re, os, sys
import urllib2
import random

#switch the default encoding to utf-8
reload(sys)
sys.setdefaultencoding("utf-8")

def getHtml(url):
    #define a User-Agent pool and pick one at random for each request
    ua_list = [
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
        "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
        "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
        "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
    ]
    user_agent = random.choice(ua_list)
    #print user_agent
    request = urllib2.Request(url)
    request.add_header("Connection", "keep-alive")
    request.add_header("User-Agent", user_agent)
    response = urllib2.urlopen(request, data=None, timeout=60)
    html = response.read()
    #print 'Response code:', response.code
    #print 'URL:', response.geturl()
    #print 'Info:', response.info()
    return html
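All of the examples above are Python 2 code. In Python 3, urllib2 was split into urllib.request and urllib.error, so a rough Python 3 equivalent of the random-User-Agent fetch (utf-8 decoding assumed) would look along these lines:

#a rough Python 3 sketch of the random-User-Agent fetch
import random
import urllib.request

ua_list = [
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
]

def get_html(url, timeout=60):
    request = urllib.request.Request(url, headers={
        "User-Agent": random.choice(ua_list),
        "Connection": "keep-alive",
    })
    response = urllib.request.urlopen(request, timeout=timeout)
    #read() returns bytes in Python 3; utf-8 is assumed here
    return response.read().decode("utf-8")

print(get_html('https://www.cnblogs.com/'))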

 
