A small Python script for scraping URLs


The script below scrapes the domains of relatively high-authority Chinese websites from the chinaz.com ranking pages.

# encoding=utf-8
import re

import requests
from bs4 import BeautifulSoup


class GetUrl(object):
    def __init__(self, num):
        self.total = num  # number of ranking pages to fetch
        self.myheader = {
            'Host': 'top.chinaz.com',
            'Connection': 'keep-alive',
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 '
                          '(KHTML, like Gecko) Chrome/46.0.2490.80 Safari/537.36',
            'Accept': '*/*',
            'Referer': 'http://www.chinaz.com/',
            'Accept-Encoding': 'gzip, deflate, sdch',
            'Accept-Language': 'zh-CN,zh;q=0.8',
        }  # request headers

    def beginer(self):
        print('get start')
        page = 2
        url_list = []
        # Was hard-coded to 1680 (the full ranking); use the constructor
        # argument instead so GetUrl(44) actually limits the crawl.
        while page < self.total:
            url = 'http://top.chinaz.com/all/index_' + str(page) + '.html'
            r = requests.get(url, headers=self.myheader)
            soup = BeautifulSoup(r.text, 'html.parser')
            tags = soup.select('.col-gray')  # renamed from `list` to avoid shadowing the built-in
            sites = re.findall(r'<span.*?>(.*?)</span>', str(tags))
            del sites[0]  # the first match is not a site entry; drop it
            url_list.extend(sites)
            page += 1
        self.writeQQ(text=url_list, file_dir='site.text', mode='w')

    def writeQQ(self, text, file_dir, mode):
        with open(file_dir, mode) as f:
            for site in text:
                f.write(site)
                f.write('\n')


spider = GetUrl(44)
spider.beginer()
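The core extraction step is the `re.findall` over the stringified `.col-gray` tags. A minimal sketch of that step, run against a hypothetical HTML fragment standing in for the ranking-page markup (the real structure of top.chinaz.com may differ):

```python
import re

# Hypothetical fragment mimicking the ranking-list markup the script targets.
html = (
    '<p class="col-gray"><span>baidu.com</span></p>'
    '<p class="col-gray"><span>qq.com</span></p>'
)

# Same pattern the script uses: capture the text inside each <span>.
sites = re.findall(r'<span.*?>(.*?)</span>', html)
print(sites)  # → ['baidu.com', 'qq.com']
```

A regex over serialized tags is fragile; calling `.get_text()` on each selected tag in BeautifulSoup would be more robust if the page structure changes.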

 
