Query Baidu SEO information with Python

A simple Python function for querying a keyword's Baidu ranking. Features:

1. Random User-Agent (UA) on each request.

2. Simple to use: just call getRank(keyword, domain) (a short usage sketch follows this list).

3. Encoding conversion, so character encoding should no longer be an issue.

4. Rich results: not only the rank, but also each result's title, URL, and cache (snapshot) date, which fits SEO needs.
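
For reference, a single lookup looks like the minimal sketch below. It assumes the functions from the script further down are defined in the same file (or that the batch loop at the bottom of the script has been removed before importing); the keyword is only an example.

    # minimal usage sketch; '豆瓣電影' is just an example keyword
    print getRank('豆瓣電影', 'www.douban.com')
    # returns something like u'keyword, 第N名, title, cache date, final URL' encoded as gb2312,
    # or '>100' when the domain is not found in the top 100 results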


Drawback:

Single-threaded, so it is slow (a thread-pool variant is sketched after the script below).

#coding=utf-8
import requests
import BeautifulSoup
import re
import random


def decodeAnyWord(w):
    # decode a byte string to unicode, trying utf-8 first, then gb2312
    try:
        w.decode('utf-8')
    except:
        w = w.decode('gb2312')
    else:
        w = w.decode('utf-8')
    return w


def createURL(checkWord):
    # create the Baidu SERP URL for the search word (top 100 results)
    checkWord = checkWord.strip()
    checkWord = checkWord.replace(' ', '+').replace('\n', '')
    baiduURL = 'http://www.baidu.com/s?wd=%s&rn=100' % checkWord
    return baiduURL


def getContent(baiduURL):
    # fetch the SERP with a random User-Agent and a random proxy
    uaList = ['Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+1.1.4322;+TencentTraveler)',
              'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729)',
              'Mozilla/5.0+(Windows+NT+5.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.89+Safari/537.1',
              'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)',
              'Mozilla/5.0+(Windows+NT+6.1;+rv:11.0)+Gecko/20100101+Firefox/11.0',
              'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+SV1)',
              'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+GTB7.1;+.NET+CLR+2.0.50727)',
              'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+KB974489)']
    headers = {'User-Agent': random.choice(uaList)}
    ipList = ['202.43.188.13:8080',     # free proxies from the original post; replace with ones that work for you
              '80.243.185.168:1177',
              '218.108.85.59:81']
    proxies = {'http': 'http://%s' % random.choice(ipList)}
    r = requests.get(baiduURL, headers=headers, proxies=proxies)
    return r.content


def getLastURL(rawurl):
    # follow redirects and return the final URL
    r = requests.get(rawurl)
    return r.url


def getAtext(atext):
    # get the text between <a ...> and </a>, stripping Baidu's <em> keyword highlights
    pat = re.compile(r'<a.*?>(.*?)</a>')
    match = pat.findall(atext)
    pureText = match[0].replace('<em>', '').replace('</em>', '')
    return pureText


def getCacheDate(t):
    # get the snapshot (cache) date from the result's green URL line
    pat = re.compile(r'.*?(\d{4}-\d{1,2}-\d{1,2})  ')
    match = pat.findall(t)
    cacheDate = match[0]
    return cacheDate


def getRank(checkWord, domain):
    # main routine: return 'keyword, rank, title, cache date, final URL'
    checkWord = checkWord.replace('\n', '')
    checkWord = decodeAnyWord(checkWord)
    baiduURL = createURL(checkWord)
    cont = getContent(baiduURL)
    soup = BeautifulSoup.BeautifulSoup(cont)
    results = soup.findAll('table', {'class': 'result'})   # all results on this page
    for result in results:
        checkData = unicode(result.find('span', {'class': 'g'}))
        if re.compile(r'^[^/]*%s.*?' % domain).match(checkData):    # TODO: tighten this regex
            nowRank = result['id']                  # the rank, when the domain matches
            resLink = result.find('h3').a
            resURL = resLink['href']
            domainURL = getLastURL(resURL)          # the target URL after redirects
            resTitle = getAtext(unicode(resLink))   # the title of the target page
            rescache = result.find('span', {'class': 'g'})
            cacheDate = getCacheDate(unicode(rescache))     # the cache date of the target page
            res = u'%s, 第%s名, %s, %s, %s' % (checkWord, nowRank, resTitle, cacheDate, domainURL)
            return res.encode('gb2312')
    return '>100'   # the domain is not in the top 100 results


domain = 'www.douban.com'   # set the domain whose ranking you want to check

f = open('r.txt')           # one keyword per line
for w in f.readlines():
    print getRank(w, domain)
f.close()
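
To work around the single-threaded drawback, one option is to push the keywords through a small thread pool. The sketch below is only one way to do it: it assumes getRank and domain from the script above are defined in the same file, and the pool size and r.txt file name are just examples.

    # thread-pool sketch; assumes getRank and domain from the script above
    from multiprocessing.dummy import Pool as ThreadPool

    def rankWorker(w):
        try:
            return getRank(w, domain)
        except Exception, e:        # don't let one bad keyword kill the whole batch
            return 'error: %s' % e

    f = open('r.txt')
    words = [w for w in f.readlines() if w.strip()]
    f.close()

    pool = ThreadPool(5)            # 5 concurrent Baidu queries
    for line in pool.map(rankWorker, words):
        print line
    pool.close()
    pool.join()

Keep the pool small: Baidu throttles aggressive querying, which is also why the original script rotates User-Agents and proxies.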