Python crawler (9): crawl stock data and save it to a local file

Source: Internet
Author: User

Technical route: combined use of the requests library, the bs4 library, and the re library

Objective: obtain the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges

Output: results saved to a local file

Candidate data sources include Sina Stock and Baidu Stock. Inspecting the page source shows that Sina's stock data is generated by JavaScript, so it cannot be parsed with the approach above.

In general, a site can be crawled with requests + bs4 + re when the information exists statically in the HTML page (i.e., it is not generated by JavaScript) and the site's crawling rules do not forbid it.
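Before committing to a source, it is worth checking whether the data actually lives in the static HTML. A minimal sketch of such a check (the `looks_static` helper and its `marker` parameter are illustrative, not part of the original code): fetch the raw page without executing any JavaScript and search it for a value you can see on the rendered page in a browser.

```python
import requests

def looks_static(url, marker, timeout=30):
    """Return True if `marker` (text visible on the rendered page,
    e.g. a stock name) already appears in the raw, un-rendered HTML.
    If it does not, the data is probably injected by JavaScript and
    requests + bs4 alone will not see it."""
    try:
        r = requests.get(url, timeout=timeout)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return marker in r.text
    except requests.RequestException:
        # Network failure or bad status: treat as "not crawlable"
        return False
```

Sina's pages fail exactly this kind of check, which is why the tutorial switches to other sources below.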

So the data sources finally chosen are: East Money + Baidu Stock

East Money (stock list page): http://quote.eastmoney.com/stocklist.html

Baidu Stock (per-stock detail pages): https://gupiao.baidu.com/stock/



Program Structure Design:

1. Get the list of stock codes from East Money

2. For each code in the list, fetch that stock's information from Baidu Stock

3. Save the results to a file
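Step 1 hinges on one regular expression: every link on the East Money list page that points at a stock detail page embeds a code like sh600000 or sz000001. A small offline sketch with made-up hrefs (the sample URLs are illustrative):

```python
import re

# Hypothetical href values mimicking links on the East Money list page
hrefs = [
    "http://quote.eastmoney.com/sh600000.html",
    "http://quote.eastmoney.com/sz000001.html",
    "http://quote.eastmoney.com/center/list.html",  # not a stock page
]

codes = []
for href in hrefs:
    # "s", then "h" or "z", then exactly six digits
    match = re.findall(r"[s][hz]\d{6}", href)
    if match:
        codes.append(match[0])

print(codes)  # ['sh600000', 'sz000001']
```

Links that carry no stock code simply produce no match and are skipped, which is why the main program can wrap the lookup in a try/except and continue.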

Encapsulate each step as a function and write the code:

import requests
from bs4 import BeautifulSoup
import traceback
import re

def getHTMLText(url, code='utf-8'):
    try:
        # The timeout value was lost in the original text; 30 seconds is a typical choice
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = code
        return r.text
    except:
        return ""

def getStockList(lst, stockURL):
    # The East Money list page is GB2312-encoded
    html = getHTMLText(stockURL, 'GB2312')
    soup = BeautifulSoup(html, 'html.parser')
    for a in soup.find_all('a'):
        try:
            href = a.attrs['href']
            # Stock codes look like sh600000 / sz000001
            lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
        except:
            continue

def getStockInfo(lst, stockURL, fpath):
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'Stock Name': name.text.split()[0]})
            # The detail page lists fields as <dt> (key) / <dd> (value) pairs
            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                infoDict[keyList[i].text] = valueList[i].text
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
        except:
            traceback.print_exc()
            continue

def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'C:/Users/kfc/Desktop/BaiduStockInfo.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

main()
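The parsing at the heart of getStockInfo, reading the bets-name heading and pairing each <dt> key with its <dd> value, can be exercised offline against a hand-written HTML fragment. The markup below only mimics Baidu Stock's structure; the real page may differ:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment imitating Baidu Stock's "stock-bets" block
html = """
<div class="stock-bets">
  <h2 class="bets-name">PingAnBank (000001)</h2>
  <dl><dt>Open</dt><dd>10.50</dd><dt>Volume</dt><dd>1.2M</dd></dl>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
stockInfo = soup.find("div", attrs={"class": "stock-bets"})

# First token of the heading is the stock name; the rest is the code
infoDict = {"Stock Name": stockInfo.find(attrs={"class": "bets-name"}).text.split()[0]}

# zip() pairs keys with values, the same pairing the index loop does above
for dt, dd in zip(stockInfo.find_all("dt"), stockInfo.find_all("dd")):
    infoDict[dt.text] = dd.text

print(infoDict)  # {'Stock Name': 'PingAnBank', 'Open': '10.50', 'Volume': '1.2M'}
```

Testing the parser against a saved or hand-written fragment like this avoids hammering the live site while debugging.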

