Technical route: combined use of the Requests library + bs4 (BeautifulSoup) library + re library
Objective: obtain the names and trading information of all stocks listed on the Shanghai and Shenzhen stock exchanges
Output: save the results to a local file
Candidate data sources: Sina Stock and Baidu Stock. Inspecting the page source shows that Sina's stock data is generated by JavaScript, so it cannot be parsed in the way described above.
In general, a site can be crawled with Requests + bs4 + re only when the information exists statically in the HTML page, is not generated by JS code, and the robots protocol does not forbid crawling.
The data sources finally chosen are therefore: Eastmoney (stock list) + Baidu Stock (stock details)
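The static-vs-JavaScript distinction can be illustrated with two toy page fragments (both hypothetical, not taken from the real sites): a plain text search over the raw HTML only finds values that are written directly into the markup, which is all a Requests-based crawler ever sees.

```python
import re

# Two hypothetical page fragments: in the first the price is static HTML,
# in the second it only appears after a JavaScript function runs.
static_html = '<div class="price">12.34</div>'
js_html = '<div class="price"></div><script>renderPrice(12.34)</script>'

def price_in_markup(html):
    # A Requests + re crawler only sees text already present in the tags.
    m = re.search(r'<div class="price">([\d.]+)</div>', html)
    return m.group(1) if m else None

print(price_in_markup(static_html))  # "12.34" - recoverable from raw HTML
print(price_in_markup(js_html))      # None - value exists only after JS runs
```

This is why viewing the page source (not the rendered DOM) is the right check before choosing a data source.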
Eastmoney: http://quote.eastmoney.com/stocklist.html
Baidu Stock: https://gupiao.baidu.com/stock/
Program Structure Design:
1. Get the stock list from Eastmoney
2. For each stock in the list, fetch its details from Baidu Stock
3. Save the results to a file
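Step 1 relies on the fact that each stock's Eastmoney page is linked with its code (sh/sz plus six digits) embedded in the href. A minimal sketch of that extraction, using hypothetical hrefs in the style of the real list page:

```python
import re

# Hypothetical hrefs in the style of the Eastmoney stock-list page;
# the real page also contains many unrelated links.
hrefs = [
    "http://quote.eastmoney.com/sh600000.html",
    "http://quote.eastmoney.com/sz002460.html",
    "http://quote.eastmoney.com/stock_help.html",
]

codes = []
for href in hrefs:
    # 's' + ('h' or 'z') + six digits, e.g. sh600000 / sz002460
    found = re.findall(r"[s][hz]\d{6}", href)
    if found:
        codes.append(found[0])

print(codes)  # ['sh600000', 'sz002460']
```

Links that carry no stock code simply produce no match and are skipped, which is exactly what the try/except in the full program below does.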
Encapsulate the steps into functions and write the code:

```python
import requests
from bs4 import BeautifulSoup
import traceback
import re

def getHTMLText(url, code='utf-8'):
    """Fetch a page and return its text, or '' on any failure."""
    try:
        # The timeout value was garbled in the source; 30 s is assumed here.
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = code
        return r.text
    except:
        return ""

def getStockList(lst, stockURL):
    """Collect stock codes such as 'sh600000' from the Eastmoney list page."""
    html = getHTMLText(stockURL, 'GB2312')
    soup = BeautifulSoup(html, 'html.parser')
    for a in soup.find_all('a'):
        try:
            href = a.attrs['href']
            lst.append(re.findall(r"[s][hz]\d{6}", href)[0])
        except:
            continue  # link without a stock code

def getStockInfo(lst, stockURL, fpath):
    """Fetch each stock's detail page from Baidu Stock and append it to fpath."""
    for stock in lst:
        url = stockURL + stock + ".html"
        html = getHTMLText(url)
        try:
            if html == "":
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            stockInfo = soup.find('div', attrs={'class': 'stock-bets'})
            name = stockInfo.find_all(attrs={'class': 'bets-name'})[0]
            infoDict.update({'Stock name': name.text.split()[0]})
            # Keys live in <dt> tags, values in the matching <dd> tags.
            keyList = stockInfo.find_all('dt')
            valueList = stockInfo.find_all('dd')
            for i in range(len(keyList)):
                infoDict[keyList[i].text] = valueList[i].text
            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
        except:
            traceback.print_exc()
            continue

def main():
    stock_list_url = 'http://quote.eastmoney.com/stocklist.html'
    stock_info_url = 'https://gupiao.baidu.com/stock/'
    output_file = 'c://users//kfc//desktop//baidustockinfo.txt'
    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

main()
```