[Python] First crawler test -- capturing the Shanghai and Shenzhen stock exchange trade public disclosure (dragon-tiger) lists

Source: Internet
Author: User
Tags: python, list, timedelta


[Python] Capturing Shanghai and Shenzhen stock exchange trade disclosure data

Runs on Python 3.5.0.

The files folder is not created automatically: create a files folder in the same directory as the .py file by hand before running.
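If you would rather not create the folder by hand, a minimal alternative (assuming the script is run from its own directory) is to create it at startup:

```python
import os

# Create the files/ output directory next to the script if it is missing;
# exist_ok=True makes this a no-op when the folder already exists.
os.makedirs('files', exist_ok=True)
```

`os.makedirs` with `exist_ok=True` is available from Python 3.2 onward, so it works on the 3.5.0 interpreter used here.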

 

 

```python
# coding=utf-8
import gzip
import http.cookiejar
import urllib.request
import urllib.parse
import json
import os
import time
import datetime


def getOpener(head):
    # Build an opener that handles cookies and carries our headers.
    cj = http.cookiejar.CookieJar()
    pro = urllib.request.HTTPCookieProcessor(cj)
    opener = urllib.request.build_opener(pro)
    header = []
    for key, value in head.items():
        header.append((key, value))
    opener.addheaders = header
    return opener


def ungzip(data):
    try:
        # The response may or may not be gzipped; try to decompress it.
        print('extracting...')
        data = gzip.decompress(data)
        print('decompressed!')
    except OSError:
        print('not compressed, no need to decompress')
    return data


def writeFile(fname, data):
    filename = r'files/' + fname + '.txt'
    if os.path.exists(filename):
        print('file ' + filename + ' already exists, skipping')
    else:
        print('file ' + filename + ' does not exist, creating')
        f = open(filename, 'w')
        f.write(data)
        f.close()
    print('file: ' + fname + ' is processed.')


'''
Start date for capturing data. If the date file does not exist, read from
10 days ago; otherwise read from the date stored in the file up to today.
'''
header = {
    'Connection': 'Keep-Alive',
    'Accept': '*/*',
    'Accept-Language': 'zh-CN,zh;q=0.8',  # q value restored; garbled in the source
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/40.0.2214.111 Safari/537.36',
    'Accept-Encoding': 'gzip, deflate',
    'Host': '',
    'Referer': ''
}

shUrl = 'http://query.sse.com.cn/infodisplay/showTradePublicFile.do?dateTx='  # + e.g. 2015-09-28
# The three SZSE base URLs (main board, SME board, ChiNext) were lost in the
# source text; fill them in before running. The date plus '.txt' is appended below.
szUrl = ['', '', '']

startFileName = r'startDay.txt'
endDay = datetime.datetime.now()
if os.path.exists(startFileName):
    print('date configuration file exists, start reading')
    f = open(startFileName, 'rt')
    s = f.readline()
    f.close()
    if s != '':
        print('will read from date: ' + s)
        timeArray = time.strptime(s, "%Y%m%d")
        timeStamp = int(time.mktime(timeArray))
        fromDay = datetime.datetime.utcfromtimestamp(timeStamp)
    else:
        print('date configuration file is empty, will read from 10 days ago')
        fromDay = endDay - datetime.timedelta(days=10)
else:
    print('date configuration file does not exist, will read from 10 days ago')
    fromDay = endDay - datetime.timedelta(days=10)

endDay = endDay + datetime.timedelta(days=1)
while fromDay.strftime("%Y%m%d") != endDay.strftime("%Y%m%d"):
    print(fromDay.strftime("%Y%m%d"))
    '''
    Loop over the dates and capture the SSE and SZSE lists;
    if the content is empty, no file is written.
    '''
    # Capture the SSE data.
    url = shUrl + fromDay.strftime("%Y-%m-%d")
    print('reading Shanghai dragon-tiger list\n' + url)
    header['Host'] = 'query.sse.com.cn'
    header['Referer'] = 'http://www.sse.com.cn/disclosure/diclosure/public/'
    try:
        opener = getOpener(header)
        op = opener.open(url)
        data = op.read().decode()
        jsonData = json.loads(data)
        outData = ''
        if jsonData['fileContents'] != '':
            for info in jsonData['fileContents']:
                outData = outData + info + '\n'
            writeFile(fromDay.strftime("%Y-%m-%d") + '_SSE', outData)
    except Exception:
        print(fromDay.strftime("%Y-%m-%d") + ' skip')

    # Capture the SZSE data: main board, SME board and ChiNext.
    i = 1
    for url in szUrl:
        if i == 1:
            name = 'shenzhen'   # main board
        elif i == 2:
            name = 'sme'        # small and medium enterprise board
        else:
            name = 'gem'        # ChiNext (growth enterprise market)
        url = url + fromDay.strftime("%y%m%d") + '.txt'
        print('reading ' + name + ' dragon-tiger list\n' + url)
        header['Host'] = 'www.szse.cn'
        header['Referer'] = 'http://www.szse.cn'
        try:
            opener = getOpener(header)
            op = opener.open(url)
            data = op.read()
            data = ungzip(data)
            data = data.decode('gbk')
            writeFile(fromDay.strftime("%Y-%m-%d") + '_' + name, data)
        except Exception:
            print(fromDay.strftime("%Y-%m-%d") + ' skip')
        i = i + 1
    fromDay = fromDay + datetime.timedelta(days=1)

# The last processed date becomes the new start date.
print('set latest date')
fromDay = fromDay - datetime.timedelta(days=1)
f = open(startFileName, 'w')
f.write(fromDay.strftime("%Y%m%d"))
f.close()
print('read finished')
```
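The day-by-day date loop driving the script can be sketched in isolation. The dates below are illustrative; the real script derives them from startDay.txt and the current date.

```python
import datetime

# Iterate day by day from a start date up to and including an end date,
# mirroring the script's while loop: endDay is pushed one day past the
# last date wanted, and the loop stops when the formatted dates match.
fromDay = datetime.datetime(2015, 9, 28)
endDay = datetime.datetime(2015, 10, 1) + datetime.timedelta(days=1)

dates = []
while fromDay.strftime("%Y%m%d") != endDay.strftime("%Y%m%d"):
    dates.append(fromDay.strftime("%Y-%m-%d"))
    fromDay = fromDay + datetime.timedelta(days=1)

print(dates)  # 2015-09-28 through 2015-10-01, one entry per day
```

Comparing formatted strings rather than datetime objects sidesteps the time-of-day component that `datetime.datetime.now()` carries.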

 
