This article shares the Python code for a simple and practical crawler that collects novels. The code is still a bit rough around the edges, so feel free to modify it and let's improve it together.
Development tool: Python 3.4
Operating system: Windows 8
Main function: given a novel's index page, crawl the chapter directory, save each chapter to a local text file, and log the URLs of already-crawled pages to a local file so they are not downloaded twice.
Crawled website: http://www.cishuge.com/
Novel name: Liling night travel
Code source: original (self-written)
import urllib.request
import http.cookiejar
import socket
import time
import re

timeout = 20
socket.setdefaulttimeout(timeout)
sleep_download_time = 10
time.sleep(sleep_download_time)  # pause once before starting, to go easy on the server

def makeMyOpener(head={
        'Connection': 'Keep-Alive',
        'Accept': 'text/html, application/xhtml+xml, */*',
        'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko'}):
    # build an opener that keeps cookies and sends browser-like headers
    cj = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
    header = []
    for key, value in head.items():
        elem = (key, value)
        header.append(elem)
    opener.addheaders = header
    return opener

def saveFile(save_path, txts):
    # write one chapter to disk, one matched paragraph per line
    f_obj = open(save_path, 'w+')
    for item in txts:
        f_obj.write(item + '\n')
    f_obj.close()

# fetch the chapter list page
code_list = 'http://www.cishuge.com/read/0/771/'
login = makeMyOpener()
uop = login.open(code_list, timeout=1000)
data = uop.read().decode('gbk', 'ignore')  # the site is GBK-encoded

# NOTE: the exact HTML in this pattern was garbled in the original post;
# this reconstruction matches a typical <li><a href="...">title</a></li> chapter list
pattern = re.compile('<li><a href="(.*?)".*?>(.*?)</a></li>', re.S)
items = re.findall(pattern, data)

# read the log of already-downloaded chapter URLs
url_path = 'url.conf'  # log file name assumed; the original value was lost
open(url_path, 'a').close()  # make sure the log exists before reading it
url_r = open(url_path, 'r')
url_arr = url_r.readlines(100000)
url_r.close()
print(len(url_arr))

url_file = open(url_path, 'a')
print('Get downloaded website')
for tmp in items:
    save_path = tmp[1].replace(' ', '') + '.txt'
    url = code_list + tmp[0]
    if url + '\n' in url_arr:
        continue  # already downloaded, skip it
    print('write log: ' + url + '\n')
    url_file.write(url + '\n')
    opene = makeMyOpener()
    op1 = opene.open(url, timeout=1000)
    data = op1.read().decode('gbk', 'ignore')
    opene.close()
    # NOTE: reconstructed pattern; the original matched indented paragraphs ending in <br />
    pattern = re.compile('&nbsp;&nbsp;&nbsp;&nbsp;(.*?)<br />', re.S)
    txts = re.findall(pattern, data)
    saveFile(save_path, txts)

url_file.close()
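One easy improvement, for anyone who wants to tinker: the script sleeps only once at startup, so the download loop hits the server as fast as it can, and a single network error aborts the whole run. Below is a minimal sketch of a per-chapter fetch with a polite delay and a simple retry; the helper name fetch_chapter and the retry parameters are my own choices, not part of the original code.

import socket
import time
import urllib.error

def fetch_chapter(opener, url, retries=3, delay=5):
    # hypothetical helper: fetch one chapter politely, retrying on failure
    for attempt in range(retries):
        try:
            resp = opener.open(url, timeout=20)
            data = resp.read().decode('gbk', 'ignore')
            time.sleep(delay)  # wait between requests instead of only once at startup
            return data
        except (urllib.error.URLError, socket.timeout):
            time.sleep(delay * (attempt + 1))  # back off, then try again
    return None  # give up on this chapter and let the caller skip it

In the main loop, data = fetch_chapter(opene, url) would replace the direct open/read pair, with a continue whenever it returns None, so one bad chapter no longer kills the whole download.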
Although the code is still a bit flawed, I'd like to share it with you so we can improve it together.