Most of the links on our site come from live operational configuration, and operations staff occasionally misconfigure a link, producing access errors; in some cases program bugs cause access errors as well. So I wrote a monitoring script in Python that crawls a configured set of links, visits each one, and records the links that fail. Since there are hundreds of links, the script is multi-threaded, and because HTTP access is IO-bound, Python's multithreading handles this kind of concurrency well.
First, index.py.
A pool of worker threads is managed around a queue. The pages to verify come from a configuration file, and the script crawls all links found on each configured page. Since many URLs on sub-pages duplicate links already on the home page, those duplicates can optionally be removed.
The configuration is a YAML file: the is_checkindex option controls whether to check for duplicates against the home page; if true, URLs that also appear on the home page are removed.
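The post doesn't show the config file itself; below is a minimal sketch of what it might look like and how it could be loaded with PyYAML. Apart from is_checkindex, every field name here is an assumption, and PyYAML is an assumed dependency.

# config.yml (hypothetical layout; only is_checkindex is named in the post):
#
# is_checkindex: true
# check_urls:
#   - https://bj.jiehun.com.cn/hunshasheying/storelists

import yaml  # PyYAML, an assumed dependency

with open('config.yml', encoding='utf-8') as f:
    config = yaml.safe_load(f)

is_checkindex = config['is_checkindex']  # dedupe against the home page?
check_urls = config['check_urls']        # pages whose links get verified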
Links are crawled with requests and extracted with regular expressions.
The last step is reporting the error links. A filter is applied here too: only URLs belonging to our own site are recorded, so advertising links to partner sites and the like are not reported.
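The filtering logic lives in the hreftest helpers and isn't shown in the post; here is a minimal sketch of how such a same-site check could work, assuming jiehun.com.cn is the site's domain (is_site_url is a hypothetical name):

from urllib.parse import urlparse

SITE_DOMAIN = 'jiehun.com.cn'  # assumption: everything under this domain is "our site"

def is_site_url(url):
    """True for same-site URLs (including relative hrefs); False for
    ad/partner links that point at other domains."""
    netloc = urlparse(url).netloc
    # Relative hrefs have an empty netloc and belong to the site itself
    return netloc == '' or netloc.endswith(SITE_DOMAIN)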
# encoding=utf-8
from queue import Queue, Empty
import threading
import time

from Tool.hreftool import hreftest
from Tool.Log.logTool import logtool

index_url = 'https://bj.jiehun.com.cn'


class HunIndex:
    def __init__(self, url, is_checkindex, index_items=None):
        # index_items defaults to an empty list so the two-argument call below works
        self.url_queue = Queue()
        self.headers = {
            'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; '
                          'rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6',
        }
        self.thread_stop = False
        # All (href, title) tuples found on the page under test
        self.items = hreftest.get_hostsit_href(url, self.headers)
        if is_checkindex:
            # Links already present on the home page, used for deduplication
            self.index_items = index_items or []
        self.start_time = time.time()
        self.stop_time = None
        self.error_list = []
        self.url = url
        self.is_checkindex = is_checkindex
        self.execute_items = []

    def get_index_urlitem(self):
        """Producer: fill the queue with the URLs that need checking."""
        logtool.info('start getting page URLs, current page: {url}'.format(url=self.url))
        if self.is_checkindex:
            # Skip links that already appear on the home page
            self.execute_items.extend(
                [item for item in self.items if item not in self.index_items])
        else:
            self.execute_items.extend(self.items)
        for item in list(set(self.execute_items)):
            if hreftest.check_url(item[0]):
                # Normalize relative hrefs against the site root
                item1 = (hreftest.change_url(item[0], index_url), item[1])
                self.url_queue.put(item1, block=True, timeout=5)
        logtool.info('page URLs collected, {sum_url} URLs in total'.format(
            sum_url=self.url_queue.qsize()))

    def _parse_url(self, item):
        try:
            logtool.info('check URL: %s, title: %s' % (str(item[0]), str(item[1])))
            response = hreftest.get(item[0], self.headers)
        except Exception as e:
            logtool.error('request failed, url=%s, error_message:%s' % (str(item[0]), e))
            print('error-%s, message-%s' % (item[0], e))
            error = list(item)
            error.append('error_message=%s' % e)
            self.error_list.append(error)
        else:
            if response.status_code != 200:
                logtool.error('request failed, url=%s, error_code:%s'
                              % (str(item[0]), response.status_code))
                print('error-%s error_code:%s' % (item[0], response.status_code))
                error = list(item)
                error.append('error_message=%s' % response.status_code)
                self.error_list.append(error)
            else:
                logtool.info('request succeeded, test passed, url=%s, title=%s'
                             % (str(item[0]), str(item[1])))
                print('success-%s' % item[0])

    def parse_url(self):
        """Consumer: take URLs off the queue and check them until it drains."""
        while not self.thread_stop:
            try:
                item = self.url_queue.get(timeout=5)
            except Empty:
                self.thread_stop = True
                break
            self._parse_url(item)
            self.url_queue.task_done()

    def run(self):
        thread_list = []
        t_url = threading.Thread(target=self.get_index_urlitem)
        thread_list.append(t_url)
        for i in range(35):  # 35 worker threads
            t_parse = threading.Thread(target=self.parse_url)
            thread_list.append(t_parse)
        for t in thread_list:
            t.daemon = True
            t.start()
        for q in [self.url_queue]:
            q.join()
        self.stop_time = time.time()


if __name__ == '__main__':
    page_url = 'https://bj.jiehun.com.cn/hunshasheying/storelists?source=BJIndexFL_1_1&ordersrc=BJIndexFL_1_1'
    hun = HunIndex(page_url, True)
    hun.run()
    sum_time = int(hun.stop_time - hun.start_time)
    print(sum_time)
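For comparison, the same IO-bound fan-out can be written more compactly with the standard library's concurrent.futures thread pool. This is only a sketch, not the script above: check_one and check_all are hypothetical names, and requests is called directly instead of going through the hreftest wrapper.

from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

def check_one(item, headers):
    """Return (url, title, error) where error is None on success."""
    url, title = item
    try:
        response = requests.get(url, headers=headers, timeout=10)
    except Exception as e:
        return (url, title, 'error_message=%s' % e)
    if response.status_code != 200:
        return (url, title, 'error_message=%s' % response.status_code)
    return (url, title, None)

def check_all(items, headers, workers=35):
    errors = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(check_one, item, headers) for item in items]
        for future in as_completed(futures):
            url, title, error = future.result()
            print(('error-%s %s' % (url, error)) if error else ('success-%s' % url))
            if error:
                errors.append((url, title, error))
    return errors

The executor replaces the hand-rolled queue, daemon threads, and stop flag; 35 workers matches the thread count in the original script.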
Crawl Links:
import re

import requests


def get_hostsit_href(cls, url, headers):
    """Get all <a> href tags on the URL's page.

    :param url: the URL to crawl
    :param headers: request headers
    :return: all matching (URL, title) pairs
    """
    try:
        response = requests.request('GET', url=url, headers=headers)
    except Exception as e:
        print('error-{url} message-{e}'.format(url=url, e=e))
    else:
        # Earlier attempts, kept commented out:
        # pattern = re.compile('<a.*href="(.*?)".{0}=?"?.*"?>(.*?)</a>')
        # pattern = re.compile(r'<a\b[^>]+\bhref="([^"]*)"[^>]*>([\s\S]*?)</a>')
        pattern = re.compile('href="(.*?)"{1}.{0,10}?=?"?.*"?>(.+)?</a>')
        items = re.findall(pattern, response.text)
        return items
The regex is fairly crude and doesn't match everything, but it already catches more than 90% of the links, which fully meets the requirement. I'm not very familiar with regular expressions yet and still need to study them.
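If the regex ever becomes a limiting factor, one more robust direction (not what the post uses) is the standard library's HTML parser, which copes with attribute order and nested tags that a regex trips over. A minimal sketch:

from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect (href, anchor text) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self.items = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self._href = dict(attrs).get('href')
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == 'a' and self._href is not None:
            self.items.append((self._href, ''.join(self._text).strip()))
            self._href = None

# usage: parser = HrefCollector(); parser.feed(response.text); parser.items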
Results:
Because link checking depends on network speed and other factors even with multithreading, the 300-plus links on the home page take about 20 seconds, or up to 40 when conditions are bad, which is fine as a daily test. Checking all the main channel pages of the master site finishes in about 10 minutes. Monitoring every city sub-site, however, is less practical: crawling first-level links and then the second level took over half an hour and produced more than 40,000 links, which is too many. Moreover, some links differ only by an ID (for example, product URLs that vary only in the ID), so among those 40,000-plus there are probably many near-duplicate links with little practical value to check; I'll look into how to optimize this later.
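One possible optimization for the IDs-only-differ case is to normalize each URL into a pattern (numeric runs replaced by a placeholder, query string dropped) and check a single representative per pattern. A sketch, assuming IDs are the only numeric parts that vary:

import re
from urllib.parse import urlsplit

def url_pattern(url):
    """Map URLs that differ only by a numeric ID to the same key."""
    parts = urlsplit(url)
    # Drop the query string and replace digit runs, e.g.
    # /store/12345 and /store/67890 both become /store/{id}
    path = re.sub(r'\d+', '{id}', parts.path)
    return parts.netloc + path

def dedupe_by_pattern(urls):
    seen = set()
    unique = []
    for url in urls:
        key = url_pattern(url)
        if key not in seen:
            seen.add(key)
            unique.append(url)
    return unique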
Verifying link correctness with a multi-threaded, thread-pool link crawl