Recently my boss asked me to pull some data from a website. Doing it by hand was too slow, so I looked up a bit of Python online and handled it with a script.
import os
import re

import xlrd
import requests
import xlwt
from bs4 import BeautifulSoup
from xlutils.copy import copy


def read_excel(path):
    # Open the workbook
    workbook = xlrd.open_workbook(path)
    # Get sheet content by sheet index or name (sheet index starts from 0)
    sheet1 = workbook.sheet_by_index(0)

    i = 0
    for row_values in sheet1._cell_values:
        # The first column holds the URL; strip stray quotes
        url = row_values[0].replace("'", "")
        print(url, i)
        response = get_responsehtml(url)
        soup = get_beautifulsoup(response)
        pattern1 = '^https://ews-aln-core.cisco.com/applmgmt/view-appl/+[0-9]*$'
        pattern2 = '^https://ews-aln-core.cisco.com/applmgmt/view-endpoint/+[0-9]*$'
        pattern3 = '^https://ews-aln-core.cisco.com/applmgmt/view-appl/by-name/'
        if pattern_match(url, pattern1) or pattern_match(url, pattern3):
            priority = soup.find("table", class_="main_table_layout") \
                           .find("tr", class_="centered sub_section_header") \
                           .find_next("tr", align="center").find_all("td")
        elif pattern_match(url, pattern2):
            priority = soup.find("table", class_="main_table_layout") \
                           .find("tr", class_="centered") \
                           .find_next("tr", align="center").find_all("td")
        else:
            priority = None
            print("no pattern")
        try:
            priority_number = 'P' + get_last_td(priority)
            write_excel(path, i, 1, priority_number)
        except Exception:
            print("not found " + url)
        i = i + 1


def write_excel(path, row, col, value):
    oldwb = xlrd.open_workbook(path)
    wb = copy(oldwb)          # copy() needs the workbook object, not the filename
    ws = wb.get_sheet(0)
    ws.write(row, col, value)
    wb.save(path)


def get_last_td(result):
    # Return the text of the last <td> cell
    return result[-1].contents[0]


def get_beautifulsoup(content):
    return BeautifulSoup(content, 'html.parser', from_encoding='utf-8',
                         exclude_encodings='utf-8')


def get_responsehtml(url):
    headers = {
        'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/56.0.2924.87 Safari/537.36'}
    return requests.get(url, auth=(username, password), headers=headers).content


def pattern_match(url, pattern, flags=0):
    return re.match(pattern, url, flags)


if __name__ == '__main__':
    username = '*'
    password = '*'
    path = r'*'
    read_excel(path)
There are quite a few pitfalls in this:
1. At first I used the .xlsx format; after writing, the saved file could not be opened. Switching the Excel format to .xls fixed it.
2. The request header came from an online search; with it the request is not treated as a crawler, which otherwise fails with: http.client.RemoteDisconnected: Remote end closed connection without response. (A sketch of this follows after the list.)
3. The argument to copy() must be the workbook object, not the .xls filename, otherwise you get: AttributeError: 'str' object has no attribute 'datemode'. (Also sketched after the list.)
4. I found a good blog post on this: writing data into an existing Excel .xls file in Python, i.e. open the existing Excel file and then write the new data into it.
5. At first I tried saving to a new file under a new path, which did not work: every pass of the for loop copies the workbook from the source Excel again, so in the end only one row actually gets written. (See the third sketch after the list.)
6. Regular expression syntax: Regular Expressions - Syntax, and Python regular expressions.
7. Beautiful Soup usage in Python, with very complete documentation: Beautiful Soup 4.2.0 Documentation.
8. A demo: Python 3 crawler (part 7): scraping a novel with Beautiful Soup.
9. I had never written Python before; this first attempt took half a day, and there is still a lot that could be improved.
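For point 2, a minimal sketch of the request with a browser-style User-Agent header; the URL and credentials are placeholders for the internal site used in the script above:

import requests

# Pretend to be a normal desktop browser; without a User-Agent header some
# servers drop the connection with http.client.RemoteDisconnected.
headers = {
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/56.0.2924.87 Safari/537.36'}

# Placeholder URL and credentials; the real script reads the URL from Excel
# and the credentials from __main__.
response = requests.get('https://ews-aln-core.cisco.com/applmgmt/view-appl/1',
                        auth=('username', 'password'), headers=headers)
print(response.status_code)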
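For point 3, a small sketch of the wrong and the right way to call xlutils.copy.copy; 'data.xls' is a placeholder path, and per point 1 it has to be the old .xls format:

import xlrd
from xlutils.copy import copy

path = 'data.xls'  # placeholder; must be .xls, not .xlsx, for xlrd/xlwt/xlutils

# Wrong: passing the filename string raises
#   AttributeError: 'str' object has no attribute 'datemode'
# wb = copy(path)

# Right: open the workbook with xlrd first, then copy that object
oldwb = xlrd.open_workbook(path)
wb = copy(oldwb)        # writable copy
ws = wb.get_sheet(0)
ws.write(0, 1, 'P1')    # row 0, column 1
wb.save(path)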
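And for point 5, a sketch of how to keep all the written rows: copy the source workbook once, write every row into that single copy inside the loop, and save once at the end, instead of re-copying the source file on every call the way write_excel above does. The function name and the example values are made up for illustration:

import xlrd
from xlutils.copy import copy

def write_column(path, values):
    # Copy the source workbook once, outside the loop
    oldwb = xlrd.open_workbook(path)
    wb = copy(oldwb)
    ws = wb.get_sheet(0)
    # Write every row into the same copy
    for row, value in enumerate(values):
        ws.write(row, 1, value)
    # Save once at the end so all rows survive
    wb.save(path)

write_column('data.xls', ['P1', 'P2', 'P3'])  # placeholder path and values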