1. A younger cousin asked me to compile a list of institutions, so I decided to write a small crawler to scrape it, which meant looking into how to do that with Python. Everything here is at a basic level with no special techniques; most of it came from Baidu searches, and whenever I got stuck I just asked Baidu again.
2. The basic process is:
Crawl the page with requests, then process the crawled page with BeautifulSoup4,
preprocess the required data and save it to a .txt file on the desktop,
then split the strings in the .txt file,
and finally save the data to an Excel spreadsheet.
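The intermediate text format these steps rely on can be sketched in plain Python. This is a simplified sketch with made-up sample values, not real scraped data: each school's four fields are joined by tabs, and schools are separated by newlines, so the Excel step can split the string back apart.

```python
# Sketch of the intermediate text format passed between steps (made-up sample values).
# Each school contributes tab-separated fields; a newline ends the record.
rows = [
    ["School A", "0001", "Dept X", "Undergraduate"],
    ["School B", "0002", "Dept Y", "Undergraduate"],
]

# Build the school.txt content: fields joined by "\t", records by "\n".
txt = "\n".join("\t".join(fields) for fields in rows)

# Splitting reverses the encoding, which is what the Excel step relies on.
parsed = [line.split("\t") for line in txt.split("\n")]
assert parsed == rows
```

The real script appends a trailing "\t" after every field, so its records carry an extra empty cell, but the round-trip idea is the same.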
3. Preparation: you need to download and install the requests library, the BeautifulSoup4 library, and the xlsxwriter library; installation instructions are easy to find online.
4. The source code of the page to crawl:
5. Save the crawled data to a .txt file:
```python
from bs4 import BeautifulSoup
import requests

def get_soup():
    r = requests.get("http://www.eol.cn/html/g/gxmd/bj/", timeout=30)
    # check the status of the network connection; a connection error raises an exception
    r.raise_for_status()
    # print(r.status_code)
    r.encoding = r.apparent_encoding
    soup = BeautifulSoup(r.text, features="html.parser")
    return soup

# Delete the first few elements (header cells we don't need)
def del_previous_ele(full_list):
    for i in range(8):
        full_list.remove(full_list[0])
    return full_list

# Get the school name, school code, supervising department and school level
def select_school_ele(full_list):
    school_list = []
    for i in range(len(full_list)):
        # the cells repeat in groups of 7; keep only the columns we need
        if i % 7 in (1, 2, 3, 5):
            school_list.append(full_list[i].string + "\t")
        # one school's record is complete here, so start a new line
        if i % 7 == 5:
            school_list.append("\n")
    return school_list

# Write the data to a file
def createFile(txt):
    file = open('C:\\users\\xxxxxxxxxxxxxxx\\desktop\\school.txt', 'w')
    file.writelines(txt)
    file.close()
    print("Write success")

if __name__ == "__main__":
    soup = get_soup()
    full_list = del_previous_ele(soup.find_all(align="center"))
    school_list = select_school_ele(full_list)
    createFile(school_list)
```
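The column-picking logic in select_school_ele can be exercised without any network access. This toy example (with made-up cell values) flattens two rows of 7 cells into one list, the way find_all returns them, and keeps columns 1, 2, 3 and 5 of each row:

```python
# Toy demonstration of the i % 7 column selection (no network needed).
# Two rows of 7 cells each, flattened into one list, with made-up values.
cells = [f"r{r}c{c}" for r in range(2) for c in range(7)]

school_list = []
for i in range(len(cells)):
    # keep cells 1, 2, 3 and 5 of every group of 7
    if i % 7 in (1, 2, 3, 5):
        school_list.append(cells[i] + "\t")
    # end of one group: start a new line
    if i % 7 == 5:
        school_list.append("\n")

print("".join(school_list))
# two lines, each with the four selected cells of its row
```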
6. Process the school.txt file and save it to an Excel file:
```python
import xlsxwriter

def get_file(path, mode_):
    file = open(path, mode_)
    content = file.read()
    file.close()
    return content

def write_excel(content):
    workbook = xlsxwriter.Workbook("c:\\users\\xxxxxxxxxxxxxxxxx\\desktop\\school.xlsx")
    worksheet = workbook.add_worksheet("school")
    # each line of the file holds one school's group of attributes
    list_5_item = content.split("\n")
    # print(len(list_5_item))
    for i in range(len(list_5_item)):
        specific_school = list_5_item[i].split("\t")
        print(len(specific_school))
        for j in range(len(specific_school)):
            worksheet.write(i, j, specific_school[j])
    # the workbook must be closed, otherwise the .xlsx file is never written out
    workbook.close()

if __name__ == "__main__":
    content = get_file('c:\\users\\xxxxxxxxxxxxxxxxxxxxxxxxxxxx\\desktop\\new.txt', 'r')
    write_excel(content)
```
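The row/column split that write_excel performs can be checked on its own, with no xlsxwriter installed. This sketch uses a short made-up sample string and a plain dict in place of the worksheet: every "\n"-separated record becomes a row and every "\t"-separated field a cell.

```python
# Stand-alone check of the row/column split used by write_excel (made-up sample).
content = "School A\t0001\tDept X\tUndergraduate\nSchool B\t0002\tDept Y\tUndergraduate"

cells = {}  # (row, col) -> value, mimicking worksheet.write(i, j, value)
for i, record in enumerate(content.split("\n")):
    for j, field in enumerate(record.split("\t")):
        cells[(i, j)] = field

print(cells[(1, 0)])  # -> School B
```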
7. The finished results:
The text file:
The school.xlsx file:
8. That's all. Since I've only just started working with crawlers, some parts are inevitably written poorly; please go easy on me.
A "learn-as-you-go" Python crawler