1. Learning experience from this section:
After months of trying to teach myself Django, I chose this crawler course. After studying the first chapter, I once again realized the power of Python. I had always struggled with regular expressions, tweaking a pattern several times just to get a match, which seriously hurt my efficiency. In this section, however, I learned a new skill: the BeautifulSoup4 module. It is so comfortable to use, almost like writing jQuery, and it greatly improves matching efficiency.
Wu's lectures are very easy to understand, but if you only listen, you forget. When I sat down to write the code I had just learned, I still didn't know where to start. Working from my notes, though, I managed to crawl a few sites by analogy. After writing it out again, I could throw the notes away. Haha, that is my little bit of experience.
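To illustrate that point about matching efficiency (the HTML snippet and regex pattern below are my own invented example, not from the course): extracting links with a regular expression is brittle and needs repeated tweaking, while the BeautifulSoup version reads almost like a jQuery selector.

```python
import re
from bs4 import BeautifulSoup  # pip3 install beautifulsoup4

# A made-up HTML fragment for illustration
html = '<ul><li><a href="/news/1">First</a></li><li><a href="/news/2">Second</a></li></ul>'

# Regex approach: works here, but breaks as soon as attribute order,
# quoting style, or whitespace in the real page changes
links_re = re.findall(r'<a href="([^"]+)">([^<]+)</a>', html)

# BeautifulSoup approach: tolerant of formatting, reads like a selector
soup = BeautifulSoup(html, 'html.parser')
links_bs = [(a.get('href'), a.text) for a in soup.find_all('a')]

print(links_re)  # [('/news/1', 'First'), ('/news/2', 'Second')]
print(links_bs)  # [('/news/1', 'First'), ('/news/2', 'Second')]
```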
2. Summary of the knowledge points in this section:
Crawler introduction
Install the requests module: pip3 install requests
Install the bs4 module: pip3 install beautifulsoup4
Import the requests module: import requests
Import the bs4 module: from bs4 import BeautifulSoup
import requests
from bs4 import BeautifulSoup

# Fetch the page to crawl with a GET request
ret = requests.get(url='https://www.autohome.com.cn/news/')
# Set the encoding of the response (apparent_encoding detects the encoding of the content)
ret.encoding = ret.apparent_encoding
# Parse the fetched content with the BeautifulSoup module; 'html.parser' is the parser to use
soup = BeautifulSoup(ret.text, 'html.parser')

# find returns the first matching tag; find_all returns a list of all matching tags
div = soup.find(name='div', id='auto-channel-lazyload-article')
# div = soup.find(name='div', attrs={'id': 'auto-channel-lazyload-article', 'class': 'btn'})
# div.text         # text content of the tag
# div.attrs        # dict of the tag's attributes
# div.get('href')  # value of a single attribute

# Get all the li tag objects with find_all
li_list = div.find_all(name='li')
# li_list = div.find_all(name='li', class_='li')
# li_list = div.find_all(name='li', attrs={'href': 'xxx.xxx.com'})

# Iterate over the li tags and print the text of the h3 tag, the href attribute
# of the a tag, and the text of the p tag inside each li
for i in li_list:
    h3 = i.find(name='h3')
    a = i.find(name='a')
    try:
        # h3 may be None, so printing it could raise an error; an if check
        # would also work, but try/except is used here to swallow the error
        print(h3.text, a.get('href'))
        print(i.find('P').text)
    except:
        pass
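Since the live autohome page layout can change, here is a self-contained sketch of the same find / find_all / .text / .get pattern applied to an inline HTML snippet (the snippet and its contents are invented for illustration, but the div id matches the one used above):

```python
from bs4 import BeautifulSoup

# Invented HTML fragment mimicking the structure of the news page
html = '''
<div id="auto-channel-lazyload-article">
  <ul>
    <li><h3>Title one</h3><a href="//example.com/1">link</a><p>Summary one</p></li>
    <li><a href="//example.com/2">link only, no h3</a></li>
  </ul>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
div = soup.find(name='div', id='auto-channel-lazyload-article')  # first match only
items = div.find_all(name='li')                                  # list of all matches

results = []
for li in items:
    h3 = li.find(name='h3')
    a = li.find(name='a')
    try:
        # h3 (or p) may be None; try/except skips incomplete items,
        # the same trick used in the course code above
        results.append((h3.text, a.get('href'), li.find('p').text))
    except AttributeError:
        pass

print(results)  # [('Title one', '//example.com/1', 'Summary one')]
```

The second li has no h3, so h3 is None and h3.text raises AttributeError; the except clause skips it, which is exactly why the course code wraps the print calls in try/except.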
That covers the getting-started knowledge.
-----------End --------------
Luffy-Python Crawler Training, Chapter 1