Python Crawler (iii)

Crawl through the entire domain

The Six Degrees of Separation theory says that any two strangers are at most six steps apart; in other words, through no more than five intermediaries you can reach any stranger. Through Wikipedia links we can try to connect from one person's page to the page of anyone they want to be connected to.

1. Get all the links on a page

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://en.wikipedia.org/wiki/Kevin_Bacon")
bsObj = BeautifulSoup(html, "html.parser")
for link in bsObj.find_all("a"):
    if 'href' in link.attrs:
        print(link.attrs['href'])

2. Get only the links related to the current person on the Wikipedia page

1. Every page contains sidebar, footer, and header links, as well as links to category pages and talk pages.

2. The links from the current page to other article pages share some common features (see the short sketch after this list):

i. They are contained in a div whose id is bodyContent

ii. The URL does not contain a colon and begins with /wiki/
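To make condition ii concrete, here is a quick sketch (not from the original post; the example hrefs are made up for illustration) that runs the same regular expression against a few links and keeps only those starting with /wiki/ and containing no colon:

import re

# the same href filter used in the code below
wiki_link = re.compile("^(/wiki/)((?!:).)*$")

sample_hrefs = [
    "/wiki/Kevin_Bacon",            # article page            -> kept
    "/wiki/Category:1958_births",   # category page (colon)   -> rejected
    "/wiki/Talk:Kevin_Bacon",       # talk page (colon)       -> rejected
    "#cite_note-1",                 # in-page anchor          -> rejected
    "//en.wikipedia.org/wiki/Foo",  # protocol-relative link  -> rejected
]
for href in sample_hrefs:
    print(href, "kept" if wiki_link.match(href) else "rejected")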

from urllib.request import urlopen
from bs4 import BeautifulSoup
import re

html = urlopen("http://en.wikipedia.org/wiki/Kevin_Bacon")
bsObj = BeautifulSoup(html, "html.parser")
for link in bsObj.find("div", {"id": "bodyContent"}).find_all(
        "a", href=re.compile("^(/wiki/)((?!:).)*$")):
    if 'href' in link.attrs:
        print(link.attrs['href'])

3. Deep Search

Simply collecting the links on a single Wikipedia page is not very useful by itself; the interesting part starts when you follow a link from the current page and repeat the process in a loop.

1. Create a simple function that returns all article links found on the current page.

2. Create a main function that starts the search from one page, then follows a randomly chosen link and continues searching from that new page, until no new links are found.

from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
from random import choice
import re

basename = "http://en.wikipedia.org"

def getLinks(pagename):
    # return all article links on the page, or None if the page cannot be fetched or parsed
    url = basename + pagename
    try:
        with urlopen(url) as html:
            bsObj = BeautifulSoup(html, "html.parser")
            links = bsObj.find("div", {"id": "bodyContent"}).find_all(
                "a", href=re.compile("^(/wiki/)((?!:).)*$"))
            return [link.attrs['href'] for link in links if 'href' in link.attrs]
    except (HTTPError, AttributeError) as e:
        return None

def main():
    # start from one page, then keep following a random link until no new links are found
    links = getLinks("/wiki/Kevin_Bacon")
    while links:  # also stops if getLinks returned None
        nextpage = choice(links)
        print(nextpage)
        links = getLinks(nextpage)

main()

4. Crawl through the domain

1. To crawl an entire site, first start from the site's main page

2. Save the pages you have already visited, to avoid requesting the same address repeatedly

from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import re

basename = "http://en.wikipedia.org"
visitedpages = set()  # use a set to store the page addresses that have already been visited

def visitelink(pagename):
    url = basename + pagename
    global visitedpages
    try:
        with urlopen(url) as html:
            bsObj = BeautifulSoup(html, "html.parser")
            links = bsObj.find("div", {"id": "bodyContent"}).find_all(
                "a", href=re.compile("^(/wiki/)((?!:).)*$"))
            for eachlink in links:
                if 'href' in eachlink.attrs:
                    if eachlink.attrs['href'] not in visitedpages:
                        nextpage = eachlink.attrs['href']
                        print(nextpage)
                        visitedpages.add(nextpage)
                        visitelink(nextpage)  # recurse into the new, unvisited page
    except (HTTPError, AttributeError) as e:
        return None

visitelink("")  # start from the site's main page

5. Collect useful information from the website

1. Nothing special here: while visiting each page, print its h1 heading and some of the text content

2. A problem that appears when printing:

UnicodeEncodeError: 'gbk' codec can't encode character u'\xa9' in position 24051: illegal multibyte sequence

Workaround: call source_code.encode('GB18030') on the text before printing it

Explanation: GB18030 is a superset of GBK, so it can encode characters that GBK cannot (a short demonstration follows).
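A minimal sketch of the workaround (the sample string is made up; '\xa9' is the copyright sign mentioned in the error above):

# '\xa9' (the copyright sign) is one of the characters GBK cannot encode
text = "Copyright \xa9 Wikipedia"

try:
    text.encode('GBK')
except UnicodeEncodeError as e:
    print(e)  # same error class as the one shown above

# GB18030 is a superset of GBK, so the same text encodes without an error
print(text.encode('GB18030'))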

from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import re

basename = "http://en.wikipedia.org"
visitedpages = set()  # use a set to store the page addresses that have already been visited

def visitelink(pagename):
    url = basename + pagename
    global visitedpages
    try:
        with urlopen(url) as html:
            bsObj = BeautifulSoup(html, "html.parser")
            try:
                # print the page title and the first paragraph of the article body
                print(bsObj.h1.get_text())
                print(bsObj.find("div", {"id": "mw-content-text"})
                      .find("p").get_text().encode('GB18030'))
            except AttributeError as e:
                print("AttributeError")
            links = bsObj.find("div", {"id": "bodyContent"}).find_all(
                "a", href=re.compile("^(/wiki/)((?!:).)*$"))
            for eachlink in links:
                if 'href' in eachlink.attrs:
                    if eachlink.attrs['href'] not in visitedpages:
                        nextpage = eachlink.attrs['href']
                        print(nextpage)
                        visitedpages.add(nextpage)
                        visitelink(nextpage)
    except (HTTPError, AttributeError) as e:
        return None

visitelink("")
