Python crawler: crawling the Chinese version of the Python tutorial and saving it as a Word file

Source: Internet
Author: User
Tags: xpath

I was reading the Chinese version of the Python tutorial and found that it is only available as a web version. Since I have recently been learning web crawling, I wanted to crawl it and keep a local copy.

The first step is to look at the content of the web page.

After viewing the page source, you can use BeautifulSoup to extract the title and content of the document and save them as a .doc file.

You need to import the module with `from bs4 import BeautifulSoup`.

The specific code is as follows:

```python
# print the title and content of the page at the given URL
import requests
from bs4 import BeautifulSoup

def introduce(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    title = soup.select('h1')[0].text
    content = '\n'.join([p.text.strip() for p in soup.select('.section')])
    # print(title)
    # print(content)
```

The next step is to iterate over the table of contents with a for loop to collect the link each entry points to. The links obtained are relative, so we prepend the site's base URL to form valid absolute URLs and store them in the list `address`. This time I used XPath to extract the directory addresses, so the module is imported with `from lxml import etree`.

```python
# return the addresses of the table-of-contents entries
def get_url(selector):
    sites = selector.xpath('//div[@class="toctree-wrapper compound"]/ul/li')
    address = []
    for site in sites:
        directory = ''.join(site.xpath('a/text()'))
        new_url = site.xpath('a/@href')
        address.append('http://www.pythondoc.com/pythontutorial3/' + ''.join(new_url))
    return address
```
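As an aside, plain string concatenation works here because the hrefs are simple relative paths; the standard library's `urllib.parse.urljoin` handles the general case (absolute paths, `../`, fragments). A minimal sketch, where the example hrefs are hypothetical entries like those in the tutorial's table of contents:

```python
from urllib.parse import urljoin

base = 'http://www.pythondoc.com/pythontutorial3/index.html'

# hypothetical relative hrefs as they might appear in the table of contents
hrefs = ['appetite.html', 'interpreter.html#id1']

full_urls = [urljoin(base, h) for h in hrefs]
print(full_urls)
# → ['http://www.pythondoc.com/pythontutorial3/appetite.html',
#    'http://www.pythondoc.com/pythontutorial3/interpreter.html#id1']
```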

Then, in the main function, call get_url(), traverse all the URLs it returns, and call the introduce() function on each one to output the entire text content.

```python
def main():
    url = 'http://www.pythondoc.com/pythontutorial3/index.html#'
    html = requests.get(url)
    html.encoding = 'utf-8'
    selector = etree.HTML(html.text)
    introduce(url)
    url_list = get_url(selector)
    for url in url_list:
        introduce(url)

if __name__ == '__main__':
    main()
```

The final step is to write the output to a .doc file. The os module is imported, and the file-writing code is placed inside the introduce() function.

```python
import os  # place this at the top of the script

with open('python.doc', 'a+', encoding='utf-8') as f:
    f.write(content)
```
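Two caveats worth knowing: the resulting file is plain text with a .doc extension (Word will open it, but it is not a true Word document), and mode `'a+'` appends, so running the script twice duplicates the content. A small self-contained sketch of the append behavior; the file name here is illustrative only:

```python
import os

# append mode accumulates content across runs
for _ in range(2):
    with open('demo_append.doc', 'a+', encoding='utf-8') as f:
        f.write('chapter\n')

with open('demo_append.doc', encoding='utf-8') as f:
    print(len(f.readlines()))  # two lines after two "runs"

# delete the old file before a fresh crawl to avoid duplicates
os.remove('demo_append.doc')
```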

At this point, the content of the Chinese version of the Python tutorial has been successfully written to a local file, which is very handy for my frequent network outages! It can even be read on a phone, haha.

bs4 can be installed directly from the command line with `pip install bs4`.

Installing lxml on Windows often produces errors. It is recommended to download the .whl file of lxml matching your version from a website hosting Windows Python extension packages, and then install it locally with `pip install ***********`.

Attention:

`***********` stands for the full name of the downloaded installation file.

When running the install command, you must first switch to the directory containing the downloaded file, or the command will fail.
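When picking which .whl to download, the file name must match your interpreter version and architecture (for example cp38 and win_amd64). A small stdlib sketch to check both; this helper is my own addition, not part of the original post:

```python
import struct
import sys

# the cpXY tag comes from the interpreter version, e.g. Python 3.8 -> cp38
print('cp{}{}'.format(sys.version_info.major, sys.version_info.minor))

# pointer size distinguishes 32-bit (win32) from 64-bit (win_amd64) builds
print(struct.calcsize('P') * 8)
```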
