Write crawler programs in Python

Source: Internet
Author: User


We can use Python to implement a simple crawler that downloads the pages we want to local files. The following describes how to build one.

Motivation

Late one night, I suddenly wanted to download some ebooks to fill up my Kindle, and realized how shallow my Python knowledge still was. I hadn't even learned about "decorators" or "multithreading".

I thought of Liao Xuefeng's Python tutorial, which is classic and famous. I just wanted to find a mobi or pdf download, but I couldn't find one anywhere! An incomplete copy on CSDN even cheated me out of a download credit!

Annoyed, I decided to write a program to crawl Liao Xuefeng's tutorial directly, and then convert the html into an e-book.

Process

The process was quite interesting: using my superficial Python knowledge to write a Python program that crawls a Python tutorial, in order to learn Python. A little exciting......

Sure enough, Python is very convenient; about 50 lines are enough. Here is the code:


# coding: utf-8
import urllib

domain = 'http://www.liaoxuefeng.com'  # Liao Xuefeng's domain name
path = 'C:\\Users\\cyhhao2013\\Desktop\\temp\\'  # path where the html files are saved

# an html header file
input = open(r'C:\Users\cyhhao2013\Desktop\0.html', 'r')
head = input.read()
input.close()

# open the index page of the python tutorial
f = urllib.urlopen("http://www.liaoxuefeng.com/wiki/001374738125095c955c1e6d8bb493182103fac9270762a000")
home = f.read()
f.close()

# remove all newlines and spaces (this makes it easy to extract the urls)
geturl = home.replace("\n", "")
geturl = geturl.replace(" ", "")

# get the strings that contain the urls
list = geturl.split(r'em;"><ahref="')[1:]

# obsessive-compulsive: the first page must be added too, to make it complete
list.insert(0, '/wiki/001374738125095c955c1e6d8bb493182103fac9270762a000">')

# traverse the url list
for li in list:
    url = li.split(r'">')[0]
    url = domain + url  # piece the full url together
    print url
    f = urllib.urlopen(url)
    html = f.read()
    f.close()

    # get the title to use as the file name
    title = html.split("<title>")[1]
    title = title.split("- Liao Xuefeng's official website</title>")[0]

    # decode first, otherwise appending it to the path would be a tragedy
    title = title.decode('utf-8').replace("/", "")

    # cut out the body text
    html = html.split(r'<!-- block main -->')[1]
    # the end marker of this split was truncated in the source;
    # a closing comment matching the one above is assumed here
    html = html.split(r'<!-- block main end -->')[0]
    html = html.replace(r'src="', 'src="' + domain)

    # add the head and tail to form a complete html page
    # (the original closing string was truncated; "</body></html>" is assumed)
    html = head + html + "</body></html>"

    # write the file out
    output = open(path + "%d" % list.index(li) + title + '.html', 'w')
    output.write(html)
    output.close()
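The string-splitting trick above is fragile: it depends on the exact markup of the page. If you were redoing the link extraction today, the Python 3 standard library can do it more robustly with a real HTML parser. A minimal sketch, assuming a hypothetical "/wiki/" prefix filter and sample markup standing in for the tutorial's index page (neither is from the original article):

# A sketch: collect chapter links with html.parser instead of string
# splitting. Standard library only; the sample markup and the "/wiki/"
# filter below are illustrative assumptions, not the real site content.
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect href values of <a> tags that point at wiki pages."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("/wiki/"):
                self.links.append(href)


# Illustrative snippet standing in for the tutorial's index page.
sample = (
    '<ul>'
    '<li><a href="/wiki/001">Chapter 1</a></li>'
    '<li><a href="/about">About</a></li>'
    '<li><a href="/wiki/002">Chapter 2</a></li>'
    '</ul>'
)

parser = LinkCollector()
parser.feed(sample)
print(parser.links)  # every /wiki/ link, in document order

Unlike the replace-and-split approach, this keeps working even if the page adds attributes or whitespace around the anchor tags.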

It took only a little while. I love Python!

That's all for this article. I hope you like it.
