Python web page crawling and parsing examples

Source: Internet
Author: User


This article describes how to capture and parse web pages with Python, using a Q&A search page (so.com) and the Baidu results page as examples. It is shared here for your reference.

The main function code is as follows:

#!/usr/bin/python
# coding=utf-8
import sys
import re
import urllib2
from urllib import urlencode
from urllib import quote
import time

maxline = 2000
# pattern for a Q&A ("wenda") result link on the so.com result page
wenda = re.compile("href=\"bytes\?src=(.+?)\"")
# pattern for the "More related questions" block on the Baidu result page
baidu = re.compile("<a href=\"http://www.baidu.com/link\?url=.+\".*?>More related questions.*?</a>")
f1 = open("baidupage.txt", "w")
f2 = open("wendapage.txt", "w")
for line in sys.stdin:
    if maxline == 0:
        break
    query = line.strip()
    time.sleep(1)
    # fetch the so.com result page for the query
    recall_url = "http://www.so.com/s?&q=" + query
    response = urllib2.urlopen(recall_url)
    html = response.read()
    f1.write(html)
    m = wenda.search(html)
    if m:
        if m.group(1) == "110":
            print query + "\twenda\t0"
        else:
            print query + "\twenda\t1"
    else:
        print query + "\twenda\t0"
    # fetch the Baidu result page for the same query
    recall_url = "http://www.baidu.com/s?wd=" + query + "&ie=UTF-8"
    response = urllib2.urlopen(recall_url)
    html = response.read()
    f2.write(html)
    m = baidu.search(html)
    if m:
        print query + "\tbaidu\t1"
    else:
        print query + "\tbaidu\t0"
    maxline = maxline - 1
f1.close()
f2.close()
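
A side note on the script above: it imports quote and urlencode but then concatenates the raw query straight into the URL, which breaks for queries containing spaces or non-ASCII characters. Below is a minimal sketch of how the request URL could be built with those imported helpers; the query value is just an example, not taken from the article.

# sketch: URL-encode the query before building the request URL
import urllib
import urllib2

query = "python crawler"  # example query; the script above reads queries from stdin
recall_url = "http://www.baidu.com/s?wd=" + urllib.quote(query) + "&ie=UTF-8"
# urlencode builds the whole query string from a dict and is equivalent here:
# recall_url = "http://www.baidu.com/s?" + urllib.urlencode({"wd": query, "ie": "UTF-8"})
html = urllib2.urlopen(recall_url).read()
print len(html)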

I hope this article will help you learn Python programming.


Has anyone used Python's re module to capture web pages? Could you give an example?

Here is a very simple page-capturing script I wrote. It collects all the link addresses found on a given URL and the title of each linked page.

=========== geturls.py ==============================
# coding: utf-8
import urllib
import urlparse
import re
import socket
import threading

# regular expressions for links and page titles
urlre = re.compile(r"href=[\"']?([^>\"']+)")
titlere = re.compile(r"<title>(.*?)</title>", re.I)

# set the socket timeout to 10 seconds
timeout = 10
socket.setdefaulttimeout(timeout)

# maximum number of threads
max = 10
# number of currently running threads
current = 0

def gettitle(url):
    global current
    try:
        content = urllib.urlopen(url).read()
    except:
        current -= 1
        return
    if titlere.search(content):
        title = titlere.search(content).group(1)
        try:
            title = title.decode('gbk').encode('utf-8')
        except:
            title = title
    else:
        title = "No title"
    print "%s: %s" % (url, title)
    current -= 1
    return

def geturls(url):
    global current, max
    ts = []
    content = urllib.urlopen(url)
    # use a set to deduplicate links
    result = set()
    for eachline in content:
        if urlre.findall(eachline):
            temp = urlre.findall(eachline)
            for x in temp:
                # if it is a site-relative link, join it with the base url
                if not x.startswith("http:"):
                    x = urlparse.urljoin(url, x)
                # do not record js and css files
                if not x.endswith(".js") and not x.endswith(".css"):
                    result.add(x)
    threads = []
    for url in result:
        t ......
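
The source cuts the script off right where it starts a thread for each collected link. Purely as an illustration (this is not the author's missing code), the thread-per-URL pattern it was most likely heading toward looks like the sketch below; fetch_title and the urls list are stand-ins for the gettitle function and the result set above:

# illustrative sketch of the thread-per-url pattern (not the original code)
import threading
import urllib

def fetch_title(url):
    # stand-in worker; the script above extracts <title> here instead
    print url, len(urllib.urlopen(url).read())

urls = ["http://www.example.com", "http://www.example.org"]  # stand-in for the result set
threads = []
for u in urls:
    t = threading.Thread(target=fetch_title, args=(u,))
    t.start()
    threads.append(t)
# wait for all workers to finish
for t in threads:
    t.join()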

How can I use Python to capture web pages and perform some submission operations?

The following program is an example of web page capturing. The MyOpener class simulates a browser client by picking a random User-Agent string, so the website is less likely to treat you as a robot.
The MyFunc function fetches the specified URL and extracts the href links; retrieving images works in much the same way. Other operations should not be difficult, and examples are easy to find online; a sketch of a POST submission follows after the code below.

import re
from urllib import FancyURLopener
from random import choice

user_agents = [
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
    'Opera/9.25 (Windows NT 5.1; U; en)',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
    'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
    'Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/1.2.9'
]

class MyOpener(FancyURLopener, object):
    version = choice(user_agents)

def MyFunc(url):
    myopener = MyOpener()
    s = myopener.open(url).read()
    ss = s.replace("\n", "")
    urls = re.findall(r"<.*?href=.*?</a>", ss, re.I)  # find the href links
    for i in urls:
        pass  # do something with each link
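
Since the question also asks about submission operations, here is a minimal sketch of a form submission (POST) with urllib2, reusing the random User-Agent idea from above. The target URL and the form field names are made-up placeholders, not something from the article:

# sketch of a POST form submission; the URL and field names are hypothetical
import urllib
import urllib2
from random import choice

user_agents = ['Mozilla/5.0 (Windows NT 5.1)']  # or reuse the longer list above

post_data = urllib.urlencode({"username": "guest", "comment": "hello"})
request = urllib2.Request("http://www.example.com/submit", post_data)
request.add_header("User-Agent", choice(user_agents))
response = urllib2.urlopen(request)  # passing data makes this a POST request
print response.read()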
