Since the Python version installed on my machine is 2.7, the code below is written for that version as well.

Crawl all information on a web page
We use the urllib2 package to crawl web page information. First, let's introduce the urlopen function of the urllib2 package.
urlopen stores all the information of a web page in an object, and we can read that object to get the information. For example, below we use it to fetch the Baidu homepage.
import urllib2

f = urllib2.urlopen('http://www.baidu.com')
f.read(100)
The above code reads the first 100 characters of the Baidu homepage:
<!DOCTYPE html><!--STATUS OK-->
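Besides read(), the object returned by urlopen also exposes some response metadata; a small sketch (re-opening the page, since read() consumes the response):

import urllib2

f = urllib2.urlopen('http://www.baidu.com')
print(f.geturl())   # final URL after any redirects
print(f.getcode())  # HTTP status code, e.g. 200
print(f.info())     # the response headers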
Sometimes encoding problems cause the fetched page to come out garbled; this is fixed by decoding with the right encoding format:

f.read().decode('utf-8')
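Putting the two steps together, a minimal sketch that fetches the page and decodes it explicitly (assuming the page is UTF-8 encoded; swap in whatever charset the page actually declares):

import urllib2

f = urllib2.urlopen('http://www.baidu.com')
html = f.read().decode('utf-8')  # bytes -> unicode
print(html[0:100])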
In this way we can get Lianjia's second-hand housing information:
import urllib2

url = 'http://sz.lianjia.com/ershoufang/pg'
res = urllib2.urlopen(url)
content = res.read().decode('utf-8')
Now content holds the information of the web page.

Get listings information
We've got an entire web page; next we need to extract the useful information from it. Our goal is to get the listings using regular expressions. For background on regular expressions, see this blog post: http://www.cnblogs.com/huxi/archive/2010/07/04/1771073.html
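As a quick illustration of the non-greedy quantifier used in the code below, here is a minimal sketch on a made-up sample string (the string is hypothetical, shaped only to mimic the page structure):

import re

sample = '>A</div></div><div class="flood">junk>B</div></div><div class="flood">'
# .{1,100}? matches as few characters as possible, so each listing is captured separately
print(re.findall(r'>.{1,100}?</div></div><div class="flood">', sample))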
First, look at the page source. We are interested in information that looks like the following:
data-el= "Region" > Vanke v Yuan </a> | 3 Room 2 Hall | 104.58 sq. ft. | South | Hardcover </div><
import urllib2
import re

url = 'http://sz.lianjia.com/ershoufang/pg/'
res = urllib2.urlopen(url)
content = res.read().decode('utf-8')
result = re.findall(r'>.{1,100}?</div></div><div class="flood">', content)
for i in result:
    # content is already unicode, so no second decode is needed; the slice
    # strips the trailing 31 characters: '</div></div><div class="flood">'
    print(i[0:-31])
Running this prints the information I want, but each line still contains a symbol in the middle that we do not want, so we need to remove it (admittedly this method is rather cumbersome and inefficient).
Here I use a string replacement to turn the extra characters into an empty string. The code is:
import urllib2
import re

url = 'http://sz.lianjia.com/ershoufang/pg/'
res = urllib2.urlopen(url)
content = res.read().decode('utf-8')
result = re.findall(r'>.{1,100}?</div></div><div class="flood">', content)
for i in result:
    # strip the trailing tags and remove the unwanted '</a>' in the middle
    print(i[0:-31].replace('</a>', ''))
Although the method above does obtain the listing information, it is still somewhat cumbersome and not very efficient. Using the same approach to crawl 100 pages of Lianjia second-hand housing listings, the code changes as follows:
import urllib2
import time
import re

print(time.clock())
url = 'http://sz.lianjia.com/ershoufang/pg'
for x in range(100):  # crawl 100 pages of listings
    finalurl = url + str(x) + '/'
    res = urllib2.urlopen(finalurl)
    content = res.read().decode('utf-8')
    result = re.findall(r'>.{1,100}?</div></div><div class="flood">', content)
    for i in result:
        print(i[0:-31].replace('</a>', ''))
print(time.clock())
The two time.clock() calls are there to measure the running time. The test came out at about 350 s (influenced mainly by network speed; the code itself spends most of its time in urlopen). Next time, the BeautifulSoup library will be used to obtain the listing information.
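(Note that on Windows time.clock() measures wall-clock time, while on Unix it measures CPU time, so the 350 s elapsed-time reading assumes a Windows environment; time.time() would measure wall-clock time on both.)

As a rough preview of the BeautifulSoup approach, a minimal sketch might look like this (assuming BeautifulSoup 4 is installed, and that the listing text sits in <div class="info"> elements; the actual class name on the page may differ):

import urllib2
from bs4 import BeautifulSoup

res = urllib2.urlopen('http://sz.lianjia.com/ershoufang/pg1/')
soup = BeautifulSoup(res.read(), 'html.parser')
# find_all collects every matching div; get_text flattens nested tags into readable text
for div in soup.find_all('div', class_='info'):
    print(div.get_text(' | ', strip=True))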