Simply put, the crawler consists of two steps: get the web page text, then filter out the data you need.
1. Get HTML text.
Python makes fetching HTML very easy; just a few lines of code do what we need.
The code is as follows:
import urllib

def getHtml(url):
    # Download the page and return its HTML text
    # (Python 2's urllib; in Python 3 use urllib.request.urlopen instead)
    page = urllib.urlopen(url)
    html = page.read()
    page.close()
    return html
These few lines of code are simple enough that you can probably tell what they do without comments.
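For example, a minimal sketch of calling it (the URL here is only a placeholder, not the address from the original article):

html = getHtml("http://example.com/weather")
print(html)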
2. Extract the required content with a regular expression.
Using a regular expression requires carefully observing how the information is structured in the page, then writing a pattern that matches it correctly.
Python's regular expressions are also very simple to use. My previous article, "Some uses of Python," introduced some basics of regular expressions; one new usage is needed here:
The code is as follows:
import re

def getWeather(html):
    # NOTE: the literal HTML tags that originally surrounded each group were
    # lost in extraction; fill in the tags that actually wrap the city name
    # and the two temperatures on the target page.
    reg = '(.*?).*?(.*?).*?(.*?)'
    weatherList = re.compile(reg).findall(html)
    return weatherList
Here reg is the regular expression and html is the text obtained in the first step. findall finds all the substrings of html that match the pattern and puts them into weatherList. The data in weatherList can then be enumerated for output.
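For instance, a sketch of the enumeration, assuming each item is a (city, low, high) tuple produced by the three capture groups:

for city, low, high in getWeather(html):
    print("%s %s %s" % (city, low, high))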
There are two things to note in the regular expression reg.
One is "(. *?)". As long as the content in () is what we are going to get, if there are multiple parentheses, then each result of FindAll contains the contents of these parentheses. There are three brackets, corresponding to the city, the lowest and the highest temperature.
The other is ".*?". Python's regular matching is greedy by default, meaning it matches as many characters as possible. Adding a question mark makes it non-greedy, matching as few characters as possible. Because information for multiple cities needs to be matched here, non-greedy mode is required; otherwise there would be only a single match, which is incorrect.
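A minimal, self-contained illustration of the difference, using toy data rather than the real page:

import re

s = '<b>10</b><b>20</b>'
print(re.findall('<b>(.*)</b>', s))   # greedy: ['10</b><b>20']
print(re.findall('<b>(.*?)</b>', s))  # non-greedy: ['10', '20']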
Python is really handy to use :)