A crawler works in two steps: fetching the webpage text and extracting the data you need.
1. Obtain the HTML text.
Getting HTML with Python is very convenient; a few lines of code implement everything we need.
The code is as follows:

import urllib

def getHtml(url):
    # open the URL and read back the raw page source (Python 2's urllib)
    page = urllib.urlopen(url)
    html = page.read()
    page.close()
    return html
What these lines do should be clear even without comments.
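Note that urllib.urlopen only exists in Python 2. If you are on Python 3, urlopen moved into urllib.request; a minimal equivalent sketch (assuming the page is UTF-8 encoded):

import urllib.request

def getHtml(url):
    # in Python 3, urlopen lives in urllib.request and read() returns bytes
    page = urllib.request.urlopen(url)
    html = page.read().decode('utf-8')  # assumption: the page uses UTF-8
    page.close()
    return html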
2. Extract the required content with regular expressions.
When using regular expressions, you must carefully examine the structure of the information in the webpage and write a pattern that matches it exactly.
Python regular expressions are also simple to use. My previous article "some usage of Python" introduced a few of them; here we need a new usage:

import re

def getWeather(html):
    # each match yields a (city, lowest temperature, highest temperature) tuple
    reg = '<a title=.*?>(.*?)</a>.*?<span>(.*?)</span>.*?<b>(.*?)</b>'
    weatherList = re.compile(reg).findall(html)
    return weatherList
Here reg is the regular expression and html is the text obtained in the first step. findall finds every substring of html that matches the pattern and collects them in weatherList, which we can then enumerate to output the data.
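As a quick sanity check, you can feed getWeather a hand-written fragment shaped like the markup the pattern expects. The HTML below is hypothetical; the real page's markup may differ, so adjust the pattern to what you actually see:

# hypothetical fragment mimicking the structure reg was written for
sample = ('<a title="detail">Beijing</a> <span>-5</span> ~ <b>3</b>'
          '<a title="detail">Shanghai</a> <span>2</span> ~ <b>8</b>')
print(getWeather(sample))
# prints [('Beijing', '-5', '3'), ('Shanghai', '2', '8')]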
The regular expression reg here has two points to note; both are demonstrated in the short sketch after them.
One is "(. *?)". As long as the content in () is what we will get, if there are multiple parentheses, each findall result will contain the content in these parentheses. There are three parentheses on them, respectively for the city, the lowest temperature and the highest temperature.
The other is ".*?". Python's regular expression matching is greedy by default, meaning it matches as many characters as possible. Appending a question mark switches to non-greedy mode, which matches as few characters as possible. Because we need to match the information of multiple cities, non-greedy matching is required; otherwise everything collapses into a single match, which is incorrect.
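Both points are easy to verify in the interpreter; a minimal sketch with made-up strings:

import re

# one capture group: findall returns plain strings
print(re.findall('<b>(.*?)</b>', '<b>3</b><b>8</b>'))
# ['3', '8']

# several capture groups: findall returns one tuple per match
print(re.findall('<span>(.*?)</span>.*?<b>(.*?)</b>', '<span>-5</span> ~ <b>3</b>'))
# [('-5', '3')]

# greedy ".*" swallows everything up to the last </span>
print(re.findall('<span>(.*)</span>', '<span>Beijing</span><span>Shanghai</span>'))
# ['Beijing</span><span>Shanghai']

# non-greedy ".*?" stops at the first </span>, one result per city
print(re.findall('<span>(.*?)</span>', '<span>Beijing</span><span>Shanghai</span>'))
# ['Beijing', 'Shanghai']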
Python is very convenient to use :)