The goal is to crawl all the data on the http://www.gg4493.cn/ home page: the name, time, source, and body text of each news article.
Next, break the task down and do it step by step.
Step 1: Crawl all the links on the home page and write to the file.
Python is very handy for fetching HTML; just a few lines of code do what we need.
Copy the code as follows:
import urllib

def gethtml(url):
    page = urllib.urlopen(url)
    html = page.read()
    page.close()
    return html
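The code above targets Python 2, where urllib.urlopen still exists. For reference, a minimal Python 3 sketch of the same helper would use urllib.request and decode the response bytes to text (the UTF-8 assumption is mine, not the article's):

```python
# Python 3 sketch of the fetch helper above: download a URL and
# return its content as text. Assumes the page is UTF-8 encoded.
import urllib.request

def gethtml(url):
    page = urllib.request.urlopen(url)
    # read() returns bytes in Python 3, so decode to str.
    html = page.read().decode('utf-8', errors='replace')
    page.close()
    return html
```

The errors='replace' argument keeps the function from crashing on pages whose declared and actual encodings disagree.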
We all know that the HTML tag for a link is "a" and that the link itself sits in the "href" attribute, so the task is to collect every href value from every a tag in the HTML.
After reading the documentation, I started with HTMLParser and got it working, but it has a problem: it cannot handle Chinese characters.
Copy the code as follows:
import HTMLParser

class Parser(HTMLParser.HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for attr, value in attrs:
                if attr == 'href':
                    print value
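The Chinese-character problem described above is an encoding issue in the old Python 2 HTMLParser. In Python 3, html.parser works on decoded str and handles non-ASCII text without trouble; a small sketch (my example markup, not the article's):

```python
# Python 3 sketch: html.parser copes with Chinese characters once the
# input has been decoded to str. The sample markup is made up.
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href attribute of every <a> tag.
        if tag == 'a':
            for attr, value in attrs:
                if attr == 'href':
                    self.links.append(value)

p = LinkParser()
p.feed(u'<a href="http://example.com/\u4e2d\u6587">\u94fe\u63a5</a>')
print(p.links)  # ['http://example.com/中文']
```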
Later I switched to SGMLParser, which does not have this problem.
Copy the code as follows:
from sgmllib import SGMLParser

class Urlparser(SGMLParser):
    def reset(self):
        SGMLParser.reset(self)
        self.urls = []

    def start_a(self, attrs):
        href = [v for k, v in attrs if k == 'href']
        if href:
            self.urls.extend(href)
With SGMLParser you override a start_&lt;tag&gt; method for each tag you care about; here start_a puts every link into the class's urls list.
Copy the code as follows:
import re
import urllib

lparser = Urlparser()                              # create the parser
socket = urllib.urlopen("http://www.gg4493.cn/")   # open the page
fout = file('urls.txt', 'w')                       # the file to write the links to
lparser.feed(socket.read())                        # parse the page
reg = 'http://www.gg4493.cn/.*'                    # regular expression matching the links we want
pattern = re.compile(reg)
for url in lparser.urls:                           # all links are stored in urls
    if pattern.match(url):
        fout.write(url + '\n')
fout.close()
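sgmllib was removed in Python 3, so the whole step can be redone there with html.parser plus the same regex filter. A self-contained Python 3 sketch of the parse-filter-write pipeline above (the sample HTML and link URLs are invented for illustration):

```python
# Python 3 end-to-end sketch: extract all <a href> values, keep only
# those matching the site prefix, and write them to urls.txt.
import re
from html.parser import HTMLParser

class URLParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.urls.extend(v for k, v in attrs if k == 'href')

# Made-up stand-in for socket.read() on the live page.
html = ('<a href="http://www.gg4493.cn/news/1.html">one</a>'
        '<a href="http://other.example.com/x">skip</a>'
        '<a href="http://www.gg4493.cn/news/2.html">two</a>')

parser = URLParser()
parser.feed(html)

pattern = re.compile(r'http://www\.gg4493\.cn/.*')
matched = [u for u in parser.urls if pattern.match(u)]

with open('urls.txt', 'w') as fout:
    for url in matched:
        fout.write(url + '\n')
```

Note the dots in the domain are escaped in the regex; the article's unescaped pattern would also match look-alike domains such as http://www.gg4493xcn/.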