Objective
Sometimes it is not easy to locate an element on a page by its attributes, but the information can still be extracted from the page source. Selenium's page_source attribute returns the full HTML source of the current page.
page_source is rarely used; I only stumbled across it recently while browsing the API. On a whim, I combined it with Python's re module and a regular expression to extract every URL on a page. Those URLs can then be requested in bulk to check for 404s and other anomalies.
First, page_source
1. Selenium's page_source attribute returns the page source code directly.
2. Assign it to a variable and print it out.
Second, re non-greedy mode
1. Import the re module.
2. Match with a regular expression in non-greedy mode.
3. The findall method returns a list.
4. Inspecting the matches shows that some are not real URL links, so they need to be filtered out.
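The steps above can be tried without a browser by running the same non-greedy pattern over a small HTML snippet (the snippet and variable names here are made up for illustration; in the real script, page comes from driver.page_source):

```python
import re

# A small stand-in for driver.page_source (hypothetical snippet).
page = '''<a href="http://www.cnblogs.com/yoyoketang/">blog</a>
<a href="#top">top</a>
<a href="http://www.cnblogs.com/yoyoketang/p/1.html">post</a>'''

# Non-greedy: (.*?) stops at the first closing quote it reaches.
# re.S lets '.' also match newlines, so a match can span lines.
url_list = re.findall('href="(.*?)"', page, re.S)
print(url_list)
# -> ['http://www.cnblogs.com/yoyoketang/', '#top',
#     'http://www.cnblogs.com/yoyoketang/p/1.html']
```

Note the `#top` entry: findall faithfully returns every href value, which is why the filtering step in the next section is needed.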
Third, filtering the URL addresses
1. Add an if statement: a match that contains 'http' is treated as a normal URL address.
2. Collect all the valid URL addresses into one list; that is the result we want.
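The two filtering steps above can be sketched on their own with a made-up sample of findall results:

```python
# Matches as they might come back from re.findall (made-up sample).
url_list = ['http://www.cnblogs.com/yoyoketang/',
            '#top',
            'javascript:void(0)',
            'http://www.cnblogs.com/yoyoketang/p/1.html']

url_all = []
for url in url_list:
    if "http" in url:        # keep only entries that look like real URLs
        url_all.append(url)

print(url_all)
# -> ['http://www.cnblogs.com/yoyoketang/',
#     'http://www.cnblogs.com/yoyoketang/p/1.html']
```

The anchor fragment and javascript: pseudo-link are dropped because neither contains "http".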
Fourth, reference code
# coding:utf-8
from selenium import webdriver
import re

driver = webdriver.Firefox()
driver.get("http://www.cnblogs.com/yoyoketang/")
page = driver.page_source
# print(page)

# Non-greedy match; re.S makes '.' match any character, including newlines
url_list = re.findall('href="(.*?)"', page, re.S)

url_all = []
for url in url_list:
    if "http" in url:
        print(url)
        url_all.append(url)

# the final list of URLs
print(url_all)
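Once url_all is collected, the 404 check mentioned in the objective can be sketched with only the standard library. get_status is a hypothetical helper (not part of the original script), and the demo runs against a throwaway local server instead of the real blog so it works without network access:

```python
import http.server
import threading
import urllib.error
import urllib.request

def get_status(url):
    """Return the HTTP status code for url (hypothetical helper)."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

# Throwaway local server standing in for the real site.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

status_ok = get_status("http://127.0.0.1:%d/" % port)              # existing path
status_missing = get_status("http://127.0.0.1:%d/missing" % port)  # broken link
print(status_ok, status_missing)
```

In the real script you would loop over url_all and report every URL whose status is 404 or another error code.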
Selenium2+python Automation 37-Crawl page source code (PAGE_SOURCE)