How to Capture Web Page Information Using Python [1]: Building a Capture Framework
We will use the information of two websites as examples to illustrate how Python can capture static page information from the Internet. This section describes how to build the capture framework.
1. Create a configuration file named config.json to record the task information for each website to be crawled:
{ "jobs":[ { "url": "http://123.msn.com/", "id": "123msn", "encoding": "utf-8" }, { "url": "http://news.sina.com.cn/", "id": "news.sina", "encoding": "gb2312" } ]}
2. Create a Python file for each capture task, named crawler_msn.py and crawler_sina.py respectively. Both start with the same stub:
def crawl(args):
    website_url = args['url']
    print('[Message] Start crawling for ' + website_url)
    print('[Message] TODO - Implement the crawl logic here...')
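The real capture logic comes in the next section; as a preview, a minimal crawl body might look like the following sketch. It uses only the standard library; the output file name (based on the job id) and the request headers are illustrative choices, not part of the original framework:

from urllib.request import urlopen, Request

def crawl(args):
    website_url = args['url']
    print('[Message] Start crawling for ' + website_url)
    # Some sites reject requests that carry no User-Agent header
    request = Request(website_url, headers={'User-Agent': 'Mozilla/5.0'})
    raw = urlopen(request, timeout=30).read()
    # Decode with the per-site encoding recorded in config.json
    html = raw.decode(args['encoding'], errors='replace')
    # Save into the current working directory, which crawler.py
    # has already set to the per-day data folder
    with open(args['id'] + '.html', 'w', encoding='utf-8') as f:
        f.write(html)
    print('[Message] Saved ' + str(len(html)) + ' characters for ' + website_url)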
3. Create crawler.py with the following content:
import os
import sys
from datetime import datetime
import json

import crawler_sina
import crawler_msn

configuration_file = "config.json"

if __name__ == '__main__':
    print('This is the main module!')
    # Work relative to the directory that contains crawler.py
    os.chdir(sys.path[0])
    file_configs = open(configuration_file, encoding='utf-8')

    # Create a root folder for crawled data if it does not exist
    dataRoot = 'data'
    if not os.path.exists(dataRoot):
        os.mkdir(dataRoot)
    os.chdir(dataRoot)

    # Create a sub-folder for crawled data, one per day
    path = str(datetime.today().date())
    if not os.path.exists(path):
        os.mkdir(path)
    os.chdir(path)

    # Map each job id to the module that drives its crawler
    crawler_Dict = dict()
    crawler_Dict['news.sina'] = crawler_sina
    crawler_Dict['123msn'] = crawler_msn

    # Read the crawler job definitions
    data = json.load(file_configs)
    file_configs.close()

    # Execute each crawler; the per-site encoding travels inside the
    # job dict, so each crawl() can decode the page itself (the old
    # reload(sys)/sys.setdefaultencoding() trick works only in Python 2)
    for job in data['jobs']:
        print(crawler_Dict[job['id']])
        crawler_Dict[job['id']].crawl(job)
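Note that adding a third site means editing both config.json and the hard-coded crawler_Dict. If you would rather avoid the second step, the dict can be built dynamically with importlib. The sketch below assumes a hypothetical naming convention mapping a job id to a module name; the original files, crawler_msn.py and crawler_sina.py, do not follow it, so renaming them would be part of the change:

import importlib

def load_crawlers(jobs):
    # Hypothetical convention: job id "news.sina" -> module crawler_news_sina
    crawlers = {}
    for job in jobs:
        module_name = 'crawler_' + job['id'].replace('.', '_')
        crawlers[job['id']] = importlib.import_module(module_name)
    return crawlers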
With this, the capture framework is ready. Running crawler.py displays the printed messages and creates a data folder, in the directory containing crawler.py, where the captured data will be stored.
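For example, after a run the project directory would look roughly like this (the name of the per-day folder depends on when you run it):

crawler.py
config.json
crawler_msn.py
crawler_sina.py
data/
    YYYY-MM-DD/    one sub-folder per day, created by crawler.py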
In the next section, we will look at how to capture part of the page content on 123msn.