How to capture web page information using Python [1]: Capture web page information using Python

Using two websites as examples, this article illustrates how Python can capture static page information from the Internet. This first part describes how to build the capture framework.

1. Create a configuration file that records the task information for each website to be crawled, and name it config.json:

{  "jobs":[    {      "url": "http://123.msn.com/",      "id": "123msn",      "encoding": "utf-8"    },    {      "url": "http://news.sina.com.cn/",      "id": "news.sina",      "encoding": "gb2312"    }  ]}

2. Create a Python file for each capture task, named "crawler_msn.py" and "crawler_sina.py" respectively, each containing the following stub:

def crawl(args):
    website_url = args['url']
    print('[Message] Start crawling for ' + website_url)
    print('[Message] TODO - Implement the crawl logic here...')
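The stub leaves the actual fetching as a TODO. As a rough sketch of what the body might look like (my own illustration, not from the original article), the crawler could fetch the page with the standard library, decode it with the encoding declared for the job in config.json, and save it under the job id:

import urllib.request

def crawl(args):
    website_url = args['url']
    print('[Message] Start crawling for ' + website_url)
    # Fetch the raw bytes and decode them with the per-job encoding
    with urllib.request.urlopen(website_url) as response:
        html = response.read().decode(args['encoding'], errors='replace')
    # crawler.py (step 3) changes into the per-day data folder before
    # calling crawl(), so a relative file name lands in the right place
    with open(args['id'] + '.html', 'w', encoding='utf-8') as f:
        f.write(html)
    print('[Message] Saved ' + args['id'] + '.html')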

3. Create crawler.py with the following content:

import os
import sys
import json
from datetime import datetime

import crawler_sina
import crawler_msn

configuration_file = "config.json"

if __name__ == '__main__':
    print('This is the main module!')
    os.chdir(sys.path[0])

    # Read the crawler job definitions before changing directories,
    # so the relative path to config.json still resolves
    with open(configuration_file, encoding='utf-8') as file_configs:
        data = json.load(file_configs)

    # Create a root folder for crawled data if it does not exist
    data_root = 'data'
    if not os.path.exists(data_root):
        os.mkdir(data_root)
    os.chdir(data_root)

    # Create a sub-folder per day for the crawled data
    path = str(datetime.today().date())
    if not os.path.exists(path):
        os.mkdir(path)
    os.chdir(path)

    # Map each job id in config.json to its crawler module
    crawler_dict = dict()
    crawler_dict['news.sina'] = crawler_sina
    crawler_dict['123msn'] = crawler_msn

    # Execute each crawler job; the per-job 'encoding' travels in the
    # job dict for each crawler module to use when decoding pages
    for job in data['jobs']:
        print(crawler_dict[job['id']])
        crawler_dict[job['id']].crawl(job)

With this, the capture framework is ready. Running crawler.py prints the progress messages and creates a data folder alongside crawler.py, in which the captured data is stored in one sub-folder per day.
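For the two sample jobs, and assuming each crawler saves its page as <id>.html as in the sketch above, a run on 2019-05-20 (for example) would leave a layout like this:

crawler.py
crawler_msn.py
crawler_sina.py
config.json
data/
    2019-05-20/
        123msn.html
        news.sina.html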

 

Next, let's take a look at how to capture part of the webpage content on 123msn.
