A clever use of decorators in Python

This article introduces a handy use of Python decorators that came up while writing a crawler: define a decorator so that if the data has been fetched before, it is read straight from the cache, and if it hasn't, it is pulled from the website and stored in the cache. All right, I know it's past midnight ......, but I still think the idea from the past half hour is worth sharing ~ On to the topic ~

Imagine the scenario: you need to crawl a page; that page contains many sub-URLs, each of which must be crawled in turn; and after entering those sub-URLs there is still more data to fetch. In short, three layers. Our code then looks like this:

The code is as follows:


def func_top(url):
    data_dict = {}

    # Obtain the sub-urls on the page
    sub_urls = xxxx

    data_list = []
    for it in sub_urls:
        data_list.append(func_sub(it))

    data_dict['data'] = data_list

    return data_dict

def func_sub(url):
    data_dict = {}

    # Obtain the bottom-level urls on the page
    bottom_urls = xxxx

    data_list = []
    for it in bottom_urls:
        data_list.append(func_bottom(it))

    data_dict['data'] = data_list

    return data_dict

def func_bottom(url):
    # Fetch the data for this url
    data = xxxx
    return data

func_top handles the top-level page, func_sub handles the sub-pages, and func_bottom handles the deepest pages. After func_top collects the sub-page URLs, it calls func_sub on each of them in turn, and func_sub does the same with func_bottom.
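The xxxx placeholders above stand for the actual fetching and parsing, which the article leaves out. For concreteness, a func_bottom might look roughly like this (a hypothetical sketch using urllib.request and a made-up regex; adapt both to the real site):

import re
import urllib.request

def func_bottom(url):
    # Download the page (a real crawler would add retries on failure)
    html = urllib.request.urlopen(url, timeout=10).read().decode('utf-8')

    # Extract the fields of interest; this pattern is purely illustrative
    data = re.findall(r'<td class="price">(.*?)</td>', html)
    return data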

Under normal circumstances this is enough to meet the need, but the website you want to crawl may be extremely unstable and drop connections frequently, so some of the data simply cannot be fetched.

At this point, you have two options:

1. Stop when an error occurs, then re-run later from the point of interruption.
2. Continue past errors, then run the whole job again later; this time you don't want to re-fetch the data you already have from the website, only the data that is still missing.

The first option is basically infeasible: if the website reorders its URLs, the recorded position becomes meaningless. That leaves the second option. Bluntly, we need to cache the data we have already fetched and read it straight from the cache when it is needed again.

OK, the goal is clear. How do we get there?

If this were C++, it would be quite a hassle, and the resulting code would surely be ugly; fortunately we are using Python, and Python has decorators for functions.

So the implementation scheme is as follows:

Define a decorator: if the data has been fetched before, it is read straight from the cache; if not, it is pulled from the website and saved to the cache.

The code is as follows:


import os
import hashlib

def get_dump_data(dir_name, url):
    m = hashlib.md5(url.encode('utf-8'))
    filename = m.hexdigest()
    full_file_name = 'dumps/%s/%s' % (dir_name, filename)

    if os.path.isfile(full_file_name):
        # Dumped before: read the data back from the cache file
        return eval(open(full_file_name, 'r').read())
    else:
        return None


def set_dump_data(dir_name, url, data):
    if not os.path.isdir('dumps/' + dir_name):
        os.makedirs('dumps/' + dir_name)

    m = hashlib.md5(url.encode('utf-8'))
    filename = m.hexdigest()
    full_file_name = 'dumps/%s/%s' % (dir_name, filename)

    f = open(full_file_name, 'w')
    f.write(repr(data))
    f.close()


def deco_dump_data(func):
    def func_wrapper(url):
        # Try the cache first; the function name doubles as the cache directory
        data = get_dump_data(func.__name__, url)
        if data is not None:
            return data

        # Cache miss: pull from the website, then save to the cache
        data = func(url)
        if data is not None:
            set_dump_data(func.__name__, url, data)
        return data

    return func_wrapper
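One note on the serialization choice: the cache stores data with repr() and reads it back with eval(), which works for plain Python literals (dicts, lists, strings) produced by your own crawler, but eval() will execute whatever is in the file. If the dump directory could ever hold untrusted content, ast.literal_eval or the json module would be safer choices; that swap is a suggestion here, not something the original code does.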


Then we only need to add the deco_dump_data decorator to each of func_top, func_sub, and func_bottom, as shown below ~~
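Concretely, with Python's @ syntax (function bodies unchanged from before):

@deco_dump_data
def func_top(url):
    ...  # body as before

@deco_dump_data
def func_sub(url):
    ...  # body as before

@deco_dump_data
def func_bottom(url):
    ...  # body as before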

Done! The biggest advantage is that every layer (top, sub, bottom) dumps its own data, so once a sub layer's data has been dumped, the crawler never even descends into the corresponding bottom layer, which saves a lot of work!
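Here is a minimal demo of that short-circuit, with a hypothetical URL and a counter standing in for the real network call (assuming a fresh dumps/ directory):

call_count = 0

@deco_dump_data
def func_bottom(url):
    global call_count
    call_count += 1          # pretend this is the expensive fetch
    return 'data for ' + url

func_bottom('http://example.com/a')  # cache miss: runs the body, dumps the result
func_bottom('http://example.com/a')  # cache hit: read from dumps/func_bottom/, body skipped
print(call_count)                    # prints 1

Re-running the whole script later gives the same effect: anything already under dumps/ is skipped, which is exactly the resume-after-failure behavior we wanted.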

OK ~ Life is short, I use Python!
