An amazing use of the Python decorator
Okay, I know it's midnight...... However, I still think it's worth spending half an hour sharing this latest idea ~ On to the topic ~
Let's simulate a scenario: you need to crawl a page, and that page contains many URLs that each need to be crawled separately; after entering those sub-URLs, there is still more data to fetch. Simply put, we are looking at three layers. Our code is as follows:
def func_top(url):
    data_dict = {}

    # obtain the sub-urls on this page
    sub_urls = xxxx

    data_list = []
    for it in sub_urls:
        data_list.append(func_sub(it))

    data_dict['data'] = data_list
    return data_dict

def func_sub(url):
    data_dict = {}

    # obtain the bottom-level urls on this page
    bottom_urls = xxxx

    data_list = []
    for it in bottom_urls:
        data_list.append(func_bottom(it))

    data_dict['data'] = data_list
    return data_dict

def func_bottom(url):
    # get the actual data
    data = xxxx
    return data
func_top is the handler for the top-level page, func_sub handles the sub-pages, and func_bottom handles the deepest pages. func_top obtains the sub-page URLs and calls func_sub on each of them in turn, and func_sub does the same with func_bottom.
Under normal circumstances this is indeed enough, but the website you want to crawl may be extremely unstable and frequently unreachable, so some of the data cannot be fetched.
At this point, you have two options:
1. Stop when an error occurs, and later re-run from the point where it broke off.
2. Skip past the error and keep going, then run the whole thing again later; on the second run you don't want to pull data you already have from the website again, only the data that is still missing.
The first option is basically not feasible, because if the site reorders its URLs, the recorded position becomes invalid. That leaves only the second option. To put it bluntly, we need to cache the data we have already fetched and read it straight from the cache when needed.
OK, the goal is clear. How do we achieve it?
In C++ this would be quite a hassle, and the resulting code would inevitably be ugly, but fortunately we are using Python, and Python has decorators for functions.
So the implementation plan is as follows:
Define a decorator: if the data was fetched before, take it straight from the cache; if not, pull it from the website and then save it to the cache.
The code is as follows:
import hashlib
import os

def get_dump_data(dir_name, url):
    # the cache filename is the md5 of the url
    m = hashlib.md5(url.encode('utf-8'))
    filename = m.hexdigest()
    full_file_name = 'dumps/%s/%s' % (dir_name, filename)
    if os.path.isfile(full_file_name):
        # the data was stored as repr(), so eval() restores it
        return eval(open(full_file_name, 'r').read())
    else:
        return None

def set_dump_data(dir_name, url, data):
    if not os.path.isdir('dumps/' + dir_name):
        os.makedirs('dumps/' + dir_name)
    m = hashlib.md5(url.encode('utf-8'))
    filename = m.hexdigest()
    full_file_name = 'dumps/%s/%s' % (dir_name, filename)
    f = open(full_file_name, 'w+')
    f.write(repr(data))
    f.close()

def deco_dump_data(func):
    def func_wrapper(url):
        # return cached data if it exists
        data = get_dump_data(func.__name__, url)
        if data is not None:
            return data
        # otherwise fetch it and cache the result
        data = func(url)
        if data is not None:
            set_dump_data(func.__name__, url, data)
        return data
    return func_wrapper
Then we only need to apply the deco_dump_data decorator to each of func_top, func_sub, and func_bottom, as shown below ~~
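For example, applying it looks something like this (a minimal sketch building on the definitions above; the function bodies are elided with ... and the start URL is just a made-up placeholder):

@deco_dump_data
def func_top(url):
    ...  # same body as above

@deco_dump_data
def func_sub(url):
    ...  # same body as above

@deco_dump_data
def func_bottom(url):
    ...  # same body as above

# first run: everything is fetched and dumped under dumps/func_top/, dumps/func_sub/, dumps/func_bottom/
# re-run after a failure: already-dumped results are read straight from disk
result = func_top('http://example.com/start')  # placeholder start url

Note that func_wrapper accepts exactly one url argument, which matches the signature of all three crawl functions here; functions with other signatures would need a more general wrapper.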
Done! The biggest advantage is that since each layer (top, sub, bottom) dumps its own data, once a sub-level result has been dumped, the corresponding bottom-level pages are not visited again at all, saving a lot of overhead!
OK ~ Life is short, I use Python!