A web crawler is a computer program that simulates a human using a browser to navigate web pages and extract the information it needs. This saves manual effort and avoids missing information. A concrete example is finding movie resources on the network: we have all tried to track down old movies, for which sources are usually scarce. Normally we would have to browse through many web pages to find a download address for the movie and then judge which addresses are valid; a web crawler can automate this process with a program that returns the final address directly to the user.
Because a crawler simulates browser behavior, the better we understand the patterns of that behavior, the more accurately the crawler we write can return the results we need. Currently, crawling a web page falls into two main situations:
One is that the web page requires no special processing and can be accessed directly. Such pages can be crawled at any time and are relatively simple to handle, such as retrieving information from Baidu.
The other requires special processing, such as logging in, coping with a limit on the number of visits per period of time, or loading additional content. These pages are more complex, and they must be crawled according to the specific situation.
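The two situations above can be sketched in Python with the standard library alone. This is a minimal, illustrative sketch, not a production crawler: the URL, the `User-Agent` string, and the one-second politeness interval are all assumptions introduced here, and real login handling would additionally need a cookie-aware opener.

```python
import time
import urllib.request


def build_request(url, user_agent="Mozilla/5.0 (compatible; demo-crawler)"):
    """Mimic a browser by attaching a User-Agent header (values are illustrative)."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})


def fetch(url, timeout=10):
    """Situation 1: a page that needs no special handling -- fetch it directly."""
    with urllib.request.urlopen(build_request(url), timeout=timeout) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        return resp.read().decode(charset)


class PoliteFetcher:
    """Situation 2 (one aspect): respect a per-period visit limit by
    pausing between requests; the 1-second default interval is an assumption."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = 0.0

    def fetch(self, url):
        # Sleep just long enough to keep at most one request per interval.
        wait = self.min_interval - (time.time() - self._last)
        if wait > 0:
            time.sleep(wait)
        self._last = time.time()
        return fetch(url)
```

A caller would use `fetch(url)` for freely accessible pages and route rate-limited sites through a single shared `PoliteFetcher` instance.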