I recently learned Node.js and had just seen the http module, so I was eager to write a simple crawler. The crawling principle is simple: send an HTTP request to the target address to fetch the HTML page, then extract the required data from the returned page and save it. Writing a crawler with Node.js mainly involves sending the request to the target address with http.get, listening for incoming chunks in res.on("data") and accumulating them, and then, once the transfer finishes in res.on("end"), processing and saving the data.

Let's start with the steps. I used the Express framework. First, go to the project directory and type "express -e myCreeper" on the command line to generate the Express skeleton. Enter the myCreeper directory and run npm install. With that, the project is set up.

Once the project is set up, you can start writing. The main functional code of the crawler consists of the following two sections:

[Front-end code]

<body>
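For reference, a minimal sketch of the request/accumulate/process flow described earlier might look like this; the target URL and variable names are placeholders of mine, not the project's actual code:

```javascript
// Minimal sketch: fetch a page with http.get, buffer the chunks,
// and process the full HTML once the response ends.
var http = require('http');

var url = 'http://www.example.com/'; // placeholder target address

http.get(url, function (res) {
  var html = '';

  // res.on("data") fires once per chunk; accumulate them.
  res.on('data', function (chunk) {
    html += chunk;
  });

  // res.on("end") fires when the transfer is complete;
  // extract and save the required data here.
  res.on('end', function () {
    console.log('Received ' + html.length + ' characters of HTML');
  });
}).on('error', function (err) {
  console.log('Request failed: ' + err.message);
});
```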