A JavaScript exercise a day: extract all links on the current page and append them to a list at the end of the page.
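The exercise above can be sketched as follows. This is only a sketch: the list-building core is a pure function of my own naming, so the only browser-specific part is the DOM call shown in the usage comment.

```javascript
// Build an HTML <ul> from an array of href strings (pure, no DOM needed).
function buildLinkList(hrefs) {
  var items = hrefs.map(function (href) {
    return '<li>' + href + '</li>';
  });
  return '<ul>' + items.join('') + '</ul>';
}

// Browser usage (run in the page's console):
// var hrefs = Array.prototype.map.call(
//   document.querySelectorAll('a[href]'),
//   function (a) { return a.href; }
// );
// document.body.insertAdjacentHTML('beforeend', buildLinkList(hrefs));
```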
Everyone, how can I use JavaScript to retrieve all the content of the current page and save it to a specified directory?
You can use document.documentElement.outerHTML to obtain the page's HTML.
To save the file you can create a FileSystemObject, e.g. fso = new ActiveXObject('Scripting.FileSystemObject'). But this is a security risk: it only works in old IE, and the browser's security settings are hard to configure for it.
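A minimal sketch of the two steps just described, assuming 'Scripting.FileSystemObject' is the intended ProgID; the saving half is IE-only and requires ActiveX to be allowed, which is exactly the security problem mentioned above.

```javascript
// Grab the full markup of the current page (works in any modern browser).
function getPageHtml() {
  return '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
}

// IE-only: write the page's HTML to a local file via the FileSystemObject
// ActiveX control. Requires lowered security settings; path is an assumption.
function savePageHtml(path) {
  var fso = new ActiveXObject('Scripting.FileSystemObject');
  var file = fso.CreateTextFile(path, /* overwrite */ true);
  file.Write(getPageHtml());
  file.Close();
}
```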
The specific approach depends on your requirements and runtime environment.
Scripting is powerful, but it also has serious limitations; whether it fits depends on where it is used.
I haven't used this in a long time, so I can only give you the idea; you'll have to write the code yourself.
How to use JavaScript to read content from another page
To do this, you must first deal with the browser's same-origin policy: JavaScript on pages from different origins cannot access each other directly, so it is hard to obtain the target page's document object from within your own page.
Since you cannot get it directly, you can use an XMLHttpRequest (or a similar technique) to fetch the target page, but the result is only an HTML string. You then need to parse that string, and parsing HTML is exactly what the browser itself does; to put it simply, you would be implementing part of a browser.
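The flow just described can be sketched like this; function names are my own, the URL is a placeholder, and the string-level "parsing" shown is deliberately crude — in a real browser you would hand the string to DOMParser instead.

```javascript
// Fetch a page as a raw HTML string (browser only; the same-origin policy
// described above means this works only for same-origin URLs).
function fetchHtml(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onload = function () { callback(xhr.responseText); };
  xhr.send();
}

// Crude string-level extraction: pull the <title> out of an HTML string.
// This is the "parse the HTML string yourself" step in miniature.
function extractTitle(html) {
  var m = /<title[^>]*>([\s\S]*?)<\/title>/i.exec(html);
  return m ? m[1].trim() : null;
}
```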
Of course, some simple parsers (in Java, for example) can handle simple HTML, but I cannot say for certain whether they will meet your requirements.
One last trick: inject your own script into the fetched HTML string to form a new HTML document, then hand that new document to the browser to parse. In theory, this lets you process any web page.
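A sketch of that last trick, under my own naming: splice a script tag into the HTML string, then let the browser parse the combined document (for example via an iframe's srcdoc). The injected script then runs inside the parsed page and can report back.

```javascript
// Splice a <script> into an HTML string, just before </body> when present.
// The browser, not your code, then does the actual HTML parsing.
function injectScript(html, scriptCode) {
  var tag = '<script>' + scriptCode + '<\/script>';
  if (/<\/body>/i.test(html)) {
    return html.replace(/<\/body>/i, tag + '</body>');
  }
  return html + tag;
}

// Browser usage (assumption — one of several ways to hand the new HTML over):
// var frame = document.createElement('iframe');
// frame.srcdoc = injectScript(fetchedHtml,
//   'parent.postMessage(document.title, "*");');
// document.body.appendChild(frame);
```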