"Disclaimer: Copyright reserved by the author. Reprinting is welcome, but do not use for commercial purposes. Contact: feixiaoxing@163.com"
There is a lot of web-page crawling code floating around online. But after reading the Go language's page-download code, I found that it is the simplest of all. If you don't believe it, take a look.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get("http://www.baidu.com")
	if err != nil {
		// handle error
		fmt.Println(err)
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusOK {
		fmt.Println(resp.StatusCode)
	}

	buf := make([]byte, 1024)

	// create file
	f, err1 := os.OpenFile("baidu.html", os.O_RDWR|os.O_CREATE|os.O_APPEND, os.ModePerm)
	if err1 != nil {
		panic(err1)
	}
	defer f.Close()

	for {
		n, _ := resp.Body.Read(buf)
		if 0 == n {
			break
		}
		f.WriteString(string(buf[:n]))
	}
}
In this code, notice the functions http.Get, os.OpenFile, resp.Body.Read, and f.WriteString. Can you imagine what each of them is for? As their names suggest, they perform the HTTP download, open (or create) the file, read bytes from the response body, and write them to the file. I don't know whether everyone guessed correctly; interested friends can copy this code and test it themselves.
If I'm not mistaken, this code may have been quoted from somewhere else; the original author is welcome to contact me, and I will add a note crediting them later.
The fantastic Go language (web download)