In the Bugscan group, someone asked how to detect whether a large file exists on a target. With curl, the entire file gets downloaded to the node, which is useless for the scan and wastes scanning time.
My solution is to skip curl and work with the underlying socket directly: send the request, stop reading as soon as the HTTP response head arrives, extract the status code, and then drop the connection. I don't know whether this approach has any drawbacks; if it does, please point them out.
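The core of the idea, reading only the status line and ignoring everything after it, can be sketched as a small parser. This is my own illustration (the function name status_from_head is not part of the plugin API):

```python
def status_from_head(data):
    """Return the numeric status code from the start of a raw HTTP
    response, e.g. b'HTTP/1.1 200 OK\\r\\n...' -> 200."""
    # Only the status line (the first line of the head) is needed.
    status_line = data.split(b'\r\n', 1)[0].decode('latin-1')
    parts = status_line.split(' ')
    if len(parts) < 2 or not parts[1].isdigit():
        raise ValueError('not an HTTP status line: %r' % status_line)
    return int(parts[1])

# The scanner only ever needs this much of the response:
head = b'HTTP/1.1 200 OK\r\nContent-Length: 73400320\r\n\r\n'
print(status_from_head(head))  # 200
```

Checking the parsed code (200 vs 404) is slightly more robust than searching the status line for the string 'OK', since the reason phrase can vary between servers.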
The source code is pasted below:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# __author__ = 'Medici.yan'
"""The test file is an installation package offered for download on the official WPS site"""

import socket


def assign(service, arg):
    if service == "ip":
        return True, arg


def audit(arg):
    doget(arg, '/wdl1.cache.wps.cn/wps/download/w.p.s.4954.19.552.exe')


def doget(host, path):
    payload = ("GET %s HTTP/1.1\r\n"
               "Host: %s\r\n"
               "Connection: keep-alive\r\n"
               "Accept: text/html,application/xhtml+xml,application/xml;"
               "q=0.9,image/webp,*/*;q=0.8\r\n"
               "User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) "
               "AppleWebKit/537.36 (KHTML, like Gecko) "
               "Chrome/39.0.2171.95 Safari/537.36\r\n"
               "Accept-Encoding: gzip, deflate, sdch\r\n"
               "Accept-Language: zh-CN,zh;q=0.8,en;q=0.6\r\n\r\n") % (path, host)
    print payload
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        socket.setdefaulttimeout(20)  # timeout
        s.connect((host, 80))         # connect to the target host and port
        s.send(payload)
        data = s.recv(len(payload))   # read only the start of the response
        httphead = data.split('\r\n')
        if 'OK' in httphead[0]:       # status line, e.g. "HTTP/1.1 200 OK"
            print 'exist'
        else:
            print 'error or not exist'
    except Exception:
        pass
    finally:
        s.close()


if __name__ == '__main__':
    from dummy import *
    audit(assign('ip', '222.178.202.37')[1])
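For comparison, a minimal Python 3 sketch of the same check using the standard library's http.client and a HEAD request, which asks the server to send the response head only, so even a naive client never downloads the body. The function name and the port parameter are my own additions, not part of the original plugin:

```python
import http.client


def exists_via_head(host, path, port=80, timeout=20):
    """Return True if a HEAD request for path on host answers 200.

    HEAD responses carry headers but no body, so a multi-hundred-MB
    file costs only a few hundred bytes of traffic to probe.
    """
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request('HEAD', path)
        return conn.getresponse().status == 200
    finally:
        conn.close()
```

One caveat: some servers answer HEAD differently from GET (or reject it outright), which is a reason the raw-socket GET above can be the more faithful probe.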
Test result:
The script detects whether the large file exists at the specified location on the web server.