In actual projects I have used the Mass Download program several times and found it a real weapon for downloading ABAP code. The current version is 1.4.4, and it has not been updated for years. While using it I found that the HTML-format code it exports has problems, including incorrect navigation links and incorrect code colors, so I made a correction to its code: http://u.115.com/file/aqz7qxn0
How to use:
1. In SE38 or SE80, create a program named Zdtp_massdownload.
2. Open the downloaded source TX
// signature
compressed.writeBytes(data, 8);                  // read the rest, compressed
compressed.uncompress();
decompressed.writeMultiByte("FWS", "us-ascii");  // mark as uncompressed
decompressed.writeBytes(header);                 // write the header back
decompressed.writeBytes(compressed);             // write the now-uncompressed content
return decompressed;
}
}
Compiled tools can be found in the CSDN Download Center; search for doc88_cracker_jimbowhy to get them. The tool is used as follows; for the resulting SWF file contents, see:
Note: My hack work can
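The same CWS-to-FWS conversion can be sketched in Python with zlib. This is a minimal illustration of the technique, not the tool's actual code; it relies on the SWF header layout (3-byte signature, 1-byte version, 4-byte file length stored uncompressed, everything after byte 8 being a zlib stream):

```python
import zlib

def decompress_swf(data: bytes) -> bytes:
    """Turn a zlib-compressed SWF ('CWS') into an uncompressed one ('FWS').

    The first 8 bytes (signature, version, file length) are stored
    uncompressed; the rest of the file is a single zlib stream.
    """
    if data[:3] != b"CWS":
        return data                      # already uncompressed, or not a SWF
    body = zlib.decompress(data[8:])     # inflate the rest
    return b"FWS" + data[3:8] + body     # write the header back, then the body
```

This mirrors the ActionScript above: `writeMultiByte("FWS")` marks the file as uncompressed, then the untouched header bytes and the inflated body are written back out.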
The PDF file on the server is intact: verified by downloading it directly over FTP.
Some of the code for the downloader is as follows:
$file_size = filesize($filedir);
header("Content-Type: application/octet-stream; charset=iso-8859-1");
header("Cache-Control: private");
header("Accept-Ranges: bytes");
header("Accept-Length: " . $this->fjsize);
header("Content-Disposition: attachment; filename=" . $this->delfilename);
$fp = fopen($filedir, "rb");
$buffer_size
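After sending those headers, a downloader like this typically reads and emits the file in fixed-size chunks rather than loading it into memory at once. As a language-neutral illustration of that loop (a sketch in Python, with a made-up default buffer size, not the post's actual code):

```python
def send_file(path, out, buffer_size=1024):
    """Copy the file at `path` to the writable stream `out` in
    buffer_size-byte chunks, so memory use stays constant no matter
    how large the file is. Returns the number of bytes sent."""
    sent = 0
    with open(path, "rb") as fp:
        while True:
            chunk = fp.read(buffer_size)
            if not chunk:          # EOF
                break
            out.write(chunk)
            sent += len(chunk)
    return sent
```

The PHP version does the same thing with `fread($fp, $buffer_size)` and `echo` inside a `while (!feof($fp))` loop.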
--limit-rate=amount   Limit the download speed
-Q, --quota=number    Set a capacity quota for downloads

Examples:
a) Download a single file:
   wget http://jsoup.org/packages/jsoup-1.8.1.jar
b) Download a file and specify the output filename:
   wget -O example.html http://www.example.com/
c) Download the URLs listed in a file:
   wget -i imglist.txt
d) Limit the download speed:
   wget --limit-rate 3k http://jsoup.org/packages/jsoup-1.8.1.jar
e) Resume a broken download:
   wget -c http://jsoup.org/packages/jsoup-1.8.1.jar
f) Check whether a resource exists:
   wge
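Example (f) checks whether a resource exists without downloading it. The same idea in Python (my own sketch, not part of wget) is to issue a HEAD request and look only at the status code:

```python
import http.client
from urllib.parse import urlparse

def resource_exists(url: str, timeout: float = 5.0) -> bool:
    """Send a HEAD request and report whether the server answers 2xx/3xx.

    HEAD returns the same headers as GET but no body, so nothing is
    actually downloaded.
    """
    parts = urlparse(url)
    conn = http.client.HTTPConnection(parts.netloc, timeout=timeout)
    try:
        conn.request("HEAD", parts.path or "/")
        status = conn.getresponse().status
        return 200 <= status < 400
    finally:
        conn.close()
```

A 404 (or any other 4xx/5xx) status is reported as "does not exist"; redirects (3xx) are counted as existing since the resource is reachable.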
the use of Sequre ok3 works fine and is fairly pleasant. For the specific steps and which parameters each request needs, go read the article by the great Zhao Si; it is written very clearly, and although it is written in Python, I believe you can follow it.
CRC32 algorithm: I thought this would be very hard to write myself, but it turns out Java's standard library ships with it, which is great:
"hello world";CRC32 crc32 = new CRC32();crc32.update(temp.getBytes());Log.d"crc32 code : " + crc32.getValue()));It's so simple.4. Video playback Soft
Sometimes I want to download a few small games for fun, but I have to use a dedicated downloader like the one below, which is a bit annoying, and the speed never picks up.
Open Task Manager, click "View" and select Show "PID (process identifier)"
In Task Manager, you can see two processes associated with this downloader.
Let's first open the MiniQQDL.exe process in WinHex; its PID is 3340.
Find the pr
Today we'll do a downloader demo: downloading a specified file from a locally configured Apache server. This time we download the html.mp4 file under the server root directory. By convention, we first create a URL object and a request.
NSURL *url = [NSURL URLWithString:@"http://127.0.0.1/html.mp4"];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
There are two points to note here. First, the string of this URL must be all English; if the string appe
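The note that the URL string must be all English generalizes: URLs may only contain ASCII, so any non-ASCII characters (a Chinese filename, say) must be percent-encoded first. A sketch of the fix in Python (the filename below is a made-up example):

```python
from urllib.parse import quote

# A path containing non-ASCII characters cannot go into a URL directly;
# quote() percent-encodes each UTF-8 byte (the slash is kept by default).
path = "/视频.mp4"
encoded = quote(path)
print(encoded)  # /%E8%A7%86%E9%A2%91.mp4

url = "http://127.0.0.1" + encoded
```

On iOS the analogous API is NSString's `stringByAddingPercentEncodingWithAllowedCharacters:`, applied before handing the string to `URLWithString:`.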
several times, you can complete the job, for example by downloading only the images that failed last time.
The demo is similar to Google Maps and other map apps. In an actual project I downloaded these tiles from Google Maps, and built a map display module by imitating the way Google Maps renders them.
Figure 4: Demo main interface
Figure 5: Downloaded tiles
Figure 6: Tiles stitched into one large picture
Source: http://files.cnblogs.com/xiaozhi_5638/GoogleMapDownLoader.rar
Debugged and passing under VS2010 on Win7; I hope it helps.
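For reference, Google-style maps use the standard Web Mercator tile scheme, which maps a latitude/longitude and zoom level to integer tile coordinates. A sketch of that conversion (my own illustration, not code from the linked project):

```python
import math

def deg2tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple:
    """Convert WGS84 degrees to (x, y) tile indices at the given zoom.

    At zoom z there are 2**z tiles per axis; x grows eastward from
    longitude -180, y grows southward from latitude ~85.05 (the Web
    Mercator cutoff).
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

A tile downloader enumerates every (x, y) in the bounding box at each zoom level and fetches the corresponding 256x256 tile image; the viewer then stitches them back into one large picture, as in Figure 6.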
It is more meaningful to learn something carefully than to be self-intoxicated by the downloading process. With sound study methods, courage, and perseverance, I believe I can do what I set out to do: truly turn myself "from a compulsive downloader into a learner". Then you can be ambitious.
Opinion 1: Materials by themselves are useless to people; only if we study them and apply them in practice do they produce benefits. Let's ta
Set FSO = CreateObject("Scripting.FileSystemObject")
Set file = FSO.OpenTextFile(Arg(0) & ".htm", 2, True)
file.Write Bin2Str(ss)
file.Close
Set FSO = Nothing
Ado.Close
Set Ado = Nothing
Function Bin2Str(re)
For i = 1 To LenB(re)
bt = AscB(MidB(re, i, 1))
If bt < 16 Then Bin2Str = Bin2Str & "0"
Bin2Str = Bin2Str & Hex(bt)
Next
End Function
==============================================
Downloader: down.vbs
==============================================
The code is as follows:
On Error Resume Next
Set Arg = WScript.Arguments
, then the level will increase the number of words.
There is no free lunch in the world, and that saying is apt. While we were complacent about downloading a few GB, even dozens of GB, of free English documents, and kept searching for yet more material, we lost the most valuable thing: time, which is to say life. How many hours do we have in a lifetime? Every hour we waste is an hour of life lost. It would have been a waste of time for us to do a lot of meaningless things in our
components (you need to manually find the downloaded xx[1].htm):
The code is as follows:
window.moveTo 4000, 4000
window.resizeTo 0, 0 ' makes the HTA invisible
Set xml = document.createElement("xml") ' creating an XML element invokes IE's default behavior
xml.addBehavior("#default#download")
xml.async = 0
xml.load("http://192.168.0.10/xx.htm")
window.close
VII. Shortcomings
My HTA downloader is not perfect. The first issue is
Recently I have been working on a crawler, which sometimes crawls down a lot of interesting file links. Downloading them manually would be too much work, so I simply wrote a small download script:

import os, urllib2
os.chdir(r'D:')
url = 'http://image16-c.poco.cn/mypoco/myphoto/20140826/09/5295255820140826091556057_640.jpg'
print 'Downloading'
data = urllib2.urlopen(url).read()
print 'Saving'
f = open('girl.jpg', 'wb')
f.write(data)
print 'Finish'
f.close()

You can then consider adding multi-threaded download supp
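One way that multi-threaded support could look (a sketch in Python 3 syntax, where urllib2 has become urllib.request; the fetch function is a parameter so it can be swapped for anything that returns bytes):

```python
import os
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch(url):
    """Download one URL and return its raw bytes."""
    with urlopen(url) as resp:
        return resp.read()

def download_all(urls, out_dir, fetch=fetch, workers=4):
    """Download every URL concurrently, naming each file after the last
    path component of its URL. Returns the list of files written."""
    os.makedirs(out_dir, exist_ok=True)

    def save(url):
        name = os.path.basename(urlparse(url).path) or "index.html"
        path = os.path.join(out_dir, name)
        with open(path, "wb") as f:
            f.write(fetch(url))
        return path

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(save, urls))
```

Since downloads are I/O-bound, a small thread pool like this speeds up a batch of links considerably even under the GIL.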
I previously wrote a blog post on a Python implementation of a downloader for Baidu's new-song and hot-song charts, which crawled and downloaded Baidu's new and popular songs. However, it was single-threaded: under average network conditions, scanning the top 100 songs took about 40 seconds, and because it used a PyQt interface, operating the window during a download caused the UI to block. In the past two days I have had time to adjust
There are a lot of VBS downloaders, and here is my great invention: using CDO.Message to make a VBS downloader ("great" in the bragging sense). First the code; see here for details: http://hi.baidu.com/vbs_zone/blog/item/f254871382e6d0045aaf5358.html
LCX found, while writing his blog backup script, that CDO.Message can access web downloads, and said it was worth studying and might be usable as a downloader. So I studied it for a while and wrote a
So how do you write a downloader? As in the previous process: get the URL first, then download it with the requests module, then save the file. But here's the problem: what if the file is too large? For example, I used this before to download files from Baidu's network disk, and the effect was very good: one thread at 100 KB/s, and with 20 threads open you can reach 2 MB/s, which is very useful. But the problem I ran into was that the file was too large; if the data, now download and
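The multi-threaded trick for large files relies on HTTP Range requests: split the file into segments, download each "bytes=start-end" range in its own thread, and stitch the pieces back together in order. A sketch of the splitting and reassembly (the range-fetching function is left injectable; a real one would issue a requests.get with a Range header):

```python
from concurrent.futures import ThreadPoolExecutor

def split_ranges(total_size, parts):
    """Split [0, total_size) into `parts` contiguous (start, end) byte
    ranges, end inclusive, as used in 'Range: bytes=start-end' headers.
    The last range absorbs any remainder."""
    chunk = total_size // parts
    ranges = []
    for i in range(parts):
        start = i * chunk
        end = total_size - 1 if i == parts - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

def download_segmented(fetch_range, total_size, parts=4):
    """Fetch all ranges concurrently and concatenate them in order.
    `fetch_range(start, end)` must return the bytes of that range."""
    ranges = split_ranges(total_size, parts)
    with ThreadPoolExecutor(max_workers=parts) as pool:
        pieces = list(pool.map(lambda r: fetch_range(*r), ranges))
    return b"".join(pieces)
```

`pool.map` preserves input order, so the pieces concatenate correctly even though the threads finish in arbitrary order. The server must answer Range requests with 206 Partial Content for this to work; `total_size` comes from a prior HEAD request's Content-Length.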
The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion;
products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the
content of the page makes you feel confusing, please write us an email, we will handle the problem
within 5 days after receiving your email.
If you find any instances of plagiarism from the community, please send an email to:
info-contact@alibabacloud.com
and provide relevant evidence. A staff member will contact you within 5 working days.