A friend recently asked me how to use wget, so after looking through my notes I am writing it up here.
For Linux users, wget is a tool used almost every day.
Below are a few useful tips that will let you use wget on CentOS more efficiently and flexibly.
wget Usage Tips
$ wget -r -np -nd http://example.com/packages/
This command downloads all the files in the packages directory on http://example.com. Here -np keeps wget from traversing up into the parent directory, and -nd tells it not to recreate the remote directory structure locally.
$ wget -r -np -nd --accept=iso http://example.com/centos-5/i386/
This is similar to the previous command, but the added --accept=iso option instructs wget to download only the files in the i386 directory with the .iso extension. You can also specify multiple extensions, separated by commas.
$ wget -i filename.txt
This command is often used for bulk downloads: put all the URLs you need to download into filename.txt, and wget will automatically fetch them all for you.
$ wget -c http://example.com/really-big-file.iso
The -c option resumes an interrupted download from where it left off.
$ wget -m -k (-H) http://www.example.com/
This command mirrors a website, with -k making wget convert the links for local browsing. If images on the site are hosted on a different site, add the -H option.
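Putting the mirroring tip together, here is a sketch that only assembles and prints the command line (www.example.com is a placeholder, not a real target):

```shell
# Sketch only: build the mirror command from the options covered above.
# -m  mirror the site (shorthand for -r -N -l inf -nr)
# -k  convert links so the local copy browses offline
# -H  also fetch assets hosted on other sites (use with care)
url="http://www.example.com/"        # placeholder URL
cmd="wget -m -k -H $url"
echo "$cmd"                          # prints: wget -m -k -H http://www.example.com/
```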
wget User Guide
wget is a free tool for downloading files from the web automatically. It supports the HTTP, HTTPS, and FTP protocols and can work through HTTP proxies. "Automatic" here means that wget can keep running in the background after the user logs out: you can log in, start a wget download task, log out, and wget will keep running until the task completes. This saves a lot of hassle compared with most browsers, which need the user to stay involved when downloading large amounts of data.
wget can follow the links on HTML pages and download them to create a local copy of the remote server, completely rebuilding the directory structure of the original site. This is often referred to as "recursive downloading". During a recursive download, wget honors the Robots Exclusion Standard (/robots.txt). While downloading, wget can also convert links to point at the local files for offline browsing.
wget is very stable, and it adapts well to narrow bandwidth and unstable networks. If a download fails because of a network problem, wget keeps retrying until the entire file has been fetched. If the server interrupts the transfer, wget reconnects and resumes from where it stopped. This is useful for downloading large files from servers that limit connection time.
Common wget usage

Usage: wget [OPTION]... [URL]...
Mirror a site with wget:
$ wget -r -p -np -k http://dsec.pku.edu.cn/~usr_name/
or
$ wget -m http://www.tldp.org/LDP/abs/html/

Resume a partially downloaded file on an unstable network, downloading during idle hours:
$ wget -t 0 -w 31 -c http://dsec.pku.edu.cn/BBC.avi -o down.log

Or read the list of files to download from filelist.txt:
$ wget -t 0 -w 31 -c -B ftp://dsec.pku.edu.cn/linuxsoft -i filelist.txt -o down.log &

The second command can also be used to download while the network is relatively idle. My own workflow: copy URLs that are inconvenient to download in Mozilla, paste them into filelist.txt, and run the second command before logging out for the evening.
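The filelist workflow above can be sketched as a small script; the URLs are placeholders standing in for whatever you paste out of the browser:

```shell
# Build the download list (placeholder URLs, not from the article).
cat > filelist.txt <<'EOF'
http://example.com/file1.iso
http://example.com/file2.iso
EOF

# Then hand the list to wget as in the article: retry forever (-t 0),
# wait 31s between attempts (-w 31), resume partial files (-c), log to
# down.log (-o), and detach with & so it survives logout:
#   wget -t 0 -w 31 -c -i filelist.txt -o down.log &
wc -l < filelist.txt    # two URLs queued
```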
Download through a proxy:
$ wget -Y on -p -k https://sourceforge.net/projects/wvware/

The proxy can be set in an environment variable or in the wgetrc file.

Set the proxy in an environment variable:
export http_proxy=http://211.90.168.94:8080/

Set the proxy in ~/.wgetrc:
http_proxy = http://proxy.yoyodyne.com:18023/
ftp_proxy = http://proxy.yoyodyne.com:18023/
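As a sketch, the wgetrc proxy settings above can be written out like this. The file is created as wgetrc.sample so a real ~/.wgetrc is not clobbered, the proxy address is the article's example (not a working proxy), and the added use_proxy = on line is the standard wgetrc switch that enables proxy use:

```shell
# Write a sample wgetrc (copy to ~/.wgetrc to activate).
cat > wgetrc.sample <<'EOF'
http_proxy = http://proxy.yoyodyne.com:18023/
ftp_proxy = http://proxy.yoyodyne.com:18023/
use_proxy = on
EOF

# The environment-variable route works per shell session instead:
#   export http_proxy=http://211.90.168.94:8080/
grep -c proxy wgetrc.sample    # all three lines set a proxy option
```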
wget options by category

Startup options
-V, --version display the version of wget and exit
-h, --help print this help
-b, --background go to background after startup
-e, --execute=COMMAND execute a .wgetrc-style command; for the wgetrc format see /etc/wgetrc or ~/.wgetrc
Logging and input file options
-o, --output-file=FILE log messages to FILE
-a, --append-output=FILE append log messages to FILE
-d, --debug print debug output
-q, --quiet quiet mode (no output)
-v, --verbose verbose mode (this is the default)
-nv, --non-verbose turn off verbose mode, without being completely quiet
-i, --input-file=FILE download the URLs found in FILE
-F, --force-html treat the input file as HTML
-B, --base=URL use URL as the prefix for relative links in the file given by -F -i
--sslcertfile=FILE optional client certificate
--sslcertkey=KEYFILE optional client certificate key file
--egd-file=FILE file name of the EGD socket
Download
--bind-address=ADDRESS bind to ADDRESS (host name or IP) on the local machine; useful when the machine has several IPs or names
-t, --tries=NUMBER set the maximum number of retries (0 means unlimited)
-O, --output-document=FILE write documents to FILE
-nc, --no-clobber don't overwrite existing files and don't create numbered .# copies
-c, --continue resume getting a partially downloaded file
--progress=TYPE select the progress indicator style
-N, --timestamping don't re-download a file unless it is newer than the local copy
-S, --server-response print the server response
--spider don't download anything
-T, --timeout=SECONDS set the read timeout to SECONDS
-w, --wait=SECONDS wait SECONDS between two attempts
--waitretry=SECONDS wait 1...SECONDS between retries of a retrieval
--random-wait wait 0...2*WAIT seconds between downloads
-Y, --proxy=on/off turn the proxy on or off
-Q, --quota=NUMBER set the download quota to NUMBER
--limit-rate=RATE limit the download rate to RATE
Directory options
-nd, --no-directories don't create directories
-x, --force-directories force creation of directories
-nH, --no-host-directories don't create host directories
-P, --directory-prefix=PREFIX save files to PREFIX/...
--cut-dirs=NUMBER ignore NUMBER remote directory components
HTTP Options
--http-user=USER set the HTTP user name to USER
--http-passwd=PASS set the HTTP password to PASS
-C, --cache=on/off allow/disallow server-side caching (normally allowed)
-E, --html-extension save all text/html documents with the .html extension
--ignore-length ignore the Content-Length header field
--header=STRING insert STRING among the headers
--proxy-user=USER set USER as the proxy user name
--proxy-passwd=PASS set PASS as the proxy password
--referer=URL include a 'Referer: URL' header in the HTTP request
-s, --save-headers save the HTTP headers to the file
-U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION
--no-http-keep-alive disable HTTP keep-alive (persistent connections)
--cookies=off don't use cookies
--load-cookies=FILE load cookies from FILE before the session starts
--save-cookies=FILE save cookies to FILE after the session ends
FTP Options
-nr, --dont-remove-listing don't remove the .listing files
-g, --glob=on/off turn file-name globbing on or off
--passive-ftp use passive transfer mode (the default)
--active-ftp use active transfer mode
--retr-symlinks when recursing, retrieve the files that symlinks point to (not directories)
Recursive download
-r, --recursive recursive download -- use with care!
-l, --level=NUMBER maximum recursion depth (inf or 0 for infinite)
--delete-after delete downloaded files locally after retrieval
-k, --convert-links convert non-relative links to relative ones
-K, --backup-converted back up file X as X.orig before converting it
-m, --mirror shortcut for -r -N -l inf -nr
-p, --page-requisites download all images and other elements needed to display the HTML pages
Recursive accept/reject options
-A, --accept=LIST comma-separated list of accepted extensions
-R, --reject=LIST comma-separated list of rejected extensions
-D, --domains=LIST comma-separated list of accepted domains
--exclude-domains=LIST comma-separated list of rejected domains
--follow-ftp follow FTP links from HTML documents
--follow-tags=LIST comma-separated list of HTML tags to follow
-G, --ignore-tags=LIST comma-separated list of HTML tags to ignore
-H, --span-hosts go to foreign hosts when recursing
-L, --relative follow relative links only
-I, --include-directories=LIST list of allowed directories
-X, --exclude-directories=LIST list of excluded directories
-np, --no-parent don't ascend to the parent directory
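To show how the recursive and accept/reject options combine in practice, here is a hedged sketch that only assembles and prints the command; the URL and extensions are placeholders, not from the article:

```shell
# Recursively fetch only .pdf and .ps files, staying below the starting
# directory (-np) and at most 2 levels deep (-l 2); -A takes a
# comma-separated extension list.
url="http://example.com/docs/"                 # placeholder URL
cmd="wget -r -np -l 2 -A pdf,ps $url"
echo "$cmd"
```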
Note: to stop a download, press Ctrl+C.
Summary: wget is a free tool for automatically downloading files over HTTP, HTTPS, and FTP, optionally through an HTTP proxy. Because it keeps running in the background after you log out, you can start a large download, leave, and let wget finish the job on its own.