Original address: http://blog.sina.com.cn/s/blog_4b9eab320100slyw.html
curl can be viewed as a command-line browser.
1. Make a request with gzip compression enabled
curl -i http://www.sina.com.cn/ -H "Accept-Encoding: gzip,deflate"
2. Monitor a web page's response time
curl -o /dev/null -s -w "time_connect: %{time_connect}\ntime_starttransfer: %{time_starttransfer}\ntime_total: %{time_total}\n" "http://www.kklinux.com"
3. Monitor site availability
curl -o /dev/null -s -w %{http_code} "http://www.kklinux.com"
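To poll this check from a script, here is a minimal sketch (my illustration: the 60-second interval and the log file name are assumptions, not from the original):
while true; do
    code=$(curl -o /dev/null -s -w "%{http_code}" "http://www.kklinux.com")
    # log anything that is not a plain 200 OK
    [ "$code" != "200" ] && echo "$(date): got HTTP $code" >> availability.log
    sleep 60
done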
4. Make the request with HTTP/1.0 (the default is HTTP/1.1)
curl -0 ...
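To confirm the request really goes out as HTTP/1.0, you can watch curl's verbose trace (a small sketch of mine, reusing the sina URL from above):
curl -0 -v -o /dev/null http://www.sina.com.cn/ 2>&1 | grep '^> GET'
The request line printed should read "GET / HTTP/1.0".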
1) Fetch a page
$ curl http://www.linuxidc.com
2) Save a page
$ curl http://www.linuxidc.com > page.html
$ curl -o page.html http://www.linuxidc.com
3) Use a proxy server and its port: -x
$ curl -x 123.45.67.89:1080 -o page.html http://www.linuxidc.com
4) Use cookies to record session information
$ curl -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http://www.linuxidc.com
The -D option dumps the HTTP response headers, including any cookies, into a file: while the page is saved to page.html, the cookie information is stored in cookie0001.txt.
5) So how do you reuse the cookies left behind on the next visit?
Use the -b option to attach the previously saved cookie information to the HTTP request:
$ curl -x 123.45.67.89:1080 -o page1.html -D cookie0002.txt -b cookie0001.txt http://www.linuxidc.com
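As an aside, curl also has a dedicated cookie engine: -c (--cookie-jar) saves just the cookies, in Netscape format, rather than dumping all the response headers, and -b reads that file back. A minimal sketch of the same session with it (same hypothetical proxy as above):
$ curl -x 123.45.67.89:1080 -o page.html -c cookies.txt http://www.linuxidc.com
$ curl -x 123.45.67.89:1080 -o page1.html -b cookies.txt http://www.linuxidc.com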
6) Browser information
You can freely specify the browser information claimed for the visit with -A:
curl -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -x 123.45.67.89:1080 -o page.html -D cookie0001.txt http://www.yahoo.com
With this, the server handling the request will think you are IE 6.0 running on Windows 2000, hehe, while you may actually be on a Mac.
Likewise, "Mozilla/4.73 [en] (X11; U; Linux 2.2 i686)" tells the other side you are running Netscape 4.73 on a Linux PC, huh?
7) Referer
Another common server-side restriction is to check the HTTP Referer. For example, if you first visit the homepage and then visit the download page linked from it, the Referer of the second request is the address of the homepage you just visited successfully. If the server finds that a request for the download page carries a Referer that is not the homepage address, it concludes the request is hotlinked ~
Annoying ~ but I just want to hotlink ~
Fortunately, curl gives us the -e option to set the Referer:
curl -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" -x 123.45.67.89:1080 -e "mail.yahoo.com" -o page.html -D cookie0001.txt http://www.yahoo.com
This way the other side's server is fooled into believing you clicked over from a link on mail.yahoo.com, oh hehe.
8) Downloading files with curl
We just saved a page to a file with -o, and downloading a file works the same way.
For example: curl -o 1.jpg http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG
Here is a new option: -O (uppercase O), used like this: curl -O http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG
This saves the file locally under the name it has on the server.
There is an even better use.
Suppose that besides screen1.JPG there are also screen2.JPG, ..., screen10.JPG to download. Surely we are not expected to write a script for that?
No need. In curl, you write it like this:
curl -O http://cgi2.tky.3web.ne.jp/~zzh/screen[1-10].JPG
One line, and all ten are fetched. Not bad, eh? ~~~
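One caution of my own: the brackets and braces in these URL patterns can also be special characters to your shell, so it is safer to quote the URL:
curl -O "http://cgi2.tky.3web.ne.jp/~zzh/screen[1-10].JPG"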
9) Let's continue with downloads. The patterns can be combined:
curl -O http://cgi2.tky.3web.ne.jp/~{zzh,nick}/[001-201].jpg
The resulting downloads are:
~zzh/001.jpg
~zzh/002.jpg
...
~zzh/201.jpg
~nick/001.jpg
~nick/002.jpg
...
~nick/201.jpg
Convenient enough, eh? Ha ha ha.
Hey, don't celebrate too early.
Because the files under ~zzh and ~nick all share the names 001, 002, ..., 201, the files downloaded later overwrite the earlier ones with the same names ~
No matter, we have a stronger trick.
curl -o "#2_#1.jpg" http://cgi2.tky.3web.ne.jp/~{zzh,nick}/[001-201].jpg
This is a download with custom file names.
#1 is a variable referring to the {zzh,nick} part: its first value is zzh, its second is nick.
#2 refers to the second variable part, [001-201], whose value runs from 001 to 201.
With the downloaded file names customized like this, the result becomes:
Original: ~zzh/001.jpg ---> after download: 001_zzh.jpg
Original: ~nick/001.jpg ---> after download: 001_nick.jpg
With that, duplicate file names are no longer a worry, hehe.
10) More on downloading: resuming
On the Windows platform, tools like FlashGet can download a file in parallel chunks and resume broken transfers. curl is no slouch in either department, hehe.
Say we are downloading screen1.JPG and the connection suddenly drops; we can resume where we left off:
curl -C - -O http://cgi2.tky.3web.ne.jp/~zzh/screen1.JPG
(Just don't try to fool me with a file FlashGet downloaded halfway ~ partial files from other download software may not be usable.)
Chunked downloading uses the -r option.
For example, suppose we have a http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3 to download (Miss Zhao's telephone recitation :D).
We can use this command:
curl -r 0-10240 -o "zhao.part1" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3 &\
curl -r 10241-20480 -o "zhao.part2" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3 &\
curl -r 20481-40960 -o "zhao.part3" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3 &\
curl -r 40961- -o "zhao.part4" http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3
This downloads the file in chunks, but you have to merge the pieces yourself afterwards.
On Unix or a Mac, merge with: cat zhao.part* > zhao.mp3
On Windows, copy /b does the job, hehe.
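If you would rather have the ranges computed for you, here is a minimal shell sketch (my illustration: the chunk size is arbitrary, and the server must honor Range requests):
url=http://cgi2.tky.3web.ne.jp/~zzh/zhao1.mp3
chunk=10240
# first three fixed-size chunks, fetched in parallel
for i in 0 1 2; do
    curl -r $((i*chunk))-$(((i+1)*chunk-1)) -o zhao.part$((i+1)) "$url" &
done
# last chunk is open-ended, from byte 3*chunk to the end of the file
curl -r $((3*chunk))- -o zhao.part4 "$url" &
wait
cat zhao.part* > zhao.mp3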
The above downloads use HTTP; FTP works too.
Usage:
curl -u name:passwd ftp://ip:port/path/file
or the form everyone is familiar with:
curl ftp://name:passwd@ip:port/path/file
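A small aside (standard curl FTP behavior, not in the original text): if the URL ends with a slash, curl lists the directory instead of downloading a file:
curl -u name:passwd ftp://ip:port/path/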
11) The option for uploading is -T
For example, to send a file to an FTP server: curl -T localfile -u name:passwd ftp://upload_site:port/path/
Uploading a file to an HTTP server also works,
e.g. curl -T localfile http://cgi2.tky.3web.ne.jp/~zzh/abc.cgi
Note that in this case the protocol operation used is the HTTP PUT method.
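Another detail worth knowing (standard -T behavior; the file names here are placeholders): an FTP URL ending in / keeps the local file name, while spelling out a name in the URL uploads under that name:
curl -T localfile -u name:passwd ftp://upload_site:port/path/newname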
Speaking of PUT, the other methods have not been covered yet: GET and POST must not be forgotten.
An HTTP form is most commonly submitted via POST or GET.
GET mode needs no option at all; just write the variables into the URL (quote the URL so the shell does not swallow the &):
curl "http://www.yahoo.com/login.cgi?user=nickwolfe&password=12345"
POST mode uses the -d option.
For example: curl -d "user=nickwolfe&password=12345" http://www.yahoo.com/login.cgi
This is equivalent to issuing a login request to the site ~
Whether to use GET or POST depends on how the program on the server side is written.
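Incidentally, curl also has -G (--get), which appends the -d data to the URL as a query string, so the same -d pairs can drive either method (my addition; the credentials are the same made-up ones as above):
curl -G -d user=nickwolfe -d password=12345 http://www.yahoo.com/login.cgi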
Note that file uploads in POST mode are a different case. Given an HTTP form such as:
<form method="POST" enctype="multipart/form-data" action="http://cgi2.tky.3web.ne.jp/~zzh/up_file.cgi">
<input type=file name=upload>
<input type=submit name=nick value="go">
</form>
the curl syntax to emulate it is:
curl -F upload=@localfile -F nick=go http://cgi2.tky.3web.ne.jp/~zzh/up_file.cgi
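You can also attach a MIME type to the uploaded part with the standard -F ";type=" suffix (the image type here is just an assumed example):
curl -F "upload=@localfile;type=image/jpeg" -F nick=go http://cgi2.tky.3web.ne.jp/~zzh/up_file.cgi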
Having rambled on this long, curl still has many more tricks up its sleeve.
For example, to use a local certificate over HTTPS:
curl -E localcert.pem https://remote_server
Or to look up a word in a dictionary via the DICT protocol ~
curl dict://dict.org/d:computer
Today, needing to check the registration records of all the domain names on the Hedgehog hosts, and finding wget clumsy for the job, I discovered curl, the command-line transfer tool. It turns out to be great at issuing POST requests, which makes it especially convenient for submitting information and varying parameters across repeated tests. It proved very useful for verifying whether each of hundreds of thousands of domain names has a record at miibeian.gov.cn. This article was so good that I am reposting it here.
My goal:
curl -d "cxfs=1&ym=xieyy.cn" http://www.miibeian.gov.cn/baxx_cx_servlet
Filter the returned information, extract the record number, and set a flag bit; then store the domain name, record number, and flag in the database.
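A minimal sketch of such a batch check (my illustration: domains.txt and results.txt are hypothetical file names, and the grep pattern assumes the servlet's response mentions 备案 when a record exists):
while read ym; do
    # POST one domain at a time to the record-query servlet
    if curl -s -d "cxfs=1&ym=$ym" http://www.miibeian.gov.cn/baxx_cx_servlet | grep -q "备案"; then
        flag=1
    else
        flag=0
    fi
    echo "$ym $flag" >> results.txt
done < domains.txt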
Using the curl command to POST data that contains spaces
Today I ran into a situation: I wanted to use curl to log in to a web page, and happened to find that the POST data contained a space. For example, the user name is "abcdef" and the password is "abc def", which has a space in it. Submitting it the way I used to:
curl -D cookie -d "username=abcdef&password=abc def" http://login.xxx.com/
prompted a login failure. So I consulted the curl manual (man curl) and found:
-d/--data (HTTP) Sends the specified data in a POST request to the HTTP server, in a way that can emulate as if a user has filled in an HTML form and pressed the submit button. Note that the data is sent exactly as specified with no extra processing (with all newlines cut off). The data is expected to be "url-encoded". This will cause curl to pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F/--form. If this option is used more than once on the same command line, the data pieces specified will be merged together with a separating &-letter. Thus, using '-d name=daniel -d skill=lousy' would generate a post chunk that looks like 'name=daniel&skill=lousy'.
So I changed it to:
curl -D cookie -d "username=abcdef" -d "password=abc def" http://login.xxx.com/
and the login succeeded.
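On newer curl versions (7.18.0 and later) a more robust fix, which I will add here even though the original article predates it, is --data-urlencode, which percent-encodes the value for you:
curl -D cookie -d "username=abcdef" --data-urlencode "password=abc def" http://login.xxx.com/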