
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute the original authors (not me). Original question: http://stackoverflow.com/questions/7618155/

Date: 2020-08-05 06:27:25  Source: igfitidea

How to duplicate a request using wget (or curl) with raw headers?

Tags: linux, debugging, unix, wget, web

Asked by cwd

I was debugging some HTTP requests and found that I can grab request headers in this format:

GET /download?123456:75b3c682a7c4db4cea19641b33bec446/document.docx HTTP/1.1
Host: www.site.com
User-Agent: Mozilla/5.0 Gecko/2010 Firefox/5
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Referer: http://www.site.com/dc/517870b8cc7
Cookie: lang=us; reg=1787081http%3A%2F%2Fwww.site.com%2Fdc%2F517870b8cc7

Is it possible, or is there an easy way, to reconstruct that request using wget or curl (or another CLI tool)?

From reading the wget manual page I know I can set several of these things individually, but is there an easier way to send a request with all of these variables from the command line?

Accepted answer by ajreal

Yes, you just need to combine all of the headers using --header:

wget --header="User-Agent: Mozilla/5.0 Gecko/2010 Firefox/5" \
--header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \
--header="Accept-Language: en-us,en;q=0.5" \
--header="Accept-Encoding: gzip, deflate" \
--header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" \
--header="Cookie: lang=us; reg=1787081http%3A%2F%2Fwww.site.com%2Fdc%2F517870b8cc7" \
--referer="http://www.site.com/dc/517870b8cc7" \
"http://www.site.com/download?123456:75b3c682a7c4db4cea19641b33bec446/document.docx"
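One caveat when replaying the headers verbatim: if you keep "Accept-Encoding: gzip, deflate", the server may reply with a gzip-compressed body, and wget saves those raw bytes without decoding them. The recovery step is a plain gunzip; here is a sketch demonstrating it locally with a stand-in file (no network needed):

```shell
# Stand-in for a download that arrived gzip-compressed: wget would have
# written these compressed bytes to disk as-is.
printf 'hello document' | gzip > document.docx.gz
# Decompress to recover the actual document.
gzip -d document.docx.gz
# document.docx now holds the original content.
cat document.docx
```

Alternatively, simply drop the Accept-Encoding header from the replayed request and the server will send the body uncompressed.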

If you are trying to do an unauthorized download, it might fail; it depends on how the hosting URL has been programmed.

Answered by Wenbing Li

Here is the curl version:

curl "http://www.example.com/download?123456:75b3c682a7c4db4cea19641b33bec446/document.docx" \
-H "User-Agent: Mozilla/5.0 Gecko/2010 Firefox/5" \
-H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \
-H "Accept-Language: en-us,en;q=0.5" \
-H "Accept-Encoding: gzip, deflate" \
-H "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" \
-H "Cookie: lang=us; reg=1787081http%3A%2F%2Fwww.site.com%2Fdc%2F517870b8cc7" \
-H "Referer: http://www.example.com/dc/517870b8cc7"
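A related detail: since the copied headers include "Accept-Encoding: gzip, deflate", the server may answer with a compressed body. curl can handle this itself with --compressed, which sends a suitable Accept-Encoding header and transparently decodes the response. A quick local check that your curl build supports the flag (the fetch line is illustrative only):

```shell
# --compressed makes curl advertise and decode gzip/deflate automatically.
# Verify the flag is accepted by this curl build (no network involved):
curl --compressed --version | head -n 1
# An actual fetch would then look like (illustrative URL from above):
#   curl --compressed -o document.docx \
#     "http://www.example.com/download?123456:75b3c682a7c4db4cea19641b33bec446/document.docx"
```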

In Chrome Developer Tools (Network tab), you can right-click a request and use "Copy as cURL" to capture it as a ready-made curl command.
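If your tool of choice offers no "Copy as cURL" equivalent, you can rebuild the command from a saved raw request yourself. A minimal sketch (the raw_to_curl helper name and the request-file layout are assumptions, not a standard tool): it takes the path from the request line, the server from the Host header, and re-emits every other header as a -H flag:

```shell
# Sketch: convert a saved raw HTTP request (request line followed by one
# header per line) into an equivalent curl command printed on stdout.
raw_to_curl() {
  local file="$1"
  local path host
  path=$(awk 'NR==1 {print $2}' "$file")     # "GET /path HTTP/1.1" -> /path
  host=$(sed -n 's/^[Hh]ost: //p' "$file")   # server taken from the Host header
  printf 'curl "http://%s%s"' "$host" "$path"
  # Every remaining header line becomes a -H flag, preserved verbatim.
  tail -n +2 "$file" | grep -vi '^host:' | while IFS= read -r line; do
    [ -n "$line" ] && printf ' \\\n  -H "%s"' "$line"
  done
  printf '\n'
}
```

Feeding it the captured request from the question would print a curl command much like the one in the answer above (the sketch assumes plain http; use https:// instead if the capture came from a TLS connection).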