C#: Reading a "chunked" response with HttpWebResponse

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/16998/

Date: 2020-08-01 08:56:57 | Source: igfitidea

Reading "chunked" response with HttpWebResponse

Asked by Craig

I'm having trouble reading a "chunked" response when using a StreamReader to read the stream returned by GetResponseStream() of an HttpWebResponse:


// response is an HttpWebResponse
StreamReader reader = new StreamReader(response.GetResponseStream());
string output = reader.ReadToEnd(); // throws exception...

When the reader.ReadToEnd() method is called, I get the following System.IO.IOException: "Unable to read data from the transport connection: The connection was closed."


The above code works just fine when server returns a "non-chunked" response.


The only way I've been able to get it to work is to use HTTP/1.0 for the initial request (instead of HTTP/1.1, the default) but this seems like a lame work-around.

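For reference, the HTTP/1.0 workaround can be set via the request's ProtocolVersion property. A minimal sketch (the URL is a placeholder); HTTP/1.0 predates chunked encoding, so the server must send a Content-Length or close the connection instead:

```csharp
using System;
using System.Net;

class Http10Workaround
{
    // Configure a request to use HTTP/1.0 so the server will not
    // reply with a chunked body. The URL here is just a placeholder.
    static HttpWebRequest CreateHttp10Request(string url)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.ProtocolVersion = HttpVersion.Version10;
        return request;
    }

    static void Main()
    {
        HttpWebRequest req = CreateHttp10Request("http://example.com/");
        Console.WriteLine(req.ProtocolVersion); // prints "1.0"
        // Calling req.GetResponse() would then receive a non-chunked body.
    }
}
```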

Any ideas?




@Chuck


Your solution works pretty well. It still throws the same IOException on the last Read(), but after inspecting the contents of the StringBuilder it looks like all the data has been received. So perhaps I just need to wrap the Read() in a try-catch and swallow the "error".

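A minimal sketch of that try-catch wrapping, using an in-memory stream as a stand-in for the response stream (the helper name is invented for illustration):

```csharp
using System;
using System.IO;
using System.Text;

class SwallowLastReadError
{
    // Read the whole stream, treating an IOException on the final
    // Read() as end-of-stream rather than a fatal error.
    static string ReadAllTolerantly(Stream stream)
    {
        StringBuilder sb = new StringBuilder();
        byte[] buf = new byte[8192];
        try
        {
            int count;
            while ((count = stream.Read(buf, 0, buf.Length)) > 0)
            {
                sb.Append(Encoding.ASCII.GetString(buf, 0, count));
            }
        }
        catch (IOException)
        {
            // Connection closed early; keep whatever was accumulated.
        }
        return sb.ToString();
    }

    static void Main()
    {
        // MemoryStream stands in for response.GetResponseStream().
        using (var ms = new MemoryStream(Encoding.ASCII.GetBytes("hello")))
        {
            Console.WriteLine(ReadAllTolerantly(ms)); // prints "hello"
        }
    }
}
```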

Accepted answer by Chuck

Haven't tried this with a "chunked" response, but would something like this work?


StringBuilder sb = new StringBuilder();
byte[] buf = new byte[8192];
Stream resStream = response.GetResponseStream();
string tmpString = null;
int count = 0;
do
{
    count = resStream.Read(buf, 0, buf.Length);
    if (count != 0)
    {
        tmpString = Encoding.ASCII.GetString(buf, 0, count);
        sb.Append(tmpString);
    }
} while (count > 0);

Answered by Chuck

Craig, without seeing the stream you're reading it's a little hard to debug, but maybe you could change the setting of the count variable to this:


count = resStream.Read(buf, 0, buf.Length-1);

It's a bit of a hack, but if the last read is killing you and it's not returning any data, then theoretically this will avoid the problem. I still wonder why the stream is doing that.


Answered by Liam Corner

I've had the same problem (which is how I ended up here :-). I eventually tracked it down to the fact that the chunked stream wasn't valid: the final zero-length chunk was missing. I came up with the following code, which handles both valid and invalid chunked streams.


using (StreamReader sr = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
    StringBuilder sb = new StringBuilder();

    try
    {
        while (!sr.EndOfStream)
        {
            sb.Append((char)sr.Read());
        }
    }
    catch (System.IO.IOException)
    { }

    string content = sb.ToString();
}

Answered by user2186152

I am working on a similar problem. The .NET HttpWebRequest and HttpWebResponse handle cookies and redirects automatically, but they do not handle chunked content in the response body automatically.


This is perhaps because chunked content may contain more than just the data itself (i.e. chunk sizes and trailing headers).


Simply reading the stream and ignoring the EOF exception will not work, as the stream contains more than the desired content. The stream will contain chunks, and each chunk begins by declaring its size. If the stream is simply read from beginning to end, the final data will contain the chunk metadata (and in the case of gzipped content, it will fail the CRC check when decompressing).


To solve the problem, it is necessary to manually parse the stream, removing the chunk size from each chunk (as well as the CR LF delimiters), detecting the final chunk and keeping only the chunk data. There is likely a library out there somewhere that does this; I have not found it yet.

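A minimal sketch of such a parser, assuming a well-formed chunked body; it strips chunk-size lines (including any chunk extensions) and the CRLF after each chunk, stops at the final zero-length chunk, and ignores trailing headers. This is an illustration, not a production decoder:

```csharp
using System;
using System.IO;
using System.Text;

class ChunkedDecoder
{
    // Decode an HTTP/1.1 chunked transfer body, keeping only chunk data.
    static byte[] Decode(Stream stream)
    {
        var output = new MemoryStream();
        while (true)
        {
            string sizeLine = ReadLine(stream);
            // Strip any chunk extension ("1a;name=value" -> "1a").
            int semi = sizeLine.IndexOf(';');
            if (semi >= 0) sizeLine = sizeLine.Substring(0, semi);
            int size = Convert.ToInt32(sizeLine.Trim(), 16);
            if (size == 0) break; // final chunk; trailers ignored

            byte[] data = new byte[size];
            int read = 0;
            while (read < size)
            {
                int n = stream.Read(data, read, size - read);
                if (n <= 0) throw new IOException("Unexpected end of stream");
                read += n;
            }
            output.Write(data, 0, size);
            ReadLine(stream); // consume the CRLF after the chunk data
        }
        return output.ToArray();
    }

    // Read up to and including CRLF; the CRLF is not returned.
    static string ReadLine(Stream stream)
    {
        var sb = new StringBuilder();
        int b;
        while ((b = stream.ReadByte()) != -1)
        {
            if (b == '\n') break;
            if (b != '\r') sb.Append((char)b);
        }
        return sb.ToString();
    }

    static void Main()
    {
        string raw = "5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n";
        using (var ms = new MemoryStream(Encoding.ASCII.GetBytes(raw)))
        {
            Console.WriteLine(Encoding.ASCII.GetString(Decode(ms))); // prints "hello world"
        }
    }
}
```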

Useful resources:


http://en.wikipedia.org/wiki/Chunked_transfer_encoding
http://tools.ietf.org/html/rfc2616#section-3.6.1


Answered by Steven Craft

After trying a lot of snippets from StackOverflow and Google, I ultimately found this to work best (assuming you know the data is a UTF-8 string; if not, you can keep the byte array and process it appropriately):


var responseStream = response.GetResponseStream();
var reader = new StreamReader(responseStream, Encoding.UTF8);
byte[] data = Encoding.UTF8.GetBytes(reader.ReadToEnd());
// Decode with the same encoding used above; Encoding.Default could
// corrupt non-ASCII characters.
return Encoding.UTF8.GetString(data);

I found other variations work most of the time, but occasionally truncate the data. I got this snippet from:


https://social.msdn.microsoft.com/Forums/en-US/4f28d99d-9794-434b-8b78-7f9245c099c4/problems-with-httpwebrequest-and-transferencoding-chunked?forum=ncl
