Linux socket recv() hangs on a large message with MSG_WAITALL

Warning: this content is reproduced from StackOverflow under the CC BY-SA 4.0 license. You are free to use and share it, but you must keep the same license and attribute the original authors (not me). Original question: http://stackoverflow.com/questions/8470403/

Socket recv() hang on large message with MSG_WAITALL

Tags: c, linux, sockets, networking, tcp

Asked by Shane Carr

I have an application that reads large files from a server and hangs frequently on a particular machine. It has worked successfully under RHEL5.2 for a long time. We have recently upgraded to RHEL6.1 and it now hangs regularly.

I have created a test app that reproduces the problem. It hangs approx 98 times out of 100.

#include <errno.h>
#include <stdio.h>
#include <stdint.h>   /* for uint32_t */
#include <stdlib.h>
#include <string.h>
#include <sys/param.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/time.h>

int mFD = 0;

void open_socket()
{
  struct addrinfo hints, *res;
  memset(&hints, 0, sizeof(hints));
  hints.ai_socktype = SOCK_STREAM;
  hints.ai_family = AF_INET;

  if (getaddrinfo("localhost", "60000", &hints, &res) != 0)
  {
    fprintf(stderr, "Exit %d\n", __LINE__);
    exit(1);
  }

  mFD = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

  if (mFD == -1)
  {
    fprintf(stderr, "Exit %d\n", __LINE__);
    exit(1);
  }

  if (connect(mFD, res->ai_addr, res->ai_addrlen) < 0)
  {
    fprintf(stderr, "Exit %d\n", __LINE__);
    exit(1);
  }

  freeaddrinfo(res);
}

void read_message(int size, void* data)
{
  int bytesLeft = size;
  int numRd = 0;

  while (bytesLeft != 0)
  {
    fprintf(stderr, "reading %d bytes\n", bytesLeft);

    /* Replacing MSG_WAITALL with 0 works fine */
    int num = recv(mFD, data, bytesLeft, MSG_WAITALL);

    if (num == 0)
    {
      break;
    }
    else if (num < 0 && errno != EINTR)
    {
      fprintf(stderr, "Exit %d\n", __LINE__);
      exit(1);
    }
    else if (num > 0)
    {
      numRd += num;
      data += num;
      bytesLeft -= num;
      fprintf(stderr, "read %d bytes - remaining = %d\n", num, bytesLeft);
    }
  }

  fprintf(stderr, "read total of %d bytes\n", numRd);
}

int main(int argc, char **argv)
{
  open_socket();

  uint32_t raw_len = atoi(argv[1]);
  char raw[raw_len];

  read_message(raw_len, raw);

  return 0;
}

Some notes from my testing:

  • If "localhost" maps to the loopback address 127.0.0.1, the app hangs on the call to recv() and NEVER returns.
  • If "localhost" maps to the ip of the machine, thus routing the packets via the ethernet interface, the app completes successfully.
  • When I experience a hang, the server sends a "TCP Window Full" message, and the client responds with a "TCP ZeroWindow" message (see image and attached tcpdump capture). From this point, it hangs forever with the server sending keep-alives and the client sending ZeroWindow messages. The client never seems to expand its window, allowing the transfer to complete.
  • During the hang, if I examine the output of "netstat -a", there is data in the server's send queue but the client's receive queue is empty.
  • If I remove the MSG_WAITALL flag from the recv() call, the app completes successfully.
  • The hanging issue only arises when using the loopback interface on one particular machine. I suspect this may all be related to timing dependencies.
  • As I drop the size of the 'file', the likelihood of the hang occurring is reduced.

The source for the test app can be found here:

Socket test source

The tcpdump capture from the loopback interface can be found here:

tcpdump capture

I reproduce the issue by issuing the following commands:

>  gcc socket_test.c -o socket_test
>  perl -e 'for (1..6000000){ print "a" }' | nc -l 60000
>  ./socket_test 6000000

This sends 6,000,000 bytes to the test app, which tries to read all of the data with a single call to recv().

I would love to hear any suggestions on what I might be doing wrong or any further ways to debug the issue.

Accepted answer by Some programmer dude

MSG_WAITALL should block until all data has been received. From the manual page on recv:

This flag requests that the operation block until the full request is satisfied.

However, the buffers in the network stack probably are not large enough to contain everything, which is the reason for the error messages on the server. The client network stack simply can't hold that much data.

The solution is either to increase the buffer sizes (the SO_RCVBUF option to setsockopt), split the message into smaller pieces, or receive smaller chunks and put them into your own buffer. The last is what I would recommend.

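For the first option, a minimal sketch of enlarging the client's receive buffer might look like the following (the helper name and the requested size are purely illustrative, not part of the original answer). Note that Linux typically doubles the requested value for bookkeeping and caps it at net.core.rmem_max, and for TCP the option is best set before connect() so that window scaling is negotiated accordingly.

#include <stdio.h>
#include <sys/socket.h>

/* Illustrative helper: request a larger receive buffer on 'fd' and report
   what the kernel actually granted. */
static int grow_recv_buffer(int fd, int requested)
{
  if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested)) < 0)
  {
    perror("setsockopt(SO_RCVBUF)");
    return -1;
  }

  int actual = 0;
  socklen_t len = sizeof(actual);

  if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len) < 0)
  {
    perror("getsockopt(SO_RCVBUF)");
    return -1;
  }

  fprintf(stderr, "receive buffer is now %d bytes\n", actual);
  return actual;
}

In the test app above, a call such as grow_recv_buffer(mFD, 4 * 1024 * 1024) placed in open_socket() between socket() and connect() would be the natural place to try this.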

Edit: I see in your code that you already do what I suggested (read smaller chunks with your own buffering), so just remove the MSG_WAITALL flag and it should work.

Oh, and when recv returns zero, that means the other end has closed the connection, and you should close it too.

Answer by David Schwartz

Consider these two possible rules:

  1. The receiver may wait for the sender to send more before receiving what has already been sent.

  2. The sender may wait for the receiver to receive what has already been sent before sending more.

We can have either of these rules, but we cannot have both of these rules.

Why? Because if the receiver is permitted to wait for the sender, that means the sender cannot wait for the receiver to receive before sending more, otherwise we deadlock. And if the sender is permitted to wait for the receiver, that means the receiver cannot wait for the sender to send before receiving more, otherwise we deadlock.

If both of these things happen at the same time, we deadlock. The sender will not send more until the receiver receives what has already been sent, and the receiver will not receive what has already been sent unless the sender sends more. Boom.

TCP chooses rule 2 (for reasons that should be obvious). Thus it cannot support rule 1. But in your code, you are the receiver, and you are waiting for the sender to send more before you receive what has already been sent. So this will deadlock.

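Putting the two answers together: the receiver has to keep draining the socket so the sender can make progress, rather than asking one recv() to wait for the whole message. A sketch of the read loop along those lines follows; the 64 KiB cap per read is an arbitrary illustrative choice and the function name is not from the original code.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch: read exactly 'size' bytes into 'data' by repeatedly taking
   whatever the kernel already has, instead of blocking with MSG_WAITALL
   for the full message. */
void read_message_drain(int fd, size_t size, char *data)
{
  size_t total = 0;

  while (total < size)
  {
    size_t want = size - total;

    if (want > 65536)      /* cap each read; the value is arbitrary */
      want = 65536;

    ssize_t num = recv(fd, data + total, want, 0);

    if (num == 0)          /* peer closed the connection */
      break;

    if (num < 0)
    {
      if (errno == EINTR)  /* interrupted by a signal, just retry */
        continue;

      perror("recv");
      exit(1);
    }

    total += (size_t)num;
  }

  fprintf(stderr, "read total of %zu bytes\n", total);
}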