C++ boost::asio cleanly disconnecting

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse it, you must do so under the same license and attribute the original authors (not this site). Original question: http://stackoverflow.com/questions/1993216/


boost::asio cleanly disconnecting

c++, sockets, tcp, network-programming, boost-asio

Asked by Fire Lancer

Sometimes boost::asio seems to disconnect before I want it to, i.e. before the server properly handles the disconnect. I'm not sure how this is possible, because the client seems to think it has fully sent the message, yet when the server emits the error it hasn't even read the message header... During testing this only happens maybe 1 in 5 times; the rest of the time the server receives the client's shutdown message and disconnects the client cleanly.


The error: "An existing connection was forcibly closed by the remote host"


The client disconnecting:


void disconnect()
{
    boost::system::error_code error;
    //just creates a simple buffer with a shutdown header
    boost::uint8_t *packet = createPacket(PC_SHUTDOWN,0);
    //sends it
    if(!sendBlocking(socket,packet,&error))
    {
        //didn't get here in my tests, so it's not that the write failed...
        logWrite(LOG_ERROR,"server",
            std::string("Error sending shutdown message.\n")
            + boost::system::system_error(error).what());
    }

    //actually disconnect
    socket.close();
    ioService.stop();
}
bool sendBlocking(boost::asio::ip::tcp::socket &socket,
    boost::uint8_t *data, boost::system::error_code* error)
{
    //get the length section from the message
    boost::uint16_t len = *(boost::uint16_t*)(data - 3);
    //send it
    asio::write(socket, asio::buffer(data-3,len+3),
        asio::transfer_all(), *error);
    deletePacket(data);
    return !(*error);
}

The server:


void Client::clientShutdown()
{
    //not getting here in problem cases
    disconnect();
}
void Client::packetHandler(boost::uint8_t type, boost::uint8_t *data,
    boost::uint16_t len, const boost::system::error_code& error)
{
    if(error)
    {
        //error handled here
        delete[] data;
        std::stringstream ss;
        ss << "Error recieving packet.\n";
        ss << logInfo() << "\n";
        ss << "Error: " << boost::system::system_error(error).what();
        logWrite(LOG_ERROR,"Client",ss.str());

        disconnect();
    }
    else
    {
        //call handlers based on type, most will then call startRead when
        //done to get the next packet. Note however, that clientShutdown
        //does not
        ...
    }
}



void startRead(boost::asio::ip::tcp::socket &socket, PacketHandler handler)
{
    boost::uint8_t *header = new boost::uint8_t[3];
    boost::asio::async_read(socket,boost::asio::buffer(header,3),
        boost::bind(&handleReadHeader,&socket,handler,header, 
        boost::asio::placeholders::bytes_transferred,boost::asio::placeholders::error));
}
void handleReadHeader(boost::asio::ip::tcp::socket *socket, PacketHandler handler,
    boost::uint8_t *header, size_t len, const boost::system::error_code& error)
{
    if(error)
    {
        //error "thrown" here, len always = 0 in problem cases...
        delete[] header;
        handler(0,0,0,error);
    }
    else
    {
        assert(len == 3);
        boost::uint16_t payLoadLen  = *((boost::uint16_t*)(header + 0));
        boost::uint8_t  type        = *((boost::uint8_t*) (header + 2));
        delete[] header;
        boost::uint8_t *payLoad = new boost::uint8_t[payLoadLen];

        boost::asio::async_read(*socket,boost::asio::buffer(payLoad,payLoadLen),
            boost::bind(&handleReadBody,socket,handler,
            type,payLoad,payLoadLen,
            boost::asio::placeholders::bytes_transferred,boost::asio::placeholders::error));
    }
}
void handleReadBody(ip::tcp::socket *socket, PacketHandler handler,
    boost::uint8_t type, boost::uint8_t *payLoad, boost::uint16_t len,
    size_t readLen, const boost::system::error_code& error)
{
    if(error)
    {
        delete[] payLoad;
        handler(0,0,0,error);
    }
    else
    {
        assert(len == readLen);
        handler(type,payLoad,len,error);
        //delete[] payLoad;
    }
}

Accepted answer by GrahamS

I think you should probably have a call to socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec) in there before the call to socket.close().


The boost::asio documentation for basic_stream_socket::close states:


For portable behaviour with respect to graceful closure of a connected socket, call shutdown() before closing the socket.


This should ensure that any pending operations on the socket are properly cancelled and any buffers are flushed prior to the call to socket.close.

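As a concrete illustration, here is a minimal sketch of that ordering, reusing the socket, ioService and logWrite names from the question (an assumption about how the rest of the client looks, not the answerer's exact code):

void disconnect()
{
    boost::system::error_code ec;
    //ask TCP to finish both directions gracefully before closing
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
    if(ec)
    {
        //e.g. boost::asio::error::not_connected if the peer already went away
        logWrite(LOG_ERROR,"client",
            std::string("Error shutting down socket.\n") + ec.message());
    }
    socket.close(ec);
    ioService.stop();
}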

Answered by William Symionow

I have tried to do this with both the close() method and the shutdown() method:


socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec)

The shutdown method is the better of the two. However, I find that relying on the destructor of the ASIO socket is the cleanest way to do it, as ASIO takes care of it all for you. So your goal is simply to let the socket fall out of scope. You can do this easily by holding the socket in a shared_ptr and resetting that shared_ptr to a fresh socket or to null. This calls the destructor of the ASIO socket and life is good.

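A rough sketch of the shared_ptr approach, assuming a small connection class that owns the socket (the Connection name and layout are illustrative only, not from the original post):

#include <boost/asio.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>

class Connection
{
public:
    explicit Connection(boost::asio::io_service &ioService)
        : socket(boost::make_shared<boost::asio::ip::tcp::socket>(ioService)) {}

    void disconnect()
    {
        //dropping the last reference destroys the socket; the asio
        //destructor closes the underlying descriptor for us
        socket.reset();
    }

private:
    boost::shared_ptr<boost::asio::ip::tcp::socket> socket;
};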

Answered by Chris H

Maybe this is what is happening:

也许这就是正在发生的事情:

  • Client sends the disconnect packet
  • Client shuts the socket down
  • The server's read handler gets called, but there is an error associated with the shutdown packet because the socket is already closed.

I see that in your read handlers, if there is an error, you never check whether your shutdown packet is actually there. Maybe it is. Basically what I'm saying is that your client is sometimes able to send both the shutdown packet and the close before the server has a chance to process them separately.

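One low-risk way to act on that observation is to look at which error the read handler actually reports and treat an orderly end-of-stream or reset as a disconnect rather than a protocol failure. A sketch against the error branch of Client::packetHandler from the question (this is an adaptation of the idea, not the answerer's exact suggestion):

if(error == boost::asio::error::eof ||
   error == boost::asio::error::connection_reset)
{
    //the client went away, possibly right after its PC_SHUTDOWN packet
    //was sent; treat it as a normal disconnect rather than an error
    delete[] data;
    disconnect();
}
else if(error)
{
    //genuinely unexpected failure: log it as before, then disconnect
    delete[] data;
    logWrite(LOG_ERROR,"Client",
        std::string("Error receiving packet.\n") + error.message());
    disconnect();
}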

Answered by flamemyst

Use async_write() and put socket.close() inside the write handler. This makes sure the packet is fully processed by Boost.Asio and not dropped in the middle of processing (because of the close() call).

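A minimal sketch of that idea, reusing createPacket, deletePacket, PC_SHUTDOWN and the 3-byte framing from the question (the handler names, and the assumption that socket and ioService are reachable here, are mine):

void handleShutdownSent(boost::uint8_t *packet,
    const boost::system::error_code &error, size_t /*bytesSent*/)
{
    deletePacket(packet);
    //only close once asio has finished (or failed) sending the packet
    boost::system::error_code ec;
    socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
    socket.close(ec);
    ioService.stop();
}
void disconnectAsync()
{
    boost::uint8_t *packet = createPacket(PC_SHUTDOWN,0);
    //same framing as sendBlocking: the length lives 3 bytes before the payload
    boost::uint16_t len = *(boost::uint16_t*)(packet - 3);

    boost::asio::async_write(socket,
        boost::asio::buffer(packet - 3, len + 3),
        boost::bind(&handleShutdownSent, packet,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}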

Answered by Jay

I have a very similar issue. I believe it's related to Windows recycling connections. Is the following familiar?


  • You get this error immediately upon starting the program, but not after a connection is established?
  • The error never happens if you wait more than four minutes before restarting your application?

The TCP specs specify that, by default, a closed TCP connection should wait four minutes for the final acknowledgment. You can see these connections in the FIN_WAIT state using netstat. The Windows OS detects when you try to connect to the exact same system again, takes these partially closed connections, and recycles them. Your second invocation of the program gets the 'closed' connection left behind by the first run; it gets the next acknowledgment and then really closes.
