How to set the maximum TCP Maximum Segment Size on Linux?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverFlow
Original URL: http://stackoverflow.com/questions/3857892/
Asked by Eric
In Linux, how do you set the maximum segment size that is allowed on a TCP connection? I need to set this for an application I did not write (so I cannot use setsockopt to do it). I need to set this ABOVE the mtu in the network stack.
I have two streams sharing the same network connection. One sends small packets periodically, which need absolute minimum latency. The other sends tons of data--I am using SCP to simulate that link.
I have set up traffic control (tc) to give the minimum-latency traffic high priority. The problem I am running into, though, is that the TCP packets that are coming down from SCP end up with sizes up to 64K bytes. Yes, these are broken into smaller packets based on mtu, but this unfortunately occurs AFTER tc prioritizes the packets. Thus, my low-latency packet gets stuck behind up to 64K bytes of SCP traffic.
This article indicates that on Windows you can set this value.
Is there something on Linux I can set? I've tried ip route and iptables, but these are applied too low in the network stack. I need to limit the TCP packet size before tc, so it can prioritize the high priority packets appropriately.
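For reference, a minimal sketch of the kind of tc priority setup described above (the interface name eth0 and the destination port 5000 are illustrative assumptions, not values from the question):

tc qdisc add dev eth0 root handle 1: prio
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 5000 0xffff flowid 1:1

The prio qdisc dequeues band 1:1 first, so traffic matched by the filter jumps ahead of the bulk SCP traffic sitting in the other bands.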
Answered by caf
The upper bound of the advertised TCP MSS is the MTU of the first hop route. If you're seeing 64k segments, that tends to indicate that the first hop route MTU is excessively large - are you using loopback or something for testing?
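To check what the first-hop MTU actually is, something like the following can be used (the destination address and eth0 are placeholders):

ip route get 192.168.1.1
ip link show dev eth0

The second command prints the interface MTU; loopback, for example, typically has an MTU of 65536, which would explain 64k segments.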
Answered by apenwarr
You are definitely misdiagnosing the problem; as someone else pointed out, tc doesn't see TCP packets, it sees IP packets, and they'd already be in chunks at that point.
You are probably just experiencing bufferbloat: you're overloading your outbound queue in a totally separate device (probably a DSL modem or cable modem). The only fix is to tell tc to limit your outbound bandwidth to less than the modem's bandwidth, e.g. using TBF.
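A minimal TBF sketch along those lines, assuming eth0 and an upstream link of roughly 1 Mbit/s (the rate, burst and latency values are illustrative and need tuning to sit just below the real modem rate):

tc qdisc add dev eth0 root tbf rate 900kbit burst 10kb latency 50ms

Keeping the rate slightly below the modem's uplink speed makes the queue build up on the Linux box, where tc can actually prioritize, rather than inside the modem.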
Answered by CitizenB
Are you using tcp segmentation offload to the nic? (You can use "ethtool -k $your_network_device" to see the offload settings.) This is the only way as far as I know that you would see 64k tcp packets with a device MTU of 1500. Not that this answers the question, but it might help avoid misdiagnosis.
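For example (eth0 assumed):

ethtool -k eth0
ethtool -K eth0 tso off gso off

The first command lists the current offload settings (look for tcp-segmentation-offload and generic-segmentation-offload); the second turns them off, after which the stack should hand MTU-sized packets to the qdisc instead of 64k super-packets.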
Answered by rashok
The ip route command with the advmss option helps to set the MSS value.
ip route add 192.168.1.0/24 dev eth0 advmss 1500
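Note that advmss takes the MSS itself, so for a 1500-byte MTU the usual value would be 1460 rather than 1500 (see the next answer for the arithmetic). The route can be inspected afterwards to confirm the option took effect:

ip route show dev eth0

The matching entry should now include the advmss value that was set.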
Answered by LinconFive
MSS = MTU - 40 bytes (standard TCP/IP overhead of 40 bytes [20 + 20])
If the MTU is 1500 bytes then the MSS will be 1460 bytes.
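To see the MSS that actually gets advertised on a new connection, one option (eth0 assumed) is to watch the TCP handshake:

tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0'

The options of each SYN packet are printed, including something like "mss 1460".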