Disclaimer: this page reproduces a popular StackOverflow question and its answers under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me) at StackOverflow.
Original question: http://stackoverflow.com/questions/614795/
Simulate delayed and dropped packets on Linux
Asked by
I would like to simulate packet delay and loss for UDP and TCP on Linux to measure the performance of an application. Is there a simple way to do this?
Answered by unwind
Haven't tried it myself, but this page has a list of plugin modules that run in Linux's built-in iptables IP filtering system. One of the modules is called "nth", and allows you to set up a rule that will drop packets at a configurable rate. Might be a good place to start, at least.
Answered by Judge Maygarden
This tutorial on networking physics simulations contains a C++ class in the sample code for simulating latency and packet loss in a UDP connection and may be of guidance. See the public latency and packetLoss variables of the Connection class found in the Connection.h file of the downloadable source code.
Answered by hillu
iptables(8) has a statistic match module that can be used to match every nth packet. To drop this packet, just append -j DROP.
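For example, a minimal sketch (the INPUT chain is an assumption; adjust the chain and add interface or port matches to suit your setup) that drops every 10th incoming packet using the statistic match in nth mode:

iptables -A INPUT -m statistic --mode nth --every 10 --packet 0 -j DROP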
Answered by Mark
Answered by ephemient
netem leverages functionality already built into Linux and userspace utilities to simulate networks. This is actually what Mark's answer refers to, by a different name.
The examples on their homepage already show how you can achieve what you've asked for:
Examples
Emulating wide area network delays
This is the simplest example; it just adds a fixed amount of delay to all packets going out of the local Ethernet.
# tc qdisc add dev eth0 root netem delay 100ms
Now a simple ping test to a host on the local network should show an increase of 100 milliseconds. The delay is limited by the clock resolution of the kernel (Hz). On most 2.4 systems, the system clock runs at 100 Hz, which allows delays in increments of 10 ms. On 2.6, the value is a configuration parameter ranging from 1000 to 100 Hz.
Later examples just change parameters without reloading the qdisc.
Real wide area networks show variability, so it is possible to add random variation.
# tc qdisc change dev eth0 root netem delay 100ms 10ms
This causes the added delay to be 100 ± 10 ms. Network delay variation isn't purely random, so to emulate that there is a correlation value as well.
# tc qdisc change dev eth0 root netem delay 100ms 10ms 25%
This causes the added delay to be 100 ± 10 ms, with the next random element depending 25% on the last one. This isn't true statistical correlation, but an approximation.
Delay distribution
Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. The netem discipline can take a table to specify a non-uniform distribution.
# tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal
The actual tables (normal, pareto, paretonormal) are generated as part of the iproute2 compilation and placed in /usr/lib/tc; so it is possible with some effort to make your own distribution based on experimental data.
Packet loss
Random packet loss is specified in the 'tc' command in percent. The smallest possible non-zero value is:
2^(-32) = 0.0000000232%
# tc qdisc change dev eth0 root netem loss 0.1%
This causes one tenth of a percent of packets (i.e. 1 out of 1000) to be randomly dropped.
An optional correlation may also be added. This causes the random number generator to be less random and can be used to emulate packet burst losses.
# tc qdisc change dev eth0 root netem loss 0.3% 25%
This will cause 0.3% of packets to be lost, and each successive probability depends by a quarter on the last one.
Prob(n) = 0.25 × Prob(n-1) + 0.75 × Random
Note that you should use tc qdisc add if you have no rules for that interface, or tc qdisc change if you already have rules for that interface. Attempting to use tc qdisc change on an interface with no rules will give the error RTNETLINK answers: No such file or directory.
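As a sketch of how these pieces combine (eth0 is an assumption; substitute your interface), delay variation and loss can be set up in a single netem qdisc, and the qdisc can be deleted to restore normal behaviour:

# tc qdisc add dev eth0 root netem delay 100ms 10ms loss 0.5%
# tc qdisc del dev eth0 root netem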
Answered by Bjarke Freund-Hansen
For dropped packets I would simply use iptables and the statistic module.
iptables -A INPUT -m statistic --mode random --probability 0.01 -j DROP
The above will drop an incoming packet with a 1% probability. Be careful: with anything above about 0.14, most of your TCP connections will most likely stall completely.
Take a look at man iptables and search for "statistic" for more information.
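To avoid disturbing unrelated traffic (such as your own SSH session), it may be safer to restrict the rule to the traffic under test. A sketch, assuming the application under test listens on UDP port 5000 (the port is hypothetical):

iptables -A INPUT -p udp --dport 5000 -m statistic --mode random --probability 0.05 -j DROP

The same rule with -D in place of -A removes it again.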
Answered by Elalfer
You can try NISTNet (http://snad.ncsl.nist.gov/nistnet/). It's quite an old NIST project (last release 2005), but it works for me.
Answered by Alex Giotis
An easy-to-use network fault injection tool is Saboteur. It can simulate:
- Total network partition
- Remote service dead (not listening on the expected port)
- Delays
- Packet loss
- TCP connection timeout (as often happens when two systems are separated by a stateful firewall)
Answered by gaetano
One of the most widely used tools in the scientific community for this purpose is DummyNet. Once you have installed the ipfw kernel module, introducing a 50 ms propagation delay between two machines is as simple as running these commands:
./ipfw pipe 1 config delay 50ms
./ipfw add 1000 pipe 1 ip from $IP_MACHINE_1 to $IP_MACHINE_2
In order to also introduce a 50% packet loss rate, you have to run:
./ipfw pipe 1 config plr 0.5
More details here.
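As a follow-up sketch (assuming the same pipe and rule numbers as above), the dummynet configuration can be inspected and torn down once the measurements are done:

./ipfw pipe 1 show
./ipfw delete 1000
./ipfw pipe 1 delete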