Linux IPC performance: Named Pipes vs Sockets

Note: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/1235958/

IPC performance: Named Pipe vs Socket

Tags: linux, performance, sockets, ipc, named-pipes

Asked by user19745

Everyone seems to say named pipes are faster than sockets IPC. How much faster are they? I would prefer to use sockets because they can do two-way communication and are very flexible, but I will choose speed over flexibility if the difference is considerable.

Answered by MarkR

Named pipes and sockets are not functionally equivalent; sockets provide more features (they are bidirectional, for a start).

We cannot tell you which will perform better, but I strongly suspect it doesn't matter.

Unix domain sockets will do pretty much what TCP sockets will, but only on the local machine and with (perhaps a bit) lower overhead.

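To make this concrete, here is a minimal Unix domain socket sketch (the socket path is hypothetical and error handling is mostly omitted):

/* Minimal Unix domain socket demo: the child sends, the parent prints
   what it received. A sketch only; real code needs error handling. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCK_PATH "/tmp/demo.sock"          /* hypothetical path */

int main(void) {
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    unlink(SOCK_PATH);                      /* remove a stale socket file */

    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    if (fork() == 0) {                      /* child: the client */
        int c = socket(AF_UNIX, SOCK_STREAM, 0);
        connect(c, (struct sockaddr *)&addr, sizeof(addr));
        write(c, "ping", 4);
        close(c);
        return 0;
    }

    int conn = accept(srv, NULL, NULL);     /* parent: the server */
    char buf[16];
    ssize_t n = read(conn, buf, sizeof(buf));
    printf("server got: %.*s\n", (int)n, buf);
    close(conn);
    close(srv);
    unlink(SOCK_PATH);
    return 0;
}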

If a Unix socket isn't fast enough and you're transferring a lot of data, consider using shared memory between your client and server (which is a LOT more complicated to set up).

Unix and NT both have "named pipes", but their feature sets are totally different.

Answered by shodanex

I would suggest you take the easy path first, carefully isolating the IPC mechanism so that you can switch from a socket to a pipe, but I would definitely go with a socket first. Be sure IPC performance is actually a problem before optimizing preemptively.

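A sketch of that isolation (the names here are illustrative, not from the answer): since pipes and local sockets are both plain file descriptors on Linux, the application can code against one tiny interface, and only the constructors know which transport is behind it:

/* Hypothetical IPC abstraction: the rest of the program never calls
   socket or pipe APIs directly, so the transport is a one-file swap. */
#include <stddef.h>
#include <unistd.h>
#include <sys/types.h>

typedef struct {
    int fd;   /* pipes and sockets are both file descriptors */
} ipc_channel;

static ssize_t ipc_send(ipc_channel *c, const void *buf, size_t len) {
    return write(c->fd, buf, len);
}

static ssize_t ipc_recv(ipc_channel *c, void *buf, size_t len) {
    return read(c->fd, buf, len);
}

static void ipc_close(ipc_channel *c) {
    close(c->fd);
}

/* Only the constructors differ per transport, e.g. (prototypes):
   ipc_channel ipc_connect_unix_socket(const char *path);
   ipc_channel ipc_open_fifo(const char *path);            */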

And if you get in trouble because of IPC speed, I think you should consider switching to shared memory rather than to pipes.

If you want to do some transfer speed testing, you should try socat, which is a very versatile program that allows you to create almost any kind of tunnel.

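For instance (a sketch; the socket path and transfer size are arbitrary), socat can stand up a throwaway Unix-socket sink so you can time a bulk transfer through it:

# terminal 1: listen on a Unix socket and discard whatever arrives
socat -u UNIX-LISTEN:/tmp/bench.sock /dev/null

# terminal 2: time pushing 1 GiB through the socket
time dd if=/dev/zero bs=1M count=1024 | socat -u - UNIX-CONNECT:/tmp/bench.sock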

Answered by Tim Post

I'm going to agree with shodanex: it looks like you're prematurely trying to optimize something that isn't yet problematic. Unless you know sockets are going to be a bottleneck, I'd just use them.

A lot of people who swear by named pipes find small savings (depending on how well everything else is written), but end up with code that spends more time blocking for an IPC reply than it does doing useful work. Sure, non-blocking schemes help with this, but those can be tricky. Having spent years bringing old code into the modern age, I can say the speedup is almost nil in the majority of cases I've seen.

If you really think that sockets are going to slow you down, then go out of the gate using shared memory, with careful attention to how you use locks. Again, in all actuality, you might find a small speedup, but notice that you're wasting a portion of it waiting on mutual exclusion locks. I'm not going to advocate a trip to futex hell (well, not quite hell anymore in 2015, depending upon your experience).

Pound for pound, sockets are (almost) always the best way to go for user-space IPC under a monolithic kernel... and (usually) the easiest to debug and maintain.

Answered by Damien

If you do not need speed, sockets are the easiest way to go!

If what you are looking at is speed, the fastest solution is shared memory, not named pipes.

Answered by daghan

For two-way communication with named pipes:

  • If you have few processes, you can open two pipes, one for each direction (processA2ProcessB and processB2ProcessA)
  • If you have many processes, you can open in and out pipes for every process (processAin, processAout, processBin, processBout, processCin, processCout etc)
  • Or you can go hybrid as always :)

Named pipes are quite easy to implement.

E.g., I implemented a project in C with named pipes; thanks to standard file input/output based communication (fopen, fprintf, fscanf, ...), it was so easy and clean (if that is also a consideration).

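As a minimal sketch of that style (the FIFO path is hypothetical and error handling is minimal), the writer and reader are ordinary stdio code:

/* Writer process: create the FIFO and send a line over it. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "/tmp/a2b.fifo";   /* hypothetical pipe name */
    mkfifo(path, 0666);                   /* no-op if it already exists */

    FILE *out = fopen(path, "w");         /* blocks until a reader opens */
    if (!out) { perror("fopen"); return 1; }
    fprintf(out, "%d hello\n", 42);       /* plain-text protocol */
    fclose(out);
    return 0;
}

/* Reader process (a separate program) is the matching fscanf side:
   int n; char word[32];
   FILE *in = fopen("/tmp/a2b.fifo", "r");
   fscanf(in, "%d %31s", &n, word);                                  */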

I even coded them with Java (I was serializing and sending objects over them!)

Named pipes have one disadvantage:

  • they do not scale across multiple computers like sockets do, since they rely on the filesystem (assuming a shared filesystem is not an option)

Answered by Yuliy

Keep in mind that sockets do not necessarily mean IP (and TCP or UDP). You can also use UNIX sockets (PF_UNIX), which offer a noticeable performance improvement over connecting to 127.0.0.1.

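To illustrate (a sketch with hypothetical addresses): switching from loopback TCP to a Unix socket is mostly a matter of the address family and address structure; the rest of the socket code stays the same:

/* The only real difference is the address setup.
   (AF_UNIX is the modern spelling of PF_UNIX.) */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

int connect_tcp_loopback(void) {            /* IP over loopback */
    struct sockaddr_in a = { .sin_family = AF_INET, .sin_port = htons(5000) };
    inet_pton(AF_INET, "127.0.0.1", &a.sin_addr);
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    connect(fd, (struct sockaddr *)&a, sizeof(a));
    return fd;
}

int connect_unix(void) {                    /* Unix domain socket */
    struct sockaddr_un a = { .sun_family = AF_UNIX };
    strncpy(a.sun_path, "/tmp/server.sock", sizeof(a.sun_path) - 1);
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    connect(fd, (struct sockaddr *)&a, sizeof(a));
    return fd;
}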

Answered by Hibou57

As is often the case, numbers say more than feelings. Here are some data: Pipe vs Unix Socket Performance (opendmx.net).

This benchmark shows pipes being about 12 to 15% faster.

Answered by Lothar

One problem with sockets is that they do not have a way to flush the buffer. There is something called the Nagle algorithm, which collects all data and flushes it after 40 ms. So if what matters is responsiveness and not bandwidth, you might be better off with a pipe.

You can disable Nagle with the socket option TCP_NODELAY, but then the reading end will never receive two short messages in a single read call.

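Disabling it is a one-line setsockopt; a minimal sketch:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* fd must be a TCP socket; returns 0 on success, -1 on error */
int disable_nagle(int fd) {
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}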

So test it. I ended up using none of this and implemented memory-mapped queues with a pthread mutex and semaphore in shared memory, avoiding a lot of kernel system calls (but today those aren't very slow anymore).

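The author's code isn't shown, but the ingredients described look roughly like this (a sketch with illustrative names; the queue logic itself is omitted):

/* A memory-mapped region shared between processes, guarded by a
   process-shared pthread mutex. Link with -lpthread (and -lrt on
   older glibc for shm_open). */
#include <fcntl.h>
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    pthread_mutex_t lock;     /* must be PTHREAD_PROCESS_SHARED */
    size_t head, tail;
    char data[4096];          /* the ring buffer itself */
} shm_queue;

shm_queue *shm_queue_create(const char *name) {
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(shm_queue));
    shm_queue *q = mmap(NULL, sizeof(shm_queue),
                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                /* the mapping stays valid */

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&q->lock, &attr);   /* creator initializes once */
    q->head = q->tail = 0;
    return q;
}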

Answered by Amit Vujic

You can use a lightweight solution like ZeroMQ [zmq/0mq]. It is very easy to use and dramatically faster than sockets.

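For reference, a minimal libzmq reply-side sketch over its ipc:// transport (the endpoint name is arbitrary; compile with -lzmq):

/* ZeroMQ REP socket over ipc:// (backed by a Unix domain socket). */
#include <stdio.h>
#include <zmq.h>

int main(void) {
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "ipc:///tmp/zmq-demo");   /* arbitrary endpoint */

    char buf[64];
    int n = zmq_recv(rep, buf, sizeof(buf) - 1, 0);
    if (n >= 0) {
        buf[n] = '\0';
        printf("got: %s\n", buf);
        zmq_send(rep, "ok", 2, 0);          /* REP must answer each REQ */
    }
    zmq_close(rep);
    zmq_ctx_destroy(ctx);
    return 0;
}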

Answered by chronoxor

You'll get the best results with a shared memory solution.

Named pipes are only about 16% better than TCP sockets.

The results below were obtained with IPC benchmarking:

  • System: Linux (Linux ubuntu 4.4.0 x86_64 i7-6700K 4.00GHz)
  • Message: 128 bytes
  • Messages count: 1000000

Pipe benchmark:

Message size:       128
Message count:      1000000
Total duration:     27367.454 ms
Average duration:   27.319 us
Minimum duration:   5.888 us
Maximum duration:   15763.712 us
Standard deviation: 26.664 us
Message rate:       36539 msg/s

FIFOs (named pipes) benchmark:

Message size:       128
Message count:      1000000
Total duration:     38100.093 ms
Average duration:   38.025 us
Minimum duration:   6.656 us
Maximum duration:   27415.040 us
Standard deviation: 91.614 us
Message rate:       26246 msg/s

Message Queue benchmark:

Message size:       128
Message count:      1000000
Total duration:     14723.159 ms
Average duration:   14.675 us
Minimum duration:   3.840 us
Maximum duration:   17437.184 us
Standard deviation: 53.615 us
Message rate:       67920 msg/s

Shared Memory benchmark:

Message size:       128
Message count:      1000000
Total duration:     261.650 ms
Average duration:   0.238 us
Minimum duration:   0.000 us
Maximum duration:   10092.032 us
Standard deviation: 22.095 us
Message rate:       3821893 msg/s

TCP sockets benchmark:

Message size:       128
Message count:      1000000
Total duration:     44477.257 ms
Average duration:   44.391 us
Minimum duration:   11.520 us
Maximum duration:   15863.296 us
Standard deviation: 44.905 us
Message rate:       22483 msg/s

Unix domain sockets benchmark:

Message size:       128
Message count:      1000000
Total duration:     24579.846 ms
Average duration:   24.531 us
Minimum duration:   2.560 us
Maximum duration:   15932.928 us
Standard deviation: 37.854 us
Message rate:       40683 msg/s

ZeroMQ benchmark:

Message size:       128
Message count:      1000000
Total duration:     64872.327 ms
Average duration:   64.808 us
Minimum duration:   23.552 us
Maximum duration:   16443.392 us
Standard deviation: 133.483 us
Message rate:       15414 msg/s