
Note: this page is a translation of a popular StackOverflow Q&A, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/2281204/


Which Linux IPC technique to use?

Tags: linux, ipc

Asked by RishiD

We are still in the design phase of our project, but we are thinking of having three separate processes on an embedded Linux kernel. One of the processes will be a communications module which handles all communications to and from the device through various mediums.


The other two processes will need to be able to send/receive messages through the communication process. I am trying to evaluate the IPC techniques that Linux provides; the messages the other processes will be sending will vary in size, from debug logs to streaming media at a ~5 Mbit/s rate. Also, the media could be streaming in and out simultaneously.


Which IPC technique would you suggest for this application? http://en.wikipedia.org/wiki/Inter-process_communication


The processor is running at around 400-500 MHz, if that changes anything. It does not need to be cross-platform; Linux only is fine. Implementation in C or C++ is required.


Accepted answer by jldupont

I would go for Unix Domain Sockets: less overhead than IP sockets (i.e. no inter-machine comms) but same convenience otherwise.


Answer by jschmier

When selecting your IPC you should consider causes for performance differences including transfer buffer sizes, data transfer mechanisms, memory allocation schemes, locking mechanism implementations, and even code complexity.


Of the available IPC mechanisms, the choice for performance often comes down to Unix domain sockets or named pipes (FIFOs). I read a paper on Performance Analysis of Various Mechanisms for Inter-process Communication that indicates Unix domain sockets for IPC may provide the best performance. I have seen conflicting results elsewhere which indicate pipes may be better.


When sending small amounts of data, I prefer named pipes (FIFOs) for their simplicity. This requires a pair of named pipes for bi-directional communication. Unix domain sockets take a bit more overhead to set up (socket creation, initialization and connection), but are more flexible and may offer better performance (higher throughput).


You may need to run some benchmarks for your specific application/environment to determine what will work best for you. From the description provided, it sounds like Unix domain sockets may be the best fit.




Beej's Guide to Unix IPC is good for getting started with Linux/Unix IPC.


Answer by MarkR

If performance really becomes a problem you can use shared memory, but it's a lot more complicated than the other methods: you'll need a signalling mechanism to indicate that data is ready (a semaphore, etc.) as well as locks to prevent concurrent access to structures while they're being modified.


The upside is that you can transfer a lot of data without having to copy it in memory, which will definitely improve performance in some cases.


Perhaps there are usable libraries which provide higher level primitives via shared memory.


Shared memory is generally obtained by mmap()ing the same file using MAP_SHARED (which can be on a tmpfs if you don't want it persisted); a lot of apps also use System V shared memory (IMHO for stupid historical reasons; it's a much less nice interface to the same thing).


Answer by Dipstick

Can't believe nobody has mentioned dbus.


http://www.freedesktop.org/wiki/Software/dbus


http://en.wikipedia.org/wiki/D-Bus


It might be a bit over the top if your application is architecturally simple, in which case, in a controlled embedded environment where performance is crucial, you can't beat shared memory.


Answer by jeremiah

As of this writing (November 2014), Kdbus and Binder have left the staging branch of the Linux kernel. There is no guarantee at this point that either will make it in, but the outlook is somewhat positive for both. Binder is a lightweight IPC mechanism in Android; Kdbus is a dbus-like IPC mechanism in the kernel which reduces context switches, thus greatly speeding up messaging.


There is also "Transparent Inter-Process Communication" or TIPC, which is robust and useful for clustering and multi-node setups: http://tipc.sourceforge.net/


Answer by c0der

Unix domain sockets will address most of your IPC requirements. You don't really need a dedicated communication process in this case, since the kernel provides this IPC facility. Also, look at POSIX message queues, which in my opinion are among the most under-utilized IPC mechanisms in Linux but come in very handy in many cases where n:1 communication is needed.
