Linux: What is the difference between DMA and memory-mapped IO?

Warning: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/3851677/

Date: 2020-08-04 23:33:49 | Source: igfitidea

What is the difference between DMA and memory-mapped IO?

Tags: linux, operating-system, linux-kernel

Asked by brett

What is the difference between DMA and memory-mapped IO? They both look similar to me.

Accepted answer by Greg Hewgill

Memory-mapped I/O allows the CPU to control hardware by reading and writing specific memory addresses. Usually, this would be used for low-bandwidth operations such as changing control bits.

DMA allows hardware to directly read and write memory without involving the CPU. Usually, this would be used for high-bandwidth operations such as disk I/O or camera video input.

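In a Linux driver, the memory a device will DMA into is typically obtained through the kernel's DMA API. The fragment below is a non-runnable kernel-module sketch (the `struct device` pointer and the buffer size are placeholders); it shows the coherent-allocation path, where `dma_alloc_coherent()` returns both a CPU virtual address for the driver and a bus address to program into the device.

```c
#include <linux/dma-mapping.h>

#define BUF_SIZE 4096  /* placeholder transfer size */

static void *cpu_addr;        /* what the driver dereferences        */
static dma_addr_t dma_handle; /* what the device hardware is given   */

static int setup_dma_buffer(struct device *dev)
{
    /* Allocate a buffer visible to both the CPU and the device. */
    cpu_addr = dma_alloc_coherent(dev, BUF_SIZE, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
        return -ENOMEM;

    /* ... write dma_handle into a device-specific address register,
     * then tell the device to start its transfer ... */
    return 0;
}
```

The split between `cpu_addr` and `dma_handle` is the whole point: the device sees bus addresses, not kernel virtual addresses, so the driver must never hand it a plain pointer.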
Here is a paper with a thorough comparison between MMIO and DMA.

Design Guidelines for High Performance RDMA Systems

Answered by caf

Memory-mapped IO means that the device registers are mapped into the machine's memory space - when those memory regions are read or written by the CPU, it's reading from or writing to the device, rather than real memory. To transfer data from the device to an actual memory buffer, the CPU has to read the data from the memory-mapped device registers and write it to the buffer (and the converse for transferring data to the device).

With a DMA transfer, the device is able to directly transfer data to or from a real memory buffer itself. The CPU tells the device the location of the buffer, and then can perform other work while the device is directly accessing memory.

Answered by Eric Seppanen

Since others have already answered the question, I'll just add a little bit of history.

Back in the old days, on x86 (PC) hardware, there was only I/O space and memory space. These were two different address spaces, accessed with different bus protocol and different CPU instructions, but able to talk over the same plug-in card slot.

Most devices used I/O space for both the control interface and the bulk data-transfer interface. The simple way to access data was to execute lots of CPU instructions to transfer data one word at a time from an I/O address to a memory address (sometimes known as "bit-banging.")

In order to move data from devices to host memory autonomously, there was no support in the ISA bus protocol for devices to initiate transfers. A compromise solution was invented: the DMA controller. This was a piece of hardware that sat alongside the CPU and initiated transfers to move data from a device's I/O address to memory, or vice versa. Because the I/O address is the same, the DMA controller is doing the exact same operations as a CPU would, but a little more efficiently, and it allows the CPU some freedom to keep running in the background (though possibly not for long, since it can't talk to memory during the transfer).

Fast-forward to the days of PCI, and the bus protocols got a lot smarter: any device can initiate a transfer. So it's possible for, say, a RAID controller card to move any data it likes to or from the host at any time it likes. This is called "bus master" mode, but for no particular reason people continue to refer to this mode as "DMA" even though the old DMA controller is long gone. Unlike old DMA transfers, there is frequently no corresponding I/O address at all, and the bus master mode is frequently the only interface present on the device, with no CPU "bit-banging" mode at all.

Answered by Usman Gill

Direct Memory Access (DMA) is a technique to transfer data from I/O to memory and from memory to I/O without the intervention of the CPU. For this purpose, a special chip, named the DMA controller, is used to control all activities and synchronization of the data. As a result, compared to other data-transfer techniques, DMA is much faster.

On the other hand, virtual memory acts as a cache between main memory and secondary memory. Data is fetched in advance from secondary memory (the hard disk) into main memory so that it is already available in main memory when needed. This allows us to run more applications on the system than the physical memory alone could support.
