Note: this page is an English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must likewise follow the CC BY-SA license and attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/8546273/
Is non-blocking I/O really faster than multi-threaded blocking I/O? How?
Asked by yankee
I searched the web for some technical details about blocking I/O and non-blocking I/O, and I found several people stating that non-blocking I/O would be faster than blocking I/O. For example, in this document.
If I use blocking I/O, then of course the thread that is currently blocked can't do anything else... Because it's blocked. But as soon as a thread starts being blocked, the OS can switch to another thread and not switch back until there is something to do for the blocked thread. So as long as there is another thread on the system that needs CPU and is not blocked, there should not be any more CPU idle time compared to an event based non-blocking approach, is there?
Besides reducing the time the CPU is idle I see one more option to increase the number of tasks a computer can perform in a given time frame: Reduce the overhead introduced by switching threads. But how can this be done? And is the overhead large enough to show measurable effects? Here is an idea on how I can picture it working:
- To load the contents of a file, an application delegates this task to an event-based i/o framework, passing a callback function along with a filename
- The event framework delegates to the operating system, which programs a DMA controller of the hard disk to write the file directly to memory
- The event framework allows further code to run.
- Upon completion of the disk-to-memory copy, the DMA controller causes an interrupt.
- The operating system's interrupt handler notifies the event-based i/o framework about the file being completely loaded into memory. How does it do that? Using a signal??
- The code that is currently run within the event i/o framework finishes.
- The event-based i/o framework checks its queue and sees the operating system's message from step 5 and executes the callback it got in step 1.
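The sequence above can be sketched in Python, with the caveat that this is a toy model: a background thread stands in for the DMA controller and interrupt handler, and a plain queue stands in for the framework's event queue. All the names here (`read_file_async`, `run_event_loop`) are illustrative, not any real framework's API.

```python
import queue
import threading

# The framework's event queue: completed operations land here (step 5).
completions = queue.Queue()

def read_file_async(path, callback):
    """Step 1: delegate the read, passing a filename and a callback."""
    def worker():
        # Steps 2-4: the "DMA transfer" happens off the main thread.
        with open(path, "rb") as f:
            data = f.read()
        # Step 5: notify the event framework that the data is in memory.
        completions.put((callback, data))
    threading.Thread(target=worker, daemon=True).start()

def run_event_loop(n_events):
    """Steps 6-7: check the queue and execute the registered callbacks."""
    for _ in range(n_events):
        callback, data = completions.get()
        callback(data)
```

The main thread never blocks on the read itself; it only blocks (briefly) when it decides to drain the completion queue.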
Is that how it works? If it does not, how does it work? That means that the event system can work without ever needing to explicitly touch the stack (unlike a real scheduler, which would need to back up the stack and copy another thread's stack into memory while switching threads)? How much time does this actually save? Is there more to it?
Accepted answer by Werner Henze
The biggest advantage of nonblocking or asynchronous I/O is that your thread can continue its work in parallel. Of course you can also achieve this using an additional thread. As you stated, for best overall (system) performance I guess it would be better to use asynchronous I/O and not multiple threads (thus reducing thread switching).
Let's look at possible implementations of a network server program that shall handle 1000 clients connected in parallel:
- One thread per connection (can be blocking I/O, but can also be non-blocking I/O).
Each thread requires memory resources (kernel memory, too!), which is a disadvantage, and every additional thread means more work for the scheduler.
- One thread for all connections.
This takes load off the system because we have fewer threads. But it also prevents you from using the full performance of your machine, because you might end up driving one processor to 100% while letting all the other processors idle.
- A few threads, where each thread handles some of the connections.
This takes load off the system because there are fewer threads, and it can use all available processors. On Windows this approach is supported by the Thread Pool API.
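Design 2 above ("one thread for all connections") can be sketched with Python's selectors module, which wraps the OS readiness APIs (select/epoll/kqueue). This is a hypothetical echo server, not a production design; `serve_once` just runs a fixed number of select rounds so the sketch terminates.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # The listening socket is readable: a client is waiting to connect.
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    # A client socket is readable: echo its data, or close on EOF.
    data = conn.recv(4096)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

def serve_once(server_sock, n_rounds):
    # One thread multiplexes every connection: each select() round
    # dispatches to whichever handler was registered for the socket.
    server_sock.setblocking(False)
    sel.register(server_sock, selectors.EVENT_READ, accept)
    for _ in range(n_rounds):
        for key, _ in sel.select():
            key.data(key.fileobj)
```

The single thread never blocks on any one connection; it only blocks in select(), waking when any registered socket is ready.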
Of course having more threads is not per se a problem. As you might have recognized I chose quite a high number of connections/threads. I doubt that you'll see any difference between the three possible implementations if we are talking about only a dozen threads (this is also what Raymond Chen suggests on the MSDN blog post Does Windows have a limit of 2000 threads per process?).
On Windows, using unbuffered file I/O means that writes must be of a size that is a multiple of the page size. I have not tested it, but it sounds like this could also positively affect write performance for buffered synchronous and asynchronous writes.
The steps 1 to 7 you describe give a good idea of how it works. On Windows the operating system will inform you about completion of an asynchronous I/O (WriteFile with an OVERLAPPED structure) using an event or a callback. Callback functions will only be called, for example, when your code calls WaitForMultipleObjectsEx with bAlertable set to true.
Some more reading on the web:
- Multiple Threads in the User Interface on MSDN, which also briefly covers the cost of creating threads
- The section Threads and Thread Pools says "Although threads are relatively easy to create and use, the operating system allocates a significant amount of time and other resources to manage them."
- The CreateThread documentation on MSDN says "However, your application will have better performance if you create one thread per processor and build queues of requests for which the application maintains the context information."
- The old article Why Too Many Threads Hurts Performance, and What to do About It
Answered by Florin Dumitrescu
I/O includes multiple kinds of operations, like reading and writing data from hard drives, accessing network resources, calling web services or retrieving data from databases. Depending on the platform and on the kind of operation, asynchronous I/O will usually take advantage of any hardware or low-level system support for performing the operation. This means that it will be performed with as little impact as possible on the CPU.
At application level, asynchronous I/O prevents threads from having to wait for I/O operations to complete. As soon as an asynchronous I/O operation is started, it releases the thread on which it was launched and a callback is registered. When the operation completes, the callback is queued for execution on the first available thread.
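The start-then-callback pattern described above can be sketched with Python's concurrent.futures: the submitting thread is released immediately, and the callback runs on whichever pool thread completes the operation. `slow_read` and `read_async` are illustrative names standing in for a real I/O call and a real async API.

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)

def slow_read(path):
    # Stand-in for a long-running I/O operation.
    with open(path, "rb") as f:
        return f.read()

def read_async(path, callback):
    # Start the operation and register a callback; the caller's thread
    # is free as soon as submit() returns.
    future = pool.submit(slow_read, path)
    future.add_done_callback(lambda fut: callback(fut.result()))
    return future
```

The callback executes on a pool thread once the result is available, mirroring "the callback is queued for execution on the first available thread".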
If the I/O operation is executed synchronously, it keeps its running thread doing nothing until the operation completes. The runtime doesn't know when the I/O operation completes, so it will periodically provide some CPU time to the waiting thread, CPU time that could otherwise have been used by other threads that have actual CPU-bound operations to perform.
So, as @user1629468 mentioned, asynchronous I/O does not provide better performance but rather better scalability. This is obvious when running in contexts that have a limited number of threads available, as is the case with web applications. Web applications usually use a thread pool from which they assign a thread to each request. If requests are blocked on long-running I/O operations, there is the risk of depleting the web pool and making the web application freeze or slow to respond.
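The pool-depletion risk is easy to demonstrate with a toy pool: with only two workers, two requests blocked on slow "I/O" force a third, instant request to wait for a thread to free up. The names and the two-worker size are of course contrived for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_io():
    time.sleep(0.2)   # stand-in for a blocking I/O call
    return "slow"

def fast():
    return "fast"     # a request that needs almost no work

def handle_requests():
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(slow_io), pool.submit(slow_io), pool.submit(fast)]
        start = time.monotonic()
        # The fast request cannot even start until a slow one releases
        # its worker thread, so it waits roughly the full sleep time.
        result = futures[2].result()
        waited = time.monotonic() - start
        return result, waited
```

With asynchronous I/O the two slow operations would not occupy worker threads while waiting, and the fast request would be served immediately.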
One thing I have noticed is that asynchronous I/O isn't the best option when dealing with very fast I/O operations. In that case the benefit of not keeping a thread busy while waiting for the I/O operation to complete is not very important and the fact that the operation is started on one thread and it is completed on another adds an overhead to the overall execution.
You can read more detailed research I have recently done on the topic of asynchronous I/O vs. multithreading here.
Answered by ely
To presume a speed improvement due to any form of multi-computing you must presume either that multiple CPU-based tasks are being executed concurrently upon multiple computing resources (generally processor cores) or else that not all of the tasks rely upon the concurrent usage of the same resource -- that is, some tasks may depend on one system subcomponent (disk storage, say) while some tasks depend on another (receiving communication from a peripheral device) and still others may require usage of processor cores.
The first scenario is often referred to as "parallel" programming. The second scenario is often referred to as "concurrent" or "asynchronous" programming, although "concurrent" is sometimes also used to refer to the case of merely allowing an operating system to interleave execution of multiple tasks, regardless of whether such execution must take place serially or if multiple resources can be used to achieve parallel execution. In this latter case, "concurrent" generally refers to the way that execution is written in the program, rather than from the perspective of the actual simultaneity of task execution.
It's very easy to speak about all of this with tacit assumptions. For example, some are quick to make a claim such as "Asynchronous I/O will be faster than multi-threaded I/O." This claim is dubious for several reasons. First, it could be the case that a given asynchronous I/O framework is implemented precisely with multi-threading, in which case they are one and the same and it doesn't make sense to say one concept "is faster than" the other.
Second, even in the case when there is a single-threaded implementation of an asynchronous framework (such as a single-threaded event loop) you must still make an assumption about what that loop is doing. For example, one silly thing you can do with a single-threaded event loop is request for it to asynchronously complete two different purely CPU-bound tasks. If you did this on a machine with only an idealized single processor core (ignoring modern hardware optimizations) then performing this task "asynchronously" wouldn't really perform any differently than performing it with two independently managed threads, or with just one lone process -- the difference might come down to thread context switching or operating system schedule optimizations, but if both tasks are going to the CPU it would be similar in either case.
It is useful to imagine a lot of the unusual or stupid corner cases you might run into.
"Asynchronous" does not have to be concurrent, for example just as above: you "asynchronously" execute two CPU-bound tasks on a machine with exactly one processor core.
Multi-threaded execution doesn't have to be concurrent: you spawn two threads on a machine with a single processor core, or ask two threads to acquire any other kind of scarce resource (imagine, say, a network database that can only establish one connection at a time). The threads' execution might be interleaved however the operating system scheduler sees fit, but their total runtime cannot be reduced (and will be increased by the thread context switching) on a single core (or more generally, if you spawn more threads than there are cores to run them, or have more threads asking for a resource than the resource can sustain). The same thing goes for multi-processing as well.
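The CPU-bound case is easy to try yourself. The sketch below runs the same two pure-CPU tasks serially and via two threads; timing both typically shows the threaded version no faster (in CPython the GIL makes this literal, since the two threads time-slice one core, but even without a GIL a single core cannot run them simultaneously).

```python
import threading

def count(n):
    # A purely CPU-bound task: no I/O, nothing to wait on.
    total = 0
    for i in range(n):
        total += i
    return total

def run_serial(n):
    # One task after the other on the calling thread.
    return [count(n), count(n)]

def run_threaded(n):
    # The same two tasks interleaved by the scheduler across two threads.
    results = [None, None]
    def work(i):
        results[i] = count(n)
    threads = [threading.Thread(target=work, args=(i,)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Both produce identical results; any difference in wall-clock time on one core comes down to scheduling and context-switch overhead, not a speed-up.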
So neither asynchronous I/O nor multi-threading has to offer any performance gain in terms of run time. They can even slow things down.
If you define a specific use case, however, like a specific program that both makes a network call to retrieve data from a network-connected resource like a remote database and also does some local CPU-bound computation, then you can start to reason about the performance differences between the two methods given a particular assumption about hardware.
The questions to ask: How many computational steps do I need to perform and how many independent systems of resources are there to perform them? Are there subsets of the computational steps that require usage of independent system subcomponents and can benefit from doing so concurrently? How many processor cores do I have and what is the overhead for using multiple processors or threads to complete tasks on separate cores?
If your tasks largely rely on independent subsystems, then an asynchronous solution might be good. If the number of threads needed to handle it would be large, such that context switching became non-trivial for the operating system, then a single-threaded asynchronous solution might be better.
Whenever the tasks are bound by the same resource (e.g. multiple tasks need to concurrently access the same network or local resource), multi-threading will probably introduce unsatisfactory overhead, and while single-threaded asynchrony may introduce less overhead, in such a resource-limited situation it too cannot produce a speed-up. In such a case, the only option (if you want a speed-up) is to make multiple copies of that resource available (e.g. multiple processor cores if the scarce resource is CPU; a better database that supports more concurrent connections if the scarce resource is a connection-limited database, etc.).
Another way to put it is: allowing the operating system to interleave the usage of a single resource for two tasks cannot be faster than merely letting one task use the resource while the other waits, then letting the second task finish serially. Further, the scheduler cost of interleaving means that in any real situation it actually creates a slowdown. It doesn't matter whether the interleaved usage is of the CPU, a network resource, a memory resource, a peripheral device, or any other system resource.
Answered by fissurezone
The main reason to use AIO is scalability. When viewed in the context of a few threads, the benefits are not obvious. But when the system scales to thousands of threads, AIO will offer much better performance. The caveat is that the AIO library should not introduce further bottlenecks.
Answered by user2826084
I am currently in the process of implementing asynchronous I/O on an embedded platform using protothreads. Non-blocking I/O makes the difference between running at 16000 fps and 160 fps. The biggest benefit of non-blocking I/O is that you can structure your code to do other things while the hardware does its thing. Even initialization of devices can be done in parallel.
Martin
Answered by Miguel
One possible implementation of non-blocking I/O is exactly what you said: a pool of background threads that do blocking I/O and notify the originating thread via some callback mechanism. In fact, this is how the AIO module in glibc works. Here are some vague details about the implementation.
While this is a good solution that is quite portable (as long as you have threads), the OS is typically able to service non-blocking I/O more efficiently. This Wikipedia article lists possible implementations besides the thread pool.
Answered by SmokestackLightning
In Node, multiple threads are being launched, but it's a layer down in the C++ run-time.
"So yes, NodeJS is single-threaded, but this is a half truth; actually it is event-driven and single-threaded with background workers. The main event loop is single-threaded, but most of the I/O works run on separate threads, because the I/O APIs in Node.js are asynchronous/non-blocking by design, in order to accommodate the event loop."
"Node.js is non-blocking which means that all functions ( callbacks ) are delegated to the event loop and they are ( or can be ) executed by different threads. That is handled by Node.js run-time."
“Node.js 是非阻塞的,这意味着所有函数(回调)都被委托给事件循环,它们由(或可以)由不同的线程执行。这是由 Node.js 运行时处理的。”
https://itnext.io/multi-threading-and-multi-process-in-node-js-ffa5bb5cde98
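The model quoted above (a single-threaded event loop with blocking work pushed to background workers) can be mimicked in Python with asyncio, which is a reasonable stand-in for illustration: the loop stays on one thread, while run_in_executor hands the blocking call to a pool thread and resumes the coroutine when the result is ready.

```python
import asyncio
import concurrent.futures

def blocking_read(path):
    # A blocking call, like the synchronous I/O Node hides in libuv's
    # worker threads.
    with open(path, "rb") as f:
        return f.read()

async def load(path):
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Runs on a pool thread; the single-threaded event loop is free
        # to service other callbacks until the result arrives.
        return await loop.run_in_executor(pool, blocking_read, path)
```

So the application-facing API is non-blocking even though threads do the waiting underneath, which is exactly the "half truth" the quote describes.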
The "Node is faster because it's non-blocking..." explanation is a bit of marketing, and this is a great question. It's efficient and scalable, but not exactly single-threaded.
Answered by Zhidian Du
Let me give you a counterexample where asynchronous I/O does not work. I am writing a proxy similar to the one below, using boost::asio. https://github.com/ArashPartow/proxy/blob/master/tcpproxy_server.cpp
However, in my case the incoming (client-side) messages are fast while the outgoing (server-side) messages are slow for one session; to keep up with the incoming speed, or to maximize the total proxy throughput, we have to use multiple sessions under one connection.
Thus this async I/O framework does not work anymore. We do need a thread pool to send to the server by assigning each thread a session.
Answered by Felice Pollano
The improvement, as far as I know, is that asynchronous I/O uses (I'm talking about the MS system, just to clarify) the so-called I/O completion ports. By using asynchronous calls, the framework leverages this architecture automatically, and this is supposed to be much more efficient than the standard threading mechanism. As a personal experience, I can say that you will noticeably feel your application more reactive if you prefer async calls instead of blocking threads.