Java Array Access Complexity
Notice: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license, link to the original, and attribute it to the original authors (not me): StackOverflow
Original: http://stackoverflow.com/questions/20615908/
Array Access Complexity
Asked by Nikhil
In Java, suppose I need to access array1[index] many times in the code.
Even for extremely large arrays, can I assume each single array access takes constant time?
Can this differ between languages or underlying architecture?
Accepted answer by T.J. Crowder
For large values of array1 size N can I assume each single array access (array1[index]) takes constant time?
In Java, yes. Also in C, C++, and C#, barring OS-level memory paging issues that are presumably out of scope.
Does this access time depend on language (Java vs. C++) or the underlying architecture?
It can, if the language in question calls things "arrays" that aren't really arrays in the usual "contiguous block of memory" sense. (JavaScript does that; its Array ([]) type is really a map; PHP uses the term "array" as shorthand for "associative array" [i.e., map].) So for a given environment/language, it's worth checking that the term isn't being misused or used loosely.
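As a rough illustration of that distinction (the class name and values below are made up for this sketch, not taken from the answer): a Java array is a fixed-length contiguous block with constant-time indexed access, while an integer-keyed map, which is roughly what JavaScript and PHP "arrays" behave like, can be sparse and is not a contiguous block.

    // Sketch: a Java array is a fixed-length contiguous block, indexed in O(1);
    // a Map<Integer, String> behaves more like the JavaScript/PHP notion of an "array".
    import java.util.HashMap;
    import java.util.Map;

    public class ArrayVsMap {
        public static void main(String[] args) {
            String[] realArray = new String[4];      // length fixed at creation
            realArray[2] = "c";                      // direct offset into the block, O(1)
            // realArray[10] = "x";                  // would throw ArrayIndexOutOfBoundsException

            Map<Integer, String> mapLike = new HashMap<>();
            mapLike.put(2, "c");                     // keys are hashed; storage is not contiguous
            mapLike.put(1_000_000, "sparse");        // "index" 1,000,000 without a million slots

            System.out.println(realArray[2] + " / " + mapLike.get(1_000_000));
        }
    }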
Answered by Adam Arold
Array lookup is always O(1). It does not depend on the size of the array. The basic idea behind arrays is that they contain objects/references of a fixed size, so you can just compute size * index to get the position of the object you are looking for.
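As a minimal sketch of that size * index arithmetic (the class name here is invented for the example, and the JVM is of course free to lay real arrays out however it likes internally), a ByteBuffer makes the byte offsets explicit:

    // Sketch of the base + elementSize * index idea. A ByteBuffer makes the byte
    // offsets explicit; this is an illustration, not how the JVM must store arrays.
    import java.nio.ByteBuffer;

    public class OffsetDemo {
        public static void main(String[] args) {
            int n = 10;
            ByteBuffer buf = ByteBuffer.allocate(n * Integer.BYTES); // one contiguous block

            for (int i = 0; i < n; i++) {
                buf.putInt(i * Integer.BYTES, i * i);  // element i lives at byte offset i * 4
            }

            int index = 7;
            int value = buf.getInt(index * Integer.BYTES); // one multiply + one read: O(1)
            System.out.println("element " + index + " = " + value);
        }
    }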
So it is not like a LinkedList (which is O(n)) or a HashMap (which is O(1) amortized).
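A rough sketch of that difference in practice (not a proper benchmark: no JIT warm-up, no JMH, and the class name is made up): indexed reads from an int[] are constant time each, while LinkedList.get(i) has to walk the list, so the second loop below is O(n^2) overall.

    // Rough sketch, not a proper benchmark (no JIT warm-up, no JMH).
    // Indexed reads from an int[] are O(1) each; LinkedList.get(i) walks the list,
    // so the second loop is O(n^2) overall.
    import java.util.LinkedList;
    import java.util.List;

    public class IndexedAccess {
        public static void main(String[] args) {
            int n = 50_000;
            int[] array = new int[n];
            List<Integer> linked = new LinkedList<>();
            for (int i = 0; i < n; i++) {
                array[i] = i;
                linked.add(i);
            }

            long t0 = System.nanoTime();
            long sumArray = 0;
            for (int i = 0; i < n; i++) sumArray += array[i];       // constant-time access

            long t1 = System.nanoTime();
            long sumLinked = 0;
            for (int i = 0; i < n; i++) sumLinked += linked.get(i); // linear-time access

            long t2 = System.nanoTime();
            System.out.println("int[]      : " + (t1 - t0) / 1_000_000.0 + " ms, sum=" + sumArray);
            System.out.println("LinkedList : " + (t2 - t1) / 1_000_000.0 + " ms, sum=" + sumLinked);
        }
    }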
I think this is the case in most languages. An exception might be JavaScript, so make sure you check the documentation for the language you are using.
Answered by Tim B
Accessing an element in an array is constant time (it just calculates an address offset). This behavior is consistent for all the languages you listed. Although it should not be assumed for all languages, it will apply to most.
There are some complexities in terms of cache misses/hits, pipelines, etc., but essentially it's constant time.
This is not the case for List, though. Some List implementations have different performance characteristics for different operations.
To expand on the complexities:
The question was "will large arrays get slower access". The correct answer is "yes".
It will stay O(1) in terms of the order of the access, but the actual access could take considerably longer. For example, it will become slower if the size of the array causes cache misses (so the data needs fetching from main memory into the processor's cache) and/or memory paging (so the data needs fetching from disk), although that is a property of any large data set, not of arrays specifically.
In most cases the difference is not worth worrying about; we are talking about fairly heavy optimization before you start worrying about things like cache misses. However, it is worth being aware of them, as this question illustrates:
Why is it faster to process a sorted array than an unsorted array?
A seemingly irrelevant detail (pre-sorting the array) made code that, on the face of it, should always take the same time run five times as fast, because of a detail of the way the processor works.
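A rough sketch of the cache effect mentioned above (the class name is invented for this illustration and the numbers will vary a lot between machines; this is not a rigorous benchmark): every access below is O(1), but the strided pass touches a new cache line on almost every read, while the sequential pass keeps reusing lines that were just fetched.

    // Every access below is O(1), but the strided pass touches a new cache line on
    // almost every read while the sequential pass reuses lines that were just fetched.
    // Numbers vary a lot between machines; this is an illustration, not a benchmark.
    public class CacheEffectSketch {
        public static void main(String[] args) {
            int n = 1 << 24;                 // ~16M ints (~64 MB), larger than typical CPU caches
            int[] data = new int[n];
            for (int i = 0; i < n; i++) data[i] = i;

            long t0 = System.nanoTime();
            long seq = 0;
            for (int i = 0; i < n; i++) seq += data[i];                 // sequential: cache friendly

            long t1 = System.nanoTime();
            long strided = 0;
            int stride = 4096;                                          // jump ~16 KB between reads
            for (int s = 0; s < stride; s++)
                for (int i = s; i < n; i += stride) strided += data[i]; // same reads, poor locality

            long t2 = System.nanoTime();
            System.out.println("sequential: " + (t1 - t0) / 1_000_000.0 + " ms, sum=" + seq);
            System.out.println("strided   : " + (t2 - t1) / 1_000_000.0 + " ms, sum=" + strided);
        }
    }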