Big O of JavaScript arrays
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must likewise follow the CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/11514308/
Big O of JavaScript arrays
Asked by Kendall Frey
Arrays in JavaScript are very easy to modify by adding and removing items. This somewhat masks the fact that in most languages arrays are fixed-size and require complex operations to resize. It seems that JavaScript makes it easy to write poorly performing array code. This leads to the question:
What performance (in terms of big O time complexity) can I expect from JavaScript implementations in regards to array performance?
I assume that all reasonable JavaScript implementations have at least the following big O's.
- Access - O(1)
- Appending - O(n)
- Prepending - O(n)
- Insertion - O(n)
- Deletion - O(n)
- Swapping - O(1)
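For concreteness, here is a minimal sketch of the operations being asked about, expressed with standard Array methods; the big-O notes in the comments simply restate the assumptions above and are not guarantees:

```js
const a = [1, 2, 3, 4];

const x = a[2];              // Access     - assumed O(1)
a.push(5);                   // Appending  - assumed O(n)
a.unshift(0);                // Prepending - assumed O(n)
a.splice(2, 0, 99);          // Insertion  - assumed O(n)
a.splice(2, 1);              // Deletion   - assumed O(n)
[a[0], a[1]] = [a[1], a[0]]; // Swapping   - assumed O(1)
```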
JavaScript lets you pre-fill an array to a certain size, using the `new Array(length)` syntax. (Bonus question: is creating an array in this manner O(1) or O(n)?) This is more like a conventional array, and if used as a pre-sized array, can allow O(1) appending. If circular buffer logic is added, you can achieve O(1) prepending. If a dynamically expanding array is used, O(log n) will be the average case for both of those.
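As a rough sketch of that idea, here is a fixed-capacity ring buffer over a pre-sized array; the class name and API are invented for illustration, and it assumes the capacity is known up front:

```js
// Illustrative only: O(1) append and O(1) prepend on a pre-sized backing array.
class RingBuffer {
  constructor(capacity) {
    this.buf = new Array(capacity); // pre-sized backing array
    this.capacity = capacity;
    this.start = 0;                 // index of the first element
    this.size = 0;
  }
  pushBack(value) {                 // O(1) append
    if (this.size === this.capacity) throw new Error('buffer full');
    this.buf[(this.start + this.size) % this.capacity] = value;
    this.size++;
  }
  pushFront(value) {                // O(1) prepend
    if (this.size === this.capacity) throw new Error('buffer full');
    this.start = (this.start - 1 + this.capacity) % this.capacity;
    this.buf[this.start] = value;
    this.size++;
  }
  get(i) {                          // O(1) access by logical index
    return this.buf[(this.start + i) % this.capacity];
  }
}
```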
Can I expect better performance for some things than my assumptions here? I don't expect anything is outlined in any specifications, but in practice it could be that all major implementations use optimized arrays behind the scenes. Are there dynamically expanding arrays or some other performance boosting algorithms at work?
P.S.
The reason I'm wondering this is because I'm researching some sorting algorithms, most of which seem to assume appending and deleting are O(1) operations when describing their overall big O.
Answered by Nick Johnson
NOTE: While this answer was correct in 2012, engines use very different internal representations for both objects and arrays today. This answer may or may not still be true.
In contrast to most languages, which implement arrays with, well, arrays, in Javascript Arrays are objects, and values are stored in a hashtable, just like regular object values. As such:
- Access - O(1)
- Appending - Amortized O(1) (sometimes resizing the hashtable is required; usually only insertion is required)
- Prepending - O(n) via `unshift`, since it requires reassigning all the indexes
- Insertion - Amortized O(1) if the value does not exist. O(n) if you want to shift existing values (e.g., using `splice`).
- Deletion - Amortized O(1) to remove a value, O(n) if you want to reassign indices via `splice`.
- Swapping - O(1)
In general, setting or unsetting any key in a dict is amortized O(1), and the same goes for arrays, regardless of what the index is. Any operation that requires renumbering existing values is O(n) simply because you have to update all the affected values.
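As a small illustration of the renumbering cost described above (the behavior shown is standard Array semantics; the complexity comments restate the answer's claims, not spec guarantees):

```js
const a = ['a', 'b', 'c', 'd'];

a[4] = 'e';     // setting a single key: amortized O(1), nothing else moves
a.unshift('z'); // every existing element gets a new index: O(n)
a.splice(2, 1); // removes one element and renumbers everything after it: O(n)
delete a[1];    // removes the value but leaves a hole; no renumbering happens
console.log(a); // [ 'z', <1 empty item>, 'c', 'd', 'e' ]
```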
Answered by Jonas Wilms
guarantee
There is no specified time complexity guarantee for any array operation. How arrays perform depends on the underlying data structure the engine chooses. Engines might also have different representations, and switch between them depending on certain heuristics. The initial array size might or might not be such a heuristic.
reality
For example, V8 uses (as of today) both hashtables and array lists to represent arrays. It also has various different representations for objects, so arrays and objects cannot be compared. Therefore array access is always better than O(n), and might even be as fast as a C++ array access. Appending is O(1), unless you reach the size of the data structure and it has to be resized (which is O(n)). Prepending is worse. Deletion can be even worse if you do something like `delete array[index]` (don't!), as that might force the engine to change its representation.
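For example (the representation change itself is engine-internal and not observable from JavaScript; the snippet only shows the user-visible difference between the two ways of deleting):

```js
const a = [1, 2, 3, 4];
a.splice(1, 1);      // a is now [1, 3, 4] - stays densely packed

const b = [1, 2, 3, 4];
delete b[1];         // b is now [1, <1 empty item>, 3, 4] - a hole appears,
                     // turning the array sparse ("holey"), which engines
                     // such as V8 handle with a slower representation
console.log(1 in b); // false - index 1 no longer exists
```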
advice
Use arrays for numeric datastructures. That's what they are meant for. That's what engines will optimize them for. Avoid sparse arrays (or if you have to, expect worse performance). Avoid arrays with mixed datatypes (as that makes internal representations more complex).
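A brief illustration of that advice (how much each pattern costs depends on the engine; the patterns themselves are plain JavaScript):

```js
// Engine-friendly: dense, single element type.
const good = [0.5, 1.5, 2.5, 3.5];

// Harder to optimize: sparse - indices 0..999 are holes.
const sparse = [];
sparse[1000] = 1;

// Harder to optimize: mixed element types in one array.
const mixed = [1, 'two', { three: 3 }, null];

// For purely numeric data, a typed array pins down the representation explicitly.
const nums = new Float64Array(1000);
nums[0] = 3.14;
```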
If you really want to optimize for a certain engine (and version), check its source code for the absolute answer.