How to calculate the algorithmic complexity of Python functions?

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA license, link to the original URL, and attribute it to the original authors (not me): StackOverflow

Original source: http://stackoverflow.com/questions/19556765/
Asked by kicker86
Answered by atomicinf
In general, there's no way to do this programmatically (you run into the halting problem).
If you have no idea where to start, you can gain some insight into how a function will perform by running some benchmarks (e.g. using the time module) with inputs of various sizes. You can even collect enough data to form a suspicion about what the runtime might be. But this won't give you a rigorous answer - for that, you need to prove mathematically that your suspected bound is in fact true.
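As a rough illustration of that approach, here is a minimal timing sketch; the choice of sorted and the particular input sizes are just placeholders for whatever function and sizes you care about:

```python
import random
import time

def measure(func, sizes):
    """Time func on a random list of each size and print the elapsed wall-clock time."""
    for n in sizes:
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        func(data)
        elapsed = time.perf_counter() - start
        print(f"n={n:>7}: {elapsed:.4f} s")

# Example: time the built-in sorted() on doubling input sizes.
measure(sorted, [10_000, 20_000, 40_000, 80_000])
```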
For instance, if I'm playing with a sorting function and observe that the time is increasing roughly proportionally to the square of the input size, I might suspect that the complexity of this sort is O(n**2). But this does not constitute proof - in particular, some algorithms that perform well under typical inputs have pathological inputs that result in very poor performance.
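One rough way to turn such timings into a suspicion is to double the input size and watch how the time scales: a factor of about 4 per doubling suggests O(n**2), a factor of about 2 suggests roughly linear growth. A sketch of that check, using a deliberately quadratic pair_count function as a hypothetical stand-in for the code under test:

```python
import random
import time

def pair_count(data):
    """Deliberately quadratic stand-in: count ordered pairs where data[i] < data[j]."""
    return sum(1 for x in data for y in data if x < y)

def time_one(func, n):
    """Time func on a random list of length n."""
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    func(data)
    return time.perf_counter() - start

prev = None
for n in (500, 1_000, 2_000, 4_000):
    elapsed = time_one(pair_count, n)
    if prev is not None:
        # For an O(n**2) function, doubling n should roughly quadruple the time.
        print(f"n={n}: time grew by a factor of {elapsed / prev:.1f}")
    prev = elapsed
```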
To prove that the bound is in fact O(n**2), I need to look at what the algorithm is doing in the worst case - in this example, I might be analysing a selection sort, which repeatedly sweeps across the entire unsorted portion of the list and picks the lowest unsorted number. It should be evident that I'm examining something like n*(n-1) == O(n**2) elements. If examining elements is a constant-time operation, and placing the final element in the correct place is also not worse than O(n**2), then it follows that my entire algorithm is O(n**2).
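To see where that n*(n-1)-style count comes from, here is a sketch of a selection sort instrumented to count element comparisons; the exact count it reports is n*(n-1)/2, which is still O(n**2):

```python
import random

def selection_sort(items):
    """Selection sort that also counts how many element comparisons it performs."""
    items = list(items)
    comparisons = 0
    for i in range(len(items)):
        smallest = i
        # Sweep the unsorted tail looking for the minimum element.
        for j in range(i + 1, len(items)):
            comparisons += 1
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items, comparisons

for n in (100, 200, 400):
    _, count = selection_sort(random.sample(range(10 * n), n))
    print(f"n={n}: {count} comparisons (n*(n-1)/2 = {n * (n - 1) // 2})")
```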
Answered by Rami
Take a look at the big O notation for various python operations here:
https://wiki.python.org/moin/TimeComplexity
This is a good refresher for the college stuff as well:
http://www.nikhilgopal.com/2012/04/refresher-on-big-o-notation-python.html
And finally a good concrete example here:
Answered by Dylan
If you're trying to get the big O notation for your own functions, you probably need variables keeping track of things like the runtime, the number of comparisons, the number of iterations, and so on, as well as some calculation investigating how these correspond to the size of your data. It's probably best to do this manually first, so you can check your understanding of an algorithm.
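A sketch of that kind of manual bookkeeping, using a hypothetical count_duplicates function and a plain dict of counters (the names here are illustrative, not a standard API):

```python
import random
import time

def count_duplicates(data, stats):
    """A deliberately naive O(n**2) duplicate counter, instrumented with simple counters."""
    stats["iterations"] = 0
    stats["comparisons"] = 0
    start = time.perf_counter()
    dupes = 0
    for i in range(len(data)):
        stats["iterations"] += 1
        for j in range(i + 1, len(data)):
            stats["comparisons"] += 1
            if data[i] == data[j]:
                dupes += 1
    stats["runtime"] = time.perf_counter() - start
    return dupes

for n in (500, 1_000, 2_000):
    stats = {}
    count_duplicates([random.randrange(n) for _ in range(n)], stats)
    # If comparisons / n**2 stays roughly constant as n grows, O(n**2) is a reasonable guess.
    print(f"n={n}: runtime={stats['runtime']:.3f}s, comparisons={stats['comparisons']}, "
          f"comparisons/n**2={stats['comparisons'] / n ** 2:.2f}")
```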