C: Time complexity of a recursive algorithm

Notice: this page is a Chinese–English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/2709106/

Date: 2020-09-02 05:17:19  Source: igfitidea

Time complexity of a recursive algorithm

Tags: c, algorithm, time-complexity

Asked by Passionate Learner

How can I calculate the time complexity of a recursive algorithm?


int pow1(int x,int n) {
    if(n==0){
        return 1;
    }
    else{
        return x * pow1(x, n-1);
    }
}

int pow2(int x,int n) {
    if(n==0){
        return 1;
    }
    else if(n&1){
        int p = pow2(x, (n-1)/2);
        return x * p * p;
    }
    else {
        int p = pow2(x, n/2);
        return p * p;
    }
}

Answered by phimuemue

Analyzing recursive functions (or even evaluating them) is a nontrivial task. A (in my opinion) good introduction can be found in Don Knuth's Concrete Mathematics.


However, let's analyse these examples now:


We define a function that gives us the time needed by the algorithm. Let's say that t(n) denotes the time needed by pow1(x,n), i.e. t is a function of n.


Then we can conclude that t(0) = c, because if we call pow1(x,0), we only have to check whether (n==0) and then return 1, which can be done in constant time (hence the constant c).


Now we consider the other case: n > 0. Here we obtain t(n) = d + t(n-1). That's because we again have to check whether n==0, compute pow1(x, n-1) (hence the t(n-1)), and multiply the result by x. Checking and multiplying can be done in constant time (constant d); the recursive computation of pow1 needs t(n-1).


Now we can "expand" the term t(n):


t(n) =
d + t(n-1) = 
d + (d + t(n-2)) = 
d + d + t(n-2) = 
d + d + d + t(n-3) =
... =
d + d + d + ... + t(0) =
d + d + d + ... + c

So, how long does it take until we reach t(0)? Since we start at t(n) and subtract 1 in each step, it takes n steps to reach t(n-n) = t(0). That, on the other hand, means that we pick up the constant d n times, and t(0) evaluates to c.


So we obtain:


t(n) =
...
d + d + d + ... + c =
n * d + c

So we get that t(n) = n * d + c, which is in O(n).


The analysis of pow2 can be done using the master theorem, since we can assume that the time functions of algorithms are monotonically increasing. Again, let t(n) be the time needed for the computation of pow2(x,n):


t(0) = c (since constant time is needed for the computation of pow2(x,0))

For n > 0 we get


        / t((n-1)/2) + d if n is odd  (d is constant cost)
t(n) = <
        \ t(n/2) + d     if n is even (d is constant cost)

The above can be "simplified" to:


t(n) = t(floor(n/2)) + d <= t(n/2) + d (since t is monotonically increasing)

So we obtain t(n) <= t(n/2) + d, which can be solved using the master theorem to give t(n) = O(log n) (see the section on applications to common algorithms in the Wikipedia article on the master theorem, example "binary search").


Answered by Michael Madsen

Let's just start with pow1, because that's the simplest one.


You have a function where a single run is done in O(1). (Condition checking, returning, and multiplication are constant time.)


What you have left is then your recursion. What you need to do is analyze how often the function would end up calling itself. In pow1, it'll happen N times. N*O(1)=O(N).


For pow2, it's the same principle - a single run of the function runs in O(1). However, this time you're halving N every time. That means it will run log2(N) times - effectively once per bit. log2(N)*O(1)=O(log(N)).


Something which might help you is to exploit the fact that recursion can always be expressed as iteration (not always very simply, but it's possible). We can express pow1 as


result = 1;
while(n != 0)
{
  result = result * x;
  n = n - 1;
}

Now you have an iterative algorithm instead, and you might find it easier to analyze it that way.


Answered by zdav

It can be a bit complex, but I think the usual way is to use Master's theorem.


Answered by zdav

The complexity of both functions, ignoring the recursion itself, is O(1).


For the first algorithm, pow1(x, n), the complexity is O(n) because the depth of recursion correlates linearly with n.


For the second, the complexity is O(log n): here we recurse approximately log2(n) times, and dropping the base 2 gives log n.


Answered by MattyW

So I'm guessing you're raising x to the power n. pow1 takes O(n).


You never change the value of x, but you subtract 1 from n each time until it reaches 0 (at which point you just return). This means that you will make n recursive calls.
