What does Python numpy.gradient do?

Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/24633618/


What does numpy.gradient do?

python, math, numpy

Asked by usual me

So I know what the gradient of a (mathematical) function is, so I feel like I should know what numpy.gradient does. But I don't. The documentation is not really helpful either:


Return the gradient of an N-dimensional array.


What is the gradient of an array? When is numpy.gradient useful?


Accepted answer by 4pie0

The gradient is computed using central differences in the interior and first differences at the boundaries.


and

The default distance is 1


This means that in the interior it is computed as


f'(x) ≈ [f(x + h) - f(x - h)] / (2h)

where h = 1.0


and at the boundaries


f'(x) ≈ [f(x + h) - f(x)] / h
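
A quick sketch checking both formulas against np.gradient (the sample array is my own choice, not from the answer):

import numpy as np

f = np.array([1.0, 4.0, 9.0, 16.0, 25.0])   # f(x) = (x+1)^2 sampled at x = 0..4
g = np.gradient(f)                           # default spacing h = 1.0
print(g)                                     # [3. 4. 6. 8. 9.]

# Interior points match the central difference (f[i+1] - f[i-1]) / (2h):
print((f[2:] - f[:-2]) / 2.0)                # [4. 6. 8.] -> g[1:-1]

# Boundaries match the one-sided first differences:
print(f[1] - f[0], f[-1] - f[-2])            # 3.0 9.0 -> g[0], g[-1]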

Answered by SiHa

Also in the documentation¹:


>>> y = np.array([1, 2, 4, 7, 11, 16], dtype=float)
>>> j = np.gradient(y)
>>> j 
array([ 1. ,  1.5,  2.5,  3.5,  4.5,  5. ])
  • Gradient is defined as (change in y)/(change in x).
  • x, here, is the index, so the difference between adjacent values is 1.
  • At the boundaries, the first difference is calculated. This means that at each end of the array, the gradient given is simply the difference between the two end values (divided by 1).
  • Away from the boundaries, the gradient for a particular index is given by taking the difference between the values on either side and dividing by 2.

So, the gradient of y, above, is calculated thus:


j[0] = (y[1]-y[0])/1 = (2-1)/1  = 1
j[1] = (y[2]-y[0])/2 = (4-1)/2  = 1.5
j[2] = (y[3]-y[1])/2 = (7-2)/2  = 2.5
j[3] = (y[4]-y[2])/2 = (11-4)/2 = 3.5
j[4] = (y[5]-y[3])/2 = (16-7)/2 = 4.5
j[5] = (y[5]-y[4])/1 = (16-11)/1 = 5
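
A short check (my own snippet, not part of the original answer) confirming that np.gradient reproduces the hand calculation:

import numpy as np

y = np.array([1, 2, 4, 7, 11, 16], dtype=float)
j = np.gradient(y)

# The same six quotients as the hand calculation above:
manual = [(y[1] - y[0]) / 1, (y[2] - y[0]) / 2, (y[3] - y[1]) / 2,
          (y[4] - y[2]) / 2, (y[5] - y[3]) / 2, (y[5] - y[4]) / 1]
print(np.allclose(j, manual))   # True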

You could find the minima of all the absolute values in the resulting array to find the turning points of a curve, for example.

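A minimal sketch of that idea (the sample curve x**3 - 3*x is my own choice; it has turning points at x = -1 and x = +1):

import numpy as np

x = np.linspace(-2, 2, 101)
y = x**3 - 3*x
g = np.gradient(y, x)            # pass x so the spacing is accounted for

# The smallest |gradient| values sit where the curve is flattest:
flattest = np.argsort(np.abs(g))[:4]
print(np.sort(x[flattest]))      # values clustered near -1 and +1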



¹ The array is actually called x in the example in the docs; I've changed it to y to avoid confusion.


Answered by Robert Zaremba

Think of an N-dimensional array as a matrix. Then the gradient is nothing other than matrix differentiation.


For a good explanation, look at the gradient description in the MATLAB documentation.


Answered by Robert McLean MD PhD

Here is what is going on. The Taylor series expansion guides us on how to approximate the derivative, given the values at nearby points. The simplest comes from the first-order Taylor series expansion for a C^2 function (two continuous derivatives)...


  • f(x+h) = f(x) + f'(x)h + f''(xi)h^2/2.

One can solve for f'(x)...


  • f'(x) = [f(x+h) - f(x)]/h + O(h).

Can we do better? Yes indeed. If we assume C^3, then the Taylor expansion is


  • f(x+h) = f(x) + f'(x)h + f''(x)h^2/2 + f'''(xi) h^3/6, and
  • f(x-h) = f(x) - f'(x)h + f''(x)h^2/2 - f'''(xi) h^3/6.

Subtracting these (both the h^0 and h^2 terms drop out!) and solving for f'(x) gives:


  • f'(x) = [f(x+h) - f(x-h)]/(2h) + O(h^2).
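A small numerical illustration of those error terms (the function and step size are my own choices, not from the answer):

import numpy as np

f, fprime = np.sin, np.cos
x, h = 1.0, 0.01

forward = (f(x + h) - f(x)) / h             # first difference: O(h) error
central = (f(x + h) - f(x - h)) / (2 * h)   # central difference: O(h^2) error

print(abs(forward - fprime(x)))   # ~4e-3
print(abs(central - fprime(x)))   # ~9e-6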

So, if we have a discretized function defined on equidistant partitions x = x_0, x_0 + h (= x_1), ..., x_n = x_0 + n*h, then numpy gradient will yield a "derivative" array using the first-order estimate at the ends and the better, central estimates in the middle.


Example 1. If you don't specify any spacing, the interval is assumed to be 1, so if you call


f = np.array([5, 7, 4, 8])

what you are saying is that f(0) = 5, f(1) = 7, f(2) = 4, and f(3) = 8. Then


np.gradient(f) 

will be: f'(0) = (7 - 5)/1 = 2, f'(1) = (4 - 5)/(2*1) = -0.5, f'(2) = (8 - 7)/(2*1) = 0.5, f'(3) = (8 - 4)/1 = 4.

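You can check this directly (a one-liner of my own):

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
print(np.gradient(f))   # [ 2.  -0.5   0.5   4. ]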

Example 2. If you specify a single spacing, the spacing is uniform but not 1.


For example, if you call


np.gradient(f, 0.5)

this is saying that h = 0.5, not 1, i.e., the function is really f(0) = 5, f(0.5) = 7, f(1.0) = 4, f(1.5) = 8. The net effect is to replace h = 1 with h = 0.5 and all the results will be doubled.

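Again easy to verify (my own snippet): halving h doubles every entry.

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
print(np.gradient(f))        # [ 2.  -0.5   0.5   4. ]
print(np.gradient(f, 0.5))   # [ 4.  -1.    1.    8. ]  -- each entry doubled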

Example 3. Suppose the discretized function f(x) is not defined on uniformly spaced intervals, for instance f(0) = 5, f(1) = 7, f(3) = 4, f(3.5) = 8. Then there is a messier discretized differentiation formula that the numpy gradient function uses, and you will get the discretized derivatives by calling


np.gradient(f, np.array([0,1,3,3.5]))
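
Passing a coordinate array like this needs NumPy 1.13 or later. A sketch of what comes out (the breakdown in the comments is my own reading of it):

import numpy as np

f = np.array([5, 7, 4, 8], dtype=float)
x = np.array([0, 1, 3, 3.5])
print(np.gradient(f, x))   # approximately [2.  0.8333  6.1  8.]

# The ends are still one-sided differences: (7-5)/1 = 2 and (8-4)/0.5 = 8;
# the interior points use a weighted difference that accounts for the
# unequal spacing on either side.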

Lastly, if your input is a 2d array, then you are thinking of a function f of x, y defined on a grid. The numpy gradient will output the arrays of "discretized" partial derivatives in x and y.

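A minimal 2-D sketch (the array values are my own): np.gradient returns one array per axis, the axis-0 (row-direction) derivative first, then axis 1.

import numpy as np

z = np.array([[1., 2., 6.],
              [3., 4., 5.]])

dz_d0, dz_d1 = np.gradient(z)   # one output per axis

print(dz_d0)   # [[ 2.  2. -1.]
               #  [ 2.  2. -1.]]   first differences down the columns
print(dz_d1)   # [[ 1.   2.5  4. ]
               #  [ 1.   1.   1. ]] central/first differences along the rows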