numpy second derivative of an n-dimensional array

Warning: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow.

Original question: http://stackoverflow.com/questions/31206443/
Asked by Faultier
I have a set of simulation data in which I would like to find the lowest slope in n dimensions. The spacing of the data is constant along each dimension, but differs between dimensions (I could change that for the sake of simplicity).
I can live with some numerical inaccuracy, especially towards the edges. I would strongly prefer not to generate a spline and use its derivative; working on the raw values would be sufficient.
It is possible to calculate the first derivative with numpy using the numpy.gradient() function.
import numpy as np
data = np.random.rand(30,50,40,20)
first_derivative = np.gradient(data)
# second_derivative = ??? <--- there be kudos (:
This is a comment regarding the Laplacian versus the Hessian matrix; it is no longer a question, but is meant to help future readers understand.
I use a 2D function as a test case to determine the 'flattest' area below a threshold. The following pictures show the difference in results between using the minimum of second_derivative_abs = np.abs(laplace(data)) and the minimum of the following:
second_derivative_abs = np.zeros(data.shape)
hess = hessian(data)
# based on the function description; would [-1] be more appropriate?
for i in hess[0]:  # calculate a norm
    for j in i[0]:
        second_derivative_abs += j*j
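As an aside, a squared Frobenius-style norm of the Hessian at each point can also be computed without explicit loops; a minimal sketch, assuming the hessian() function from the accepted answer below:

# sum the squared second derivatives over the two matrix axes,
# leaving one norm value per data point
second_derivative_abs = np.sum(hessian(data)**2, axis=(0, 1))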
The color scale depicts the function's values, the arrows depict the first derivative (gradient), the red dot marks the point closest to zero, and the red line marks the threshold.
The generator function for the data was ( 1 - np.exp(-10*xi**2 - yi**2) )/100.0, with xi, yi generated with np.meshgrid.
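A sketch of how such a test surface could be built (the grid extent and resolution here are my assumptions; only the formula comes from the question):

import numpy as np

# hypothetical grid; the original post does not state the ranges
xi, yi = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
data = (1 - np.exp(-10*xi**2 - yi**2)) / 100.0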
Laplace: (figure)

Hessian: (figure)
Accepted answer by rth
The second derivatives are given by the Hessian matrix. Here is a Python implementation for ND arrays, which consists of applying np.gradient twice and storing the output appropriately:
import numpy as np

def hessian(x):
    """
    Calculate the Hessian matrix with finite differences.

    Parameters:
        x : ndarray

    Returns:
        an array of shape (x.ndim, x.ndim) + x.shape
        where array[i, j, ...] corresponds to the second derivative x_ij
    """
    x_grad = np.gradient(x)
    hessian = np.empty((x.ndim, x.ndim) + x.shape, dtype=x.dtype)
    for k, grad_k in enumerate(x_grad):
        # iterate over dimensions
        # apply gradient again to every component of the first derivative
        tmp_grad = np.gradient(grad_k)
        for l, grad_kl in enumerate(tmp_grad):
            hessian[k, l, :, :] = grad_kl
    return hessian
x = np.random.randn(100, 100, 100)
hessian(x)
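A quick sanity check on a function whose Hessian is known in closed form (my test, assuming unit grid spacing; rows near the edges are excluded because np.gradient falls back to one-sided differences there):

# f(u, v) = u**2 + 3*u*v has constant second derivatives:
# f_uu = 2, f_uv = f_vu = 3, f_vv = 0
u, v = np.meshgrid(np.arange(20.0), np.arange(20.0), indexing='ij')
H = hessian(u**2 + 3*u*v)                # shape (2, 2, 20, 20)
print(np.allclose(H[0, 0][2:-2], 2.0))   # True
print(np.allclose(H[0, 1][2:-2], 3.0))   # True
print(np.allclose(H[1, 1][2:-2], 0.0))   # True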
Note that if you are only interested in the magnitude of the second derivatives, you could use the Laplace operator implemented by scipy.ndimage.filters.laplace, which is the trace (sum of diagonal elements) of the Hessian matrix.
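A minimal sketch of that route (note that scipy.ndimage applies a compact [1, -2, 1] stencil along each axis, so its values differ slightly from the trace of the hessian() above, which applies np.gradient twice; in recent SciPy the function lives at scipy.ndimage.laplace):

import numpy as np
from scipy.ndimage import laplace

data = np.random.rand(30, 50, 40, 20)
second_derivative_abs = np.abs(laplace(data))
# index of the 'flattest' point by this measure
flattest = np.unravel_index(np.argmin(second_derivative_abs), data.shape)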
Taking the smallest element of the Hessian matrix could be used to estimate the lowest slope in any spatial direction.
Answer by SENHAJI RHAZI Hamza
You can see the Hessian matrix as the gradient of the gradient: you apply the gradient a second time to each component of the first gradient. Here is a Wikipedia link defining the Hessian matrix, which makes clear that it is a gradient of a gradient. Below is a Python implementation defining the gradient and then the Hessian:
import numpy as np

# Gradient function
def gradient_f(x, f):
    assert (x.shape[0] >= x.shape[1]), "the vector should be a column vector"
    x = x.astype(float)
    N = x.shape[0]
    gradient = []
    for i in range(N):
        eps = abs(x[i]) * np.finfo(np.float32).eps
        xx0 = 1. * x[i]
        f0 = f(x)
        x[i] = x[i] + eps
        f1 = f(x)
        gradient.append(float(f1 - f0) / eps)  # np.asscalar was removed in newer NumPy
        x[i] = xx0
    return np.array(gradient).reshape(x.shape)

# Hessian matrix
def hessian(x, the_func):
    N = x.shape[0]
    hessian = np.zeros((N, N))
    gd_0 = gradient_f(x, the_func)
    eps = np.linalg.norm(gd_0) * np.finfo(np.float32).eps
    for i in range(N):
        xx0 = 1. * x[i]
        x[i] = xx0 + eps
        gd_1 = gradient_f(x, the_func)
        hessian[:, i] = ((gd_1 - gd_0) / eps).reshape(x.shape[0])
        x[i] = xx0
    return hessian
As a test, the Hessian matrix of (x^2 + y^2) is 2 * I_2, where I_2 is the identity matrix of dimension 2.
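A sketch of running that test (my code; the shape (2, 1) column vector follows the assertion in gradient_f, and the components are kept nonzero because gradient_f scales its step by abs(x[i])):

def f(v):
    # f(x, y) = x**2 + y**2
    return v[0, 0]**2 + v[1, 0]**2

x0 = np.array([[1.0], [2.0]])  # column vector
print(hessian(x0, f))          # approximately [[2, 0], [0, 2]]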
Answer by denis
Slopes, Hessians and Laplacians are related, but are 3 different things. Start with 2D: a function f(x, y) of 2 variables, e.g. a height map of a range of hills:
- Slopes, aka gradients, are direction vectors: a direction and length at each point x y. This can be given by 2 numbers dx dy in cartesian coordinates, or an angle θ and length sqrt( dx^2 + dy^2 ) in polar coordinates. Over a whole range of hills, we get a vector field.
- Hessians describe curvature near x y, e.g. a paraboloid or a saddle, with 4 numbers: dxx dxy dyx dyy.
- A Laplacian is 1 number, dxx + dyy, at each point x y. Over a range of hills, we get a scalar field. (Functions or hills with Laplacian = 0 are particularly smooth.)
Slopes are linear fits and Hessians quadratic fits, for tiny steps h near a point xy:
f(xy + h)  ~  f(xy)
              + slope . h    -- dot product, linear in both slope and h
              + h' H h / 2   -- quadratic in h
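A tiny numeric check of that expansion (my example function; the fit is exact here because the test function is itself quadratic):

import numpy as np

f = lambda p: p[0]**2 + p[0]*p[1]           # hypothetical test function
xy = np.array([1.0, 2.0])
h = np.array([1e-3, -2e-3])
slope = np.array([2*xy[0] + xy[1], xy[0]])  # analytic gradient at xy
H = np.array([[2.0, 1.0], [1.0, 0.0]])      # analytic Hessian (constant)
approx = f(xy) + slope @ h + h @ H @ h / 2
print(np.isclose(approx, f(xy + h)))        # True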
Here xy, slope and h are vectors of 2 numbers, and H is a matrix of 4 numbers dxx dxy dyx dyy.
N-d is similar: slopes are direction vectors of N numbers, Hessians are matrices of N^2 numbers, and Laplacians 1 number, at each point.
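In numpy terms, the shapes of the three objects make the distinction concrete; a sketch reusing the hessian() from the accepted answer:

import numpy as np
from scipy.ndimage import laplace

data = np.random.rand(30, 50, 40, 20)  # N = 4 dimensions
grads = np.gradient(data)              # list of N arrays: a vector field
hess = hessian(data)                   # shape (4, 4, 30, 50, 40, 20): N^2 numbers per point
lap = laplace(data)                    # shape (30, 50, 40, 20): 1 number per point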
(You might find better answers over on math.stackexchange.)