Python finite difference functions?
Disclaimer: this page is a translated copy of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original: http://stackoverflow.com/questions/18991408/
Asked by Tim Supinie
I've been looking around in Numpy/Scipy for modules containing finite difference functions. The closest thing I've found is numpy.gradient(), which is fine for first-order finite differences of second-order accuracy, but not if you want higher-order derivatives or more accurate methods. I haven't found many dedicated modules for this sort of thing; most people seem to roll their own as they need them. So my question is whether anyone knows of any modules (either part of Numpy/Scipy or third-party) that are specifically dedicated to higher-order (in both accuracy and derivative) finite difference methods. I've got my own code that I'm working on, but it's currently rather slow, and I'm not going to attempt to optimize it if something is already available.
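For context, a minimal sketch of what numpy.gradient does give you (second-order central differences in the interior; nothing higher-order is exposed):

```python
import numpy as np

# sin(x) on a uniform grid; the exact derivative is cos(x).
x = np.linspace(0, 2 * np.pi, 100)
f = np.sin(x)
dx = x[1] - x[0]

# Central differences (2nd-order accurate) in the interior,
# one-sided differences at the two endpoints.
df = np.gradient(f, dx)

# The error is O(dx**2); there is no option for higher derivative
# orders or higher accuracy orders.
err = np.max(np.abs(df - np.cos(x)))
```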
Note that I am talking about finite differences, not derivatives. I've seen both scipy.misc.derivative() and Numdifftools, which take the derivative of an analytical function, which I don't have.
Accepted answer by askewchan
One way to do this quickly is by convolution with the derivative of a Gaussian kernel. The simple case is a convolution of your array with [1, -1], which gives exactly the simple finite difference formula. Beyond that, (f*g)' = f'*g = f*g', where * is convolution, so you end up with your derivative convolved with a plain Gaussian. Of course this smooths your data a bit, which can be minimized by choosing the smallest reasonable kernel.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
#Data:
x = np.linspace(0,2*np.pi,100)
f = np.sin(x) + .02*(np.random.rand(100)-.5)
#Normalization:
dx = x[1] - x[0] # use np.diff(x) if x is not uniform
dxdx = dx**2
#First derivatives:
df = np.diff(f) / dx
cf = np.convolve(f, [1,-1]) / dx
gf = ndimage.gaussian_filter1d(f, sigma=1, order=1, mode='wrap') / dx
#Second derivatives:
ddf = np.diff(f, 2) / dxdx
ccf = np.convolve(f, [1, -2, 1]) / dxdx
ggf = ndimage.gaussian_filter1d(f, sigma=1, order=2, mode='wrap') / dxdx
#Plotting:
plt.figure()
plt.plot(x, f, 'k', lw=2, label='original')
plt.plot(x[:-1], df, 'r.', label='np.diff, 1')
plt.plot(x, cf[:-1], 'r--', label='np.convolve, [1,-1]')
plt.plot(x, gf, 'r', label='gaussian, 1')
plt.plot(x[:-2], ddf, 'g.', label='np.diff, 2')
plt.plot(x, ccf[:-2], 'g--', label='np.convolve, [1,-2,1]')
plt.plot(x, ggf, 'g', label='gaussian, 2')
plt.legend(loc='best')
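As a sanity check on the [1, -1] kernel above (my addition, not part of the original answer), the interior of the full convolution reproduces np.diff exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(10)

# Full convolution with [1, -1] yields f[k] - f[k-1]; the first and
# last output samples are edge artifacts, so the interior equals np.diff(f).
cf = np.convolve(f, [1, -1])
interior = cf[1:-1]
```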
Since you mentioned np.gradient I assumed you had at least 2d arrays, so the following applies to that: this is built into the scipy.ndimage package if you want to do it for ndarrays. Be cautious though, because this doesn't give you the full gradient: a scalar order applies the derivative along every axis, so for a 2d array you get a mixed partial rather than the gradient. Someone with better expertise will hopefully speak up.
Here's an example:
from scipy import ndimage
x = np.linspace(0,2*np.pi,100)
sine = np.sin(x)
im = sine * sine[...,None]
d1 = ndimage.gaussian_filter(im, sigma=5, order=1, mode='wrap')
d2 = ndimage.gaussian_filter(im, sigma=5, order=2, mode='wrap')
plt.figure()
plt.subplot(131)
plt.imshow(im)
plt.title('original')
plt.subplot(132)
plt.imshow(d1)
plt.title('first derivative')
plt.subplot(133)
plt.imshow(d2)
plt.title('second derivative')
Use of gaussian_filter1d allows you to take a directional derivative along a certain axis:
imx = im * x
d2_0 = ndimage.gaussian_filter1d(imx, axis=0, sigma=5, order=2, mode='wrap')
d2_1 = ndimage.gaussian_filter1d(imx, axis=1, sigma=5, order=2, mode='wrap')
plt.figure()
plt.subplot(131)
plt.imshow(imx)
plt.title('original')
plt.subplot(132)
plt.imshow(d2_0)
plt.title('derivative along axis 0')
plt.subplot(133)
plt.imshow(d2_1)
plt.title('along axis 1')
The first set of results above are a little confusing to me (peaks in the original show up as peaks in the second derivative when the curvature should point down). Without looking further into how the 2d version works, I can only really recomend the 1d version. If you want the magnitude, simply do something like:
d2_mag = np.sqrt(d2_0**2 + d2_1**2)
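For the magnitude of the first derivative (the gradient) itself, here is a sketch of my own using one gaussian_filter1d per axis, rather than the all-axes gaussian_filter:

```python
import numpy as np
from scipy import ndimage

x = np.linspace(0, 2 * np.pi, 100)
im = np.sin(x) * np.sin(x)[..., None]

# One smoothed first derivative per axis...
g0 = ndimage.gaussian_filter1d(im, sigma=5, order=1, mode='wrap', axis=0)
g1 = ndimage.gaussian_filter1d(im, sigma=5, order=1, mode='wrap', axis=1)

# ...combined into the gradient magnitude.
grad_mag = np.sqrt(g0**2 + g1**2)
```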
Answer by Kevin Johnson
Definitely like the answer given by askewchan; this is a great technique. However, if you need to use numpy.convolve, I wanted to offer this one little fix. Instead of doing:
#First derivatives:
cf = np.convolve(f, [1,-1]) / dx
...
#Second derivatives:
ccf = np.convolve(f, [1, -2, 1]) / dxdx
...
plt.plot(x, cf[:-1], 'r--', label='np.convolve, [1,-1]')
plt.plot(x, ccf[:-2], 'g--', label='np.convolve, [1,-2,1]')
...use the 'same' option in numpy.convolve, like this:
#First derivatives:
cf = np.convolve(f, [1,-1],'same') / dx
...
#Second derivatives:
ccf = np.convolve(f, [1, -2, 1],'same') / dxdx
...
plt.plot(x, cf, 'rx', label='np.convolve, [1,-1]')
plt.plot(x, ccf, 'gx', label='np.convolve, [1,-2,1]')
...to avoid off-by-one index errors.
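To see the shapes line up, a small check (mine, not the answerer's):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
f = np.sin(x)
dx = x[1] - x[0]

# Default 'full' mode pads the output by len(kernel) - 1 samples;
# 'same' keeps len(output) == len(f), so it plots directly against x.
cf_full = np.convolve(f, [1, -1]) / dx
cf_same = np.convolve(f, [1, -1], 'same') / dx
```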
Also be careful about the x-index when plotting. The points from numpy.diff and numpy.convolve must be the same! To fix the off-by-one errors (using my 'same' code) use:
plt.plot(x, f, 'k', lw=2, label='original')
plt.plot(x[1:], df, 'r.', label='np.diff, 1')
plt.plot(x, cf, 'rx', label='np.convolve, [1,-1]')
plt.plot(x, gf, 'r', label='gaussian, 1')
plt.plot(x[1:-1], ddf, 'g.', label='np.diff, 2')
plt.plot(x, ccf, 'gx', label='np.convolve, [1,-2,1]')
plt.plot(x, ggf, 'g', label='gaussian, 2')
Edit: corrected an auto-complete typo with s/bot/by/g.
Answer by lanery
Another approach is to differentiate an interpolation of the data. This was suggested by unutbu, but I did not see the method used here or in any of the linked questions. UnivariateSpline from scipy.interpolate, for example, has a useful built-in derivative method.
import numpy as np
from scipy.interpolate import UnivariateSpline
import matplotlib.pyplot as plt
# data
n = 1000
x = np.linspace(0, 100, n)
y = 0.5 * np.cumsum(np.random.randn(n))
k = 5 # 5th degree spline
s = n - np.sqrt(2*n) # smoothing factor
spline_0 = UnivariateSpline(x, y, k=k, s=s)
spline_1 = spline_0.derivative(n=1)  # reuse the fit instead of refitting
spline_2 = spline_0.derivative(n=2)
# plot data, spline fit, and derivatives
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, 'ko', ms=2, label='data')
ax.plot(x, spline_0(x), 'k', label='5th deg spline')
ax.plot(x, spline_1(x), 'r', label='1st order derivative')
ax.plot(x, spline_2(x), 'g', label='2nd order derivative')
ax.legend(loc='best')
ax.grid()
Note the zero-crossings of the first derivative (red curve) at the peaks and troughs of the spline fit (black curve).
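Those zero-crossings can also be located programmatically. A sketch of mine (not part of the original answer): fit with k=4 so that the first derivative is a cubic spline, since UnivariateSpline.roots() only supports cubic splines:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

n = 1000
x = np.linspace(0, 100, n)
rng = np.random.default_rng(1)
y = 0.5 * np.cumsum(rng.standard_normal(n))

# k=4 so that .derivative() yields a cubic (k=3) spline,
# which is the only degree roots() supports.
spline = UnivariateSpline(x, y, k=4, s=n - np.sqrt(2 * n))
extrema = spline.derivative(n=1).roots()  # x-locations of peaks/troughs
```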
Answer by owelk
You may want to take a look at the findiff project. I have tried it out myself, and it lets you conveniently take derivatives of numpy arrays of any dimension, any derivative order, and any desired accuracy order. The project website says that it features:
- Differentiate arrays of any number of dimensions along any axis
- Partial derivatives of any desired order
- Standard operators from vector calculus like gradient, divergence and curl
- Can handle uniform and non-uniform grids
- Can handle arbitrary linear combinations of derivatives with constant and variable coefficients
- Accuracy order can be specified
- Fully vectorized for speed
- Calculate raw finite difference coefficients for any order and accuracy for uniform and non-uniform grids