Python: How do I use a minimization function in scipy with constraints
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must likewise follow the CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
原文地址: http://stackoverflow.com/questions/18767657/
How do I use a minimization function in scipy with constraints
Asked by anand
I need some help regarding optimisation functions in python (scipy). The problem is optimizing f(x) where x=[a,b,c...n]. The constraints are that the values of a, b, etc. should be between 0 and 1, and sum(x)==1. The scipy.optimize.minimize function seems best as it requires no derivatives. How do I pass the arguments?
Creating an ndarray using permutations takes too long. My present code is below:
import itertools
import numpy as np

# enumerate candidate 6-element weight vectors on a 0.1 grid
candidates = itertools.permutations([0.0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 1.0], 6)
all_legal = []
for i in candidates:
    # compare with a tolerance: floating-point sums rarely equal 1 exactly
    if abs(np.sum(i) - 1) < 1e-9:
        #print np.sum(i)
        all_legal.append(i)
print(len(all_legal))

lmax = 0
sharpeMax = 0
for i in all_legal:
    if sharpeMax < getSharpe(i):
        sharpeMax = getSharpe(i)
        lmax = i
Answered by CT Zhu
Check the .minimize docstring:
scipy.optimize.minimize(fun, x0, args=(), method='BFGS', jac=None, hess=None, hessp=None, \
bounds=None, constraints=(), tol=None, callback=None, options=None)
What matters the most in your case will be the bounds. When you want to constrain your parameter in [0,1] (or (0,1)?), you need to define it for each variable, such as:
bounds=((0,1), (0,1).....)
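The "....." above stands for one (0, 1) pair per variable; for n variables the tuple can also be built programmatically rather than written out (n=6 here, matching the question's 6-element x):

```python
n = 6  # number of variables, matching the question's 6-element x
bounds = tuple((0, 1) for _ in range(n))
print(bounds[:2])  # first two pairs
```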
Now, the other part: sum(x)==1. There may be more elegant ways to do it, but consider this: instead of minimizing f(x), you minimize h=lambda x: f(x)+g(x), a new function that is essentially f(x)+g(x), where g(x) is a function that reaches its minimum when sum(x)==1. Such as g=lambda x: (sum(x)-1)**2.
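A minimal sketch of this penalty approach, using a toy quadratic objective as a stand-in for the real f(x) (both the toy f and the penalty weight of 100 are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective standing in for the real f(x) (assumption for illustration).
f = lambda x: np.sum((x - 0.3) ** 2)
# Penalty term: zero exactly when sum(x) == 1, positive otherwise.
g = lambda x: (np.sum(x) - 1) ** 2
# Weighted sum; a larger weight pulls the minimizer closer to sum(x) == 1.
h = lambda x: f(x) + 100 * g(x)

res = minimize(h, np.ones(6) / 6., bounds=[(0, 1)] * 6)
print(res.x, np.sum(res.x))  # sum(res.x) ends up close to 1
```

Note the constraint is only approximately satisfied; increasing the penalty weight tightens it at the cost of a harder-conditioned problem.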
The minimum of h(x) is reached when both f(x) and g(x) are at their minimum. Sort of a case of the Lagrange multiplier method: http://en.wikipedia.org/wiki/Lagrange_multiplier
Answered by Daniel
You can do a constrained optimization with COBYLA or SLSQP, as it says in the docs.
import numpy as np
from scipy.optimize import minimize

start_pos = np.ones(6)*(1/6.)  # or whatever

# Says one minus the sum of all variables must be zero
cons = ({'type': 'eq', 'fun': lambda x: 1 - sum(x)})

# Required to keep each value in [0, 1]
bnds = tuple((0, 1) for x in start_pos)
Combine these into the minimization function.
res = minimize(getSharpe, start_pos, method='SLSQP', bounds=bnds, constraints=cons)
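Since getSharpe is not defined in the thread, here is a self-contained sketch of the same recipe with a stand-in linear objective (the "returns" vector and the negate-to-maximize trick are assumptions for illustration; SLSQP itself always minimizes):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective: maximize a weighted score by minimizing its negation
# (the score weights are an assumption for illustration).
weights = np.arange(1, 7) / 10.  # 0.1, 0.2, ..., 0.6

def neg_score(x):
    return -np.dot(x, weights)

start_pos = np.ones(6) * (1 / 6.)
cons = ({'type': 'eq', 'fun': lambda x: 1 - sum(x)})
bnds = tuple((0, 1) for _ in start_pos)

res = minimize(neg_score, start_pos, method='SLSQP', bounds=bnds, constraints=cons)
print(res.x)  # all weight concentrates on the highest-scoring component
```

Unlike the penalty approach above, SLSQP enforces the equality constraint directly, so sum(res.x) == 1 holds to solver tolerance.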