Python: how to display the progress of a scipy.optimize function?

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/16739065/

How to display progress of scipy.optimize function?

python, numpy, scipy, output, mathematical-optimization

Asked by Roman

I use scipy.optimize to minimize a function of 12 arguments.

I started the optimization a while ago and am still waiting for results.

Is there a way to force scipy.optimize to display its progress (like how much is already done, what the current best point is)?

Answered by Joel Vroom

As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately, leastsq does not permit this at the moment). Below is an example using the "fmin_bfgs" routine, where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.

import numpy as np
from scipy.optimize import fmin_bfgs

Nfeval = 1

def rosen(X): #Rosenbrock function
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def callbackF(Xi):
    """Called by fmin_bfgs after each iteration with the current parameter vector Xi."""
    global Nfeval
    print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], rosen(Xi)))
    Nfeval += 1

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              callback=callbackF, 
              maxiter=2000, 
              full_output=True, 
              retall=False)

The output looks like this:

Iter    X1          X2          X3         f(X)      
   1    1.031582    1.062553    1.130971    0.005550
   2    1.031100    1.063194    1.130732    0.004973
   3    1.027805    1.055917    1.114717    0.003927
   4    1.020343    1.040319    1.081299    0.002193
   5    1.005098    1.009236    1.016252    0.000739
   6    1.004867    1.009274    1.017836    0.000197
   7    1.001201    1.002372    1.004708    0.000007
   8    1.000124    1.000249    1.000483    0.000000
   9    0.999999    0.999999    0.999998    0.000000
  10    0.999997    0.999995    0.999989    0.000000
  11    0.999997    0.999995    0.999989    0.000000
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 11
         Function evaluations: 85
         Gradient evaluations: 17

At least this way you can watch as the optimizer tracks the minimum.

Answered by Bitwise

Which minimization function are you using exactly?

Most of the functions have progress reporting built in, including multiple levels of reports showing exactly the data you want, via the disp flag (for example, see scipy.optimize.fmin_l_bfgs_b).
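
For instance, a rough sketch with fmin_l_bfgs_b (reusing the three-variable Rosenbrock function from the first answer; exactly how much output disp produces depends on your scipy version):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def rosen(X):  # Rosenbrock function in three variables
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1])
# disp > 0 switches on L-BFGS-B's built-in progress report
# (it is passed through to the iprint option of the underlying routine)
xopt, fopt, info = fmin_l_bfgs_b(rosen, x0, approx_grad=True, disp=1)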

Answered by u5488085

Following @Joel's example, there is a neat and efficient way to do a similar thing. The following example shows how to get rid of global variables and callback functions, and how to avoid re-evaluating the target function multiple times.

import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info):  # Rosenbrock function
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
          (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

    # display information every 100 function evaluations
    if info['Nfeval'] % 100 == 0:
        print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(info['Nfeval'], X[0], X[1], X[2], res))
    info['Nfeval'] += 1
    return res

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              args=({'Nfeval':0},), 
              maxiter=1000, 
              full_output=True, 
              retall=False,
              )

This will generate output like:

Iter    X1          X2          X3         f(X)     
   0    1.100000    1.100000    1.100000    2.440000
 100    1.000000    0.999999    0.999998    0.000000
 200    1.000000    0.999999    0.999998    0.000000
 300    1.000000    0.999999    0.999998    0.000000
 400    1.000000    0.999999    0.999998    0.000000
 500    1.000000    0.999999    0.999998    0.000000
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.000000
         Iterations: 12
         Function evaluations: 502
         Gradient evaluations: 98

However, there is no free lunch: here I used the number of function evaluations instead of the number of algorithmic iterations as the counter. Some algorithms may evaluate the target function multiple times in a single iteration.
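
To make the distinction concrete, here is a small self-contained sketch (the counter names are just illustrative) that counts both quantities for fmin_bfgs; since the gradient is approximated numerically, the evaluation count is much larger than the iteration count:

import numpy as np
from scipy.optimize import fmin_bfgs

evals = {'n': 0}   # objective-function evaluations
iters = {'n': 0}   # optimizer iterations (the callback runs once per iteration)

def counted_rosen(X):
    evals['n'] += 1
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

fmin_bfgs(counted_rosen, np.array([1.1, 1.1, 1.1]),
          callback=lambda xk: iters.update(n=iters['n'] + 1), disp=False)
print(f"{evals['n']} function evaluations vs {iters['n']} iterations")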

Answered by Yan

Try using:

options={'disp': True} 

to force scipy.optimize.minimize to print intermediate results.
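
A minimal sketch of what that looks like, using scipy's built-in rosen as the objective (how much each solver actually prints varies by method):

from scipy.optimize import minimize, rosen

# options={'disp': True} asks the chosen solver to print its own progress/convergence report
res = minimize(rosen, [1.1, 1.1, 1.1], method='Nelder-Mead', options={'disp': True})
print(res.x)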

Answered by camo

Below is a solution that works for me:

from scipy import optimize

def f_(x):   # the Rosenbrock function in two variables
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def conjugate_gradient(x0, f):
    all_x_i = [x0[0]]
    all_y_i = [x0[1]]
    all_f_i = [f(x0)]
    def store(X):  # callback: record the current iterate and its objective value
        x, y = X
        all_x_i.append(x)
        all_y_i.append(y)
        all_f_i.append(f(X))
    optimize.minimize(f, x0, method="CG", callback=store, options={"gtol": 1e-12})
    return all_x_i, all_y_i, all_f_i

and, for example:

conjugate_gradient([2, -1], f_)
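
To actually see the progress, you can print the stored iterates afterwards; a small sketch using the lists returned above (the formatting is only illustrative):

all_x_i, all_y_i, all_f_i = conjugate_gradient([2, -1], f_)
for k, (x, y, f_val) in enumerate(zip(all_x_i, all_y_i, all_f_i)):
    print(f"{k:4d}   {x: 3.6f}   {y: 3.6f}   {f_val: 3.6f}")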

Source

Answered by Henri

Many of the optimizers in scipy indeed lack verbose output (the 'trust-constr' method of scipy.optimize.minimize being an exception). I faced a similar issue and solved it by creating a wrapper around the objective function and using the callback function. No additional function evaluations are performed here, so this should be an efficient solution.

import numpy as np

class Simulator:
    def __init__(self, function):
        self.f = function # actual objective function
        self.num_calls = 0 # how many times f has been called
        self.callback_count = 0 # number of times callback has been called, also measures iteration count
        self.list_calls_inp = [] # input of all calls
        self.list_calls_res = [] # result of all calls
        self.decreasing_list_calls_inp = [] # input of calls that resulted in decrease
        self.decreasing_list_calls_res = [] # result of calls that resulted in decrease
        self.list_callback_inp = [] # only appends inputs on callback, as such they correspond to the iterations
        self.list_callback_res = [] # only appends results on callback, as such they correspond to the iterations

    def simulate(self, x):
        """Executes the actual simulation and returns the result, while
        updating the lists too. Pass to optimizer without arguments or
        parentheses."""
        result = self.f(x) # the actual evaluation of the function
        if not self.num_calls: # first call is stored in all lists
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
            self.list_callback_inp.append(x)
            self.list_callback_res.append(result)
        elif result < self.decreasing_list_calls_res[-1]:
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
        self.list_calls_inp.append(x)
        self.list_calls_res.append(result)
        self.num_calls += 1
        return result

    def callback(self, xk, *_):
        """Callback function that can be used by optimizers of scipy.optimize.
        The third argument "*_" makes sure that it still works when the
        optimizer calls the callback function with more than one argument. Pass
        to optimizer without arguments or parentheses."""
        s1 = ""
        xk = np.atleast_1d(xk)
        # search backwards in input list for input corresponding to xk
        for i, x in reversed(list(enumerate(self.list_calls_inp))):
            x = np.atleast_1d(x)
            if np.allclose(x, xk):
                break

        for comp in xk:
            s1 += f"{comp:10.5e}\t"
        s1 += f"{self.list_calls_res[i]:10.5e}"

        self.list_callback_inp.append(xk)
        self.list_callback_res.append(self.list_calls_res[i])

        if not self.callback_count:
            s0 = ""
            for j, _ in enumerate(xk):
                tmp = f"Comp-{j+1}"
                s0 += f"{tmp:10s}\t"
            s0 += "Objective"
            print(s0)
        print(s1)
        self.callback_count += 1

A simple test can be defined:

from scipy.optimize import minimize, rosen
ros_sim = Simulator(rosen)
minimize(ros_sim.simulate, [0, 0], method='BFGS', callback=ros_sim.callback, options={"disp": True})

print(f"Number of calls to Simulator instance {ros_sim.num_calls}")

resulting in:

Comp-1          Comp-2          Objective
1.76348e-01     -1.31390e-07    7.75116e-01
2.85778e-01     4.49433e-02     6.44992e-01
3.14130e-01     9.14198e-02     4.75685e-01
4.26061e-01     1.66413e-01     3.52251e-01
5.47657e-01     2.69948e-01     2.94496e-01
5.59299e-01     3.00400e-01     2.09631e-01
6.49988e-01     4.12880e-01     1.31733e-01
7.29661e-01     5.21348e-01     8.53096e-02
7.97441e-01     6.39950e-01     4.26607e-02
8.43948e-01     7.08872e-01     2.54921e-02
8.73649e-01     7.56823e-01     2.01121e-02
9.05079e-01     8.12892e-01     1.29502e-02
9.38085e-01     8.78276e-01     4.13206e-03
9.73116e-01     9.44072e-01     1.55308e-03
9.86552e-01     9.73498e-01     1.85366e-04
9.99529e-01     9.98598e-01     2.14298e-05
9.99114e-01     9.98178e-01     1.04837e-06
9.99913e-01     9.99825e-01     7.61051e-09
9.99995e-01     9.99989e-01     2.83979e-11
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 96
         Gradient evaluations: 24
Number of calls to Simulator instance 96

Of course, this is just a template; it can be adjusted to your needs. It does not provide all information about the status of the optimizer (as, e.g., MATLAB's Optimization Toolbox does), but at least you have some idea of the progress of the optimization.

A similar approach can be found here, without using the callback function. In my approach, the callback function is used to print output exactly when the optimizer has finished an iteration, rather than on every single function call.

Answered by mhass

It is also possible to include a simple print() statement in the function to be minimized. If you import the function, you can create a wrapper.

import numpy as np
from scipy.optimize import minimize


def rosen(X): #Rosenbrock function
    print(X)
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen, x0)
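
If the objective is imported rather than defined by you (e.g. scipy's built-in rosen), a minimal wrapper sketch along the same lines might look like this (rosen_verbose is just an illustrative name):

import numpy as np
from scipy.optimize import minimize, rosen

def rosen_verbose(X):  # wrapper that prints every evaluation of the imported objective
    print(X)
    return rosen(X)

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen_verbose, x0)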