GPy.inference.optimization package¶
Submodules¶
GPy.inference.optimization.conjugate_gradient_descent module¶
class GPy.inference.optimization.conjugate_gradient_descent.Async_Optimize[source]¶
Bases: object

SENTINEL = 'SENTINEL'¶

callback(*x)¶

opt(f, df, x0, callback=None, update_rule=<class GPy.inference.optimization.gradient_descent_update_rules.FletcherReeves>, messages=0, maxiter=5000.0, max_f_eval=15000.0, gtol=1e-06, report_every=10, *args, **kwargs)[source]¶

opt_async(f, df, x0, callback, update_rule=<class GPy.inference.optimization.gradient_descent_update_rules.PolakRibiere>, messages=0, maxiter=5000.0, max_f_eval=15000.0, gtol=1e-06, report_every=10, *args, **kwargs)[source]¶

runsignal = <multiprocessing.synchronize.Event object>¶
class GPy.inference.optimization.conjugate_gradient_descent.CGD[source]¶
Bases: GPy.inference.optimization.conjugate_gradient_descent.Async_Optimize

Conjugate gradient descent algorithm to minimize a function f with gradients df, starting at x0, using update rule update_rule. If df returns a tuple (grad, natgrad), the optimizer follows natural gradient rules.

opt(*a, **kw)[source]¶
opt(self, f, df, x0, callback=None, update_rule=FletcherReeves, messages=0, maxiter=5e3, max_f_eval=15e3, gtol=1e-6, report_every=10, *args, **kwargs)

Minimize f, calling callback every report_every iterations with the signature:

callback(xi, fi, gi, iteration, function_calls, gradient_calls, status_message)

If df returns a tuple (grad, natgrad), the optimizer follows natural gradient rules. f and df are called as:

f(xi, *args, **kwargs)
df(xi, *args, **kwargs)

At the end of optimization, returns:

x_opt, f_opt, g_opt, iteration, function_calls, gradient_calls, status_message

opt_async(*a, **kw)[source]¶
opt_async(self, f, df, x0, callback, update_rule=FletcherReeves, messages=0, maxiter=5e3, max_f_eval=15e3, gtol=1e-6, report_every=10, *args, **kwargs)

Asynchronous version of opt. callback is called every report_every iterations as:

callback(xi, fi, gi, iteration, function_calls, gradient_calls, status_message)

If df returns a tuple (grad, natgrad), the optimizer follows natural gradient rules. f and df are called as:

f(xi, *args, **kwargs)
df(xi, *args, **kwargs)

Returns a started Process object that optimizes asynchronously, and calls

callback(x_opt, f_opt, g_opt, iteration, function_calls, gradient_calls, status_message)

at the end of optimization.

opt_name = 'Conjugate Gradient Descent'¶
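The loop behind a conjugate gradient descent optimizer like CGD can be sketched as follows. This is a minimal numpy illustration of the technique and its return convention, assuming a crude backtracking line search; it is not GPy's implementation.

```python
import numpy as np

def cgd_minimize(f, df, x0, maxiter=500, gtol=1e-6):
    """Sketch of conjugate gradient descent with the Fletcher-Reeves
    update and backtracking line search. Mirrors the return convention
    described above; illustrative only, not GPy's CGD."""
    x = np.asarray(x0, dtype=float)
    g = df(x)                       # current gradient
    d = -g                          # start along steepest descent
    fx = f(x)
    function_calls, gradient_calls = 1, 1
    status = "maxiter exceeded"
    iteration = 0
    for iteration in range(1, maxiter + 1):
        if np.linalg.norm(g) < gtol:
            status = "converged"
            break
        # backtracking: halve the step until f decreases
        alpha = 1.0
        while True:
            f_new = f(x + alpha * d)
            function_calls += 1
            if f_new < fx or alpha < 1e-12:
                break
            alpha *= 0.5
        x = x + alpha * d
        g_new = df(x)
        gradient_calls += 1
        # Fletcher-Reeves gamma = ||g_new||^2 / ||g||^2
        gamma = (g_new @ g_new) / (g @ g)
        d = -g_new + gamma * d      # new conjugate search direction
        g, fx = g_new, f_new
    return x, fx, g, iteration, function_calls, gradient_calls, status
```

On a quadratic objective this converges in a handful of iterations; for natural-gradient optimization, CGD's df would additionally return a natgrad term that this sketch omits.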
GPy.inference.optimization.gradient_descent_update_rules module¶
class GPy.inference.optimization.gradient_descent_update_rules.FletcherReeves(initgrad, initgradnat=None)[source]¶
Bases: GPy.inference.optimization.gradient_descent_update_rules.GDUpdateRule

Fletcher-Reeves update rule for gamma.

class GPy.inference.optimization.gradient_descent_update_rules.GDUpdateRule(initgrad, initgradnat=None)[source]¶

class GPy.inference.optimization.gradient_descent_update_rules.PolakRibiere(initgrad, initgradnat=None)[source]¶
Bases: GPy.inference.optimization.gradient_descent_update_rules.GDUpdateRule

Polak-Ribiere update rule for gamma.
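The two rules differ only in how the mixing coefficient gamma is computed from the current and previous gradients. A sketch of the standard formulas, assuming plain (non-natural) gradients; these helper names are illustrative, not GPy's internal API:

```python
import numpy as np

def fletcher_reeves_gamma(g_new, g_old):
    # Fletcher-Reeves: gamma = ||g_new||^2 / ||g_old||^2
    return (g_new @ g_new) / (g_old @ g_old)

def polak_ribiere_gamma(g_new, g_old):
    # Polak-Ribiere: gamma = g_new . (g_new - g_old) / ||g_old||^2
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)
```

Polak-Ribiere self-corrects when successive gradients are similar (gamma shrinks toward zero, restarting along steepest descent), which is why it is often preferred in practice.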
GPy.inference.optimization.optimization module¶
class GPy.inference.optimization.optimization.Optimizer(x_init, messages=False, model=None, max_f_eval=10000.0, max_iters=1000.0, ftol=None, gtol=None, xtol=None, bfgs_factor=None)[source]¶
Superclass for all the optimizers.

Parameters:
- x_init – initial set of parameters
- f_fp – function that returns the objective value AND the gradients at the same time
- f – function to optimize
- fp – gradients
- messages (True | False) – print messages from the optimizer?
- max_f_eval – maximum number of function evaluations

Return type: optimizer object
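The f_fp parameter bundles the objective and its gradient into a single call, which avoids duplicating work when the two share intermediate computations. A hypothetical f_fp for a quadratic objective (the function name and arguments here are illustrative, not part of GPy):

```python
import numpy as np

def quadratic_f_fp(x, A, b):
    """Example f_fp for f(x) = 0.5 x^T A x - b^T x.
    The product A x is computed once and reused for both
    the objective value and the gradient."""
    Ax = A @ x
    f = 0.5 * (x @ Ax) - b @ x
    grad = Ax - b
    return f, grad
```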
GPy.inference.optimization.scg module¶
GPy.inference.optimization.scg.SCG(f, gradf, x, optargs=(), maxiters=500, max_f_eval=inf, display=True, xtol=None, ftol=None, gtol=None)[source]¶
Optimisation through Scaled Conjugate Gradients (SCG).

Parameters:
- f – the objective function
- gradf – the gradient function (should return a 1D np.ndarray)
- x – the initial condition

Returns:
- x – the optimal value for x
- flog – a list of all the objective values
- function_eval – number of function evaluations
- status – string describing convergence status
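Before handing f and gradf to SCG, it is worth verifying that they agree. A generic central finite-difference check (not part of GPy) might look like:

```python
import numpy as np

def check_grad(f, gradf, x, eps=1e-6, tol=1e-4):
    """Return True if gradf(x) matches central finite
    differences of f at x to within tol."""
    x = np.asarray(x, dtype=float)
    g = gradf(x)
    g_fd = np.empty_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        # central difference along coordinate i
        g_fd[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return np.max(np.abs(g - g_fd)) < tol
```

A mismatched gradient is the most common cause of a conjugate gradient run that stalls or diverges, so this check pays for itself quickly.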
GPy.inference.optimization.sgd module¶
GPy.inference.optimization.stochastics module¶
class GPy.inference.optimization.stochastics.SparseGPMissing(model, batchsize=1)[source]¶
Bases: GPy.inference.optimization.stochastics.StochasticStorage

class GPy.inference.optimization.stochastics.SparseGPStochastics(model, batchsize=1)[source]¶
Bases: GPy.inference.optimization.stochastics.StochasticStorage

For the sparse GP we need to store the dimension we are in and the indices corresponding to it.

class GPy.inference.optimization.stochastics.StochasticStorage(model)[source]¶
Bases: object

This is a container for holding the stochastic parameters, such as subset indices or step length and so on.