GPy.inference.optimization package

Submodules

GPy.inference.optimization.conjugate_gradient_descent module

class GPy.inference.optimization.conjugate_gradient_descent.Async_Optimize[source]

Bases: object

SENTINEL = 'SENTINEL'
async_callback_collect(q)[source]
callback(*x)
opt(f, df, x0, callback=None, update_rule=<class GPy.inference.optimization.gradient_descent_update_rules.FletcherReeves>, messages=0, maxiter=5000.0, max_f_eval=15000.0, gtol=1e-06, report_every=10, *args, **kwargs)[source]
opt_async(f, df, x0, callback, update_rule=<class GPy.inference.optimization.gradient_descent_update_rules.PolakRibiere>, messages=0, maxiter=5000.0, max_f_eval=15000.0, gtol=1e-06, report_every=10, *args, **kwargs)[source]
runsignal = <multiprocessing.synchronize.Event object>
class GPy.inference.optimization.conjugate_gradient_descent.CGD[source]

Bases: GPy.inference.optimization.conjugate_gradient_descent.Async_Optimize

Conjugate gradient descent algorithm that minimizes a function f with gradients df, starting at x0 and using the given update_rule.

If df returns a tuple (grad, natgrad), the optimizer follows natural gradient rules.

opt(*a, **kw)[source]
opt(self, f, df, x0, callback=None, update_rule=FletcherReeves,
messages=0, maxiter=5e3, max_f_eval=15e3, gtol=1e-6, report_every=10, *args, **kwargs)

Minimize f, calling callback every report_every iterations with the following signature:

callback(xi, fi, gi, iteration, function_calls, gradient_calls, status_message)

If df returns a tuple (grad, natgrad), the optimizer follows natural gradient rules.

f and df will be called as

f(xi, *args, **kwargs)
df(xi, *args, **kwargs)

Returns

x_opt, f_opt, g_opt, iteration, function_calls, gradient_calls, status_message

at the end of optimization.
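A minimal sketch of the synchronous interface, assuming the seven-element return tuple documented above (the quadratic objective and the callback body are illustrative only):

import numpy as np
from GPy.inference.optimization.conjugate_gradient_descent import CGD

# Illustrative objective: f(x) = ||x||^2 with gradient 2x.
f = lambda x: np.dot(x, x)
df = lambda x: 2.0 * x

def progress(xi, fi, gi, iteration, f_calls, g_calls, status):
    # called every report_every iterations
    print(iteration, fi, status)

cgd = CGD()
(x_opt, f_opt, g_opt, iteration,
 f_calls, g_calls, status) = cgd.opt(f, df, np.ones(3),
                                     callback=progress, gtol=1e-6)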

opt_async(*a, **kw)[source]
opt_async(self, f, df, x0, callback, update_rule=FletcherReeves,
messages=0, maxiter=5e3, max_f_eval=15e3, gtol=1e-6, report_every=10, *args, **kwargs)

callback is called every report_every iterations with the following signature:

callback(xi, fi, gi, iteration, function_calls, gradient_calls, status_message)

If df returns a tuple (grad, natgrad), the optimizer follows natural gradient rules.

f and df will be called as

f(xi, *args, **kwargs)
df(xi, *args, **kwargs)

Returns:

the started Process object, which optimizes asynchronously

Calls:

callback(x_opt, f_opt, g_opt, iteration, function_calls, gradient_calls, status_message)

at the end of optimization.
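A sketch of the asynchronous variant, reusing the names from the sketch above and assuming the returned object behaves like a standard multiprocessing.Process:

proc = cgd.opt_async(f, df, np.ones(3), callback=progress)
proc.join()  # block until the optimization process finishes;
             # progress() receives the final results as its last call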

opt_name = 'Conjugate Gradient Descent'

GPy.inference.optimization.gradient_descent_update_rules module

class GPy.inference.optimization.gradient_descent_update_rules.FletcherReeves(initgrad, initgradnat=None)[source]

Bases: GPy.inference.optimization.gradient_descent_update_rules.GDUpdateRule

Fletcher-Reeves update rule for gamma

class GPy.inference.optimization.gradient_descent_update_rules.GDUpdateRule(initgrad, initgradnat=None)[source]

Bases: object

class GPy.inference.optimization.gradient_descent_update_rules.PolakRibiere(initgrad, initgradnat=None)[source]

Bases: GPy.inference.optimization.gradient_descent_update_rules.GDUpdateRule

Polak-Ribière update rule for gamma
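The two rules differ only in how the conjugacy coefficient gamma is formed from consecutive gradients. A sketch using the standard textbook formulas (these helper functions are illustrative, not the class internals):

import numpy as np

def fletcher_reeves_gamma(grad_new, grad_old):
    # gamma = ||g_{k+1}||^2 / ||g_k||^2
    return np.dot(grad_new, grad_new) / np.dot(grad_old, grad_old)

def polak_ribiere_gamma(grad_new, grad_old):
    # gamma = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2
    return np.dot(grad_new, grad_new - grad_old) / np.dot(grad_old, grad_old)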

GPy.inference.optimization.optimization module

class GPy.inference.optimization.optimization.Optimizer(x_init, messages=False, model=None, max_f_eval=10000.0, max_iters=1000.0, ftol=None, gtol=None, xtol=None, bfgs_factor=None)[source]

Superclass for all the optimizers.

Parameters:
  • x_init – initial set of parameters
  • f_fp – function that returns the objective value AND the gradients at the same time
  • f – function to optimize
  • fp – gradients
  • messages (bool) – print messages from the optimizer?
  • max_f_eval – maximum number of function evaluations
Return type:

optimizer object.

opt(f_fp=None, f=None, fp=None)[source]
plot()[source]

See GPy.plotting.matplot_dep.inference_plots

run(**kwargs)[source]
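A minimal sketch of driving one of the concrete subclasses documented below (the quadratic f_fp and the x_opt/f_opt result attributes are assumptions for illustration):

import numpy as np
from GPy.inference.optimization.optimization import opt_lbfgsb

def f_fp(x):
    # returns the objective value and the gradients at the same time
    return np.dot(x, x), 2.0 * x

optimizer = opt_lbfgsb(x_init=np.ones(3), max_iters=100)
optimizer.run(f_fp=f_fp)
print(optimizer.x_opt, optimizer.f_opt)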
GPy.inference.optimization.optimization.get_optimizer(f_min)[source]
class GPy.inference.optimization.optimization.opt_SCG(*args, **kwargs)[source]

Bases: GPy.inference.optimization.optimization.Optimizer

opt(f_fp=None, f=None, fp=None)[source]
class GPy.inference.optimization.optimization.opt_lbfgsb(*args, **kwargs)[source]

Bases: GPy.inference.optimization.optimization.Optimizer

opt(f_fp=None, f=None, fp=None)[source]

Run the optimizer

class GPy.inference.optimization.optimization.opt_rasm(*args, **kwargs)[source]

Bases: GPy.inference.optimization.optimization.Optimizer

opt(f_fp=None, f=None, fp=None)[source]

Run Rasmussen’s Conjugate Gradient optimizer

class GPy.inference.optimization.optimization.opt_simplex(*args, **kwargs)[source]

Bases: GPy.inference.optimization.optimization.Optimizer

opt(f_fp=None, f=None, fp=None)[source]

The simplex optimizer does not require gradients.

class GPy.inference.optimization.optimization.opt_tnc(*args, **kwargs)[source]

Bases: GPy.inference.optimization.optimization.Optimizer

opt(f_fp=None, f=None, fp=None)[source]

Run the TNC optimizer

GPy.inference.optimization.scg module

GPy.inference.optimization.scg.SCG(f, gradf, x, optargs=(), maxiters=500, max_f_eval=inf, display=True, xtol=None, ftol=None, gtol=None)[source]

Optimisation through Scaled Conjugate Gradients (SCG)

Parameters:
  • f – the objective function
  • gradf – the gradient function (should return a 1D np.ndarray)
  • x – the initial condition

Returns:
  • x – the optimal value for x
  • flog – a list of all the objective values
  • function_eval – the number of function evaluations
  • status – a string describing the convergence status
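A sketch of a direct call, assuming the return order documented above (the quadratic objective and gradient are illustrative):

import numpy as np
from GPy.inference.optimization.scg import SCG

f = lambda x: np.dot(x, x)
gradf = lambda x: 2.0 * x  # 1D np.ndarray, as required

x_opt, flog, function_eval, status = SCG(f, gradf, np.ones(3),
                                         maxiters=100, display=False)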

GPy.inference.optimization.scg.exponents(fnow, current_grad)[source]
GPy.inference.optimization.scg.print_out(len_maxiters, fnow, current_grad, beta, iteration)[source]

GPy.inference.optimization.sgd module

GPy.inference.optimization.stochastics module

class GPy.inference.optimization.stochastics.SparseGPMissing(model, batchsize=1)[source]

Bases: GPy.inference.optimization.stochastics.StochasticStorage

class GPy.inference.optimization.stochastics.SparseGPStochastics(model, batchsize=1)[source]

Bases: GPy.inference.optimization.stochastics.StochasticStorage

For the sparse GP we need to store the dimension we are currently in and the indices corresponding to it.

do_stochastics()[source]
reset()[source]
class GPy.inference.optimization.stochastics.StochasticStorage(model)[source]

Bases: object

A container for the stochastic parameters, such as subset indices or step lengths.

do_stochastics()[source]

Update the internal state to the next batch of the stochastic descent algorithm.

reset()[source]

Reset the state of this stochastics generator.
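A hypothetical subclass illustrating this contract (the num_data attribute and the batching logic are assumptions, not GPy internals):

import numpy as np
from GPy.inference.optimization.stochastics import StochasticStorage

class RandomSubset(StochasticStorage):
    # hypothetical storage cycling through random mini-batches
    def __init__(self, model, batchsize=10):
        self.num_data = model.num_data  # assumes the model exposes its size
        self.batchsize = batchsize
        self.reset()

    def do_stochastics(self):
        # draw the indices for the next mini-batch
        self.index = np.random.choice(self.num_data, self.batchsize,
                                      replace=False)

    def reset(self):
        self.do_stochastics()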

Module contents