GPy.core.parameterization package

Submodules

GPy.core.parameterization.domains module

(Hyper-)Parameter domains defined for priors and kernels (kern). These domains specify the legitimate range of values a parameter may take.

_REAL :
real domain, all values in the real numbers are allowed
_POSITIVE:
positive domain, only positive real values are allowed
_NEGATIVE:
same as _POSITIVE, but only negative values are allowed
_BOUNDED:
only values within the bounded range are allowed; the bounds are specified within the object holding the bounded range

GPy.core.parameterization.index_operations module

class GPy.core.parameterization.index_operations.ParameterIndexOperations(constraints=None)[source]

Bases: object

This object wraps a dictionary whose keys are _operations_ that we’d like to apply to a parameter array, and whose values are numpy integer arrays which index the parameter array appropriately.

A model instance will contain one instance of this class for each thing that needs indexing (i.e. constraints, ties and priors). Parameters within the model contain instances of the ParameterIndexOperationsView class, which can map from a ‘local’ index (starting at 0) to this global index.

Here’s an illustration:

#=======================================================================
model : 0 1 2 3 4 5 6 7 8 9
key1:           4 5
key2:                 7 8

param1: 0 1 2 3 4 5
key1:       2 3
key2:             5

param2: 0 1 2 3 4
key1:   0
key2:       2 3
#=======================================================================

The views of this global index have a subset of the keys in this global (model) index.

Adding a new key (e.g. a constraint) to a view will cause the view to pass the new key to the global index, along with the local index and an offset. This global index then stores the key and the appropriate global index (which can be seen by the view).
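A minimal sketch (not GPy’s actual implementation) of the offset bookkeeping described above, reproducing param1 from the illustration, which sits at offset 2 in the model:

```python
# Sketch: a view translates local indices to global ones by adding its
# fixed offset before handing the key to the global (model) index.
class IndexView:
    def __init__(self, global_index, offset):
        self.global_index = global_index  # dict: key -> list of global indices
        self.offset = offset

    def add(self, key, local_indices):
        # Translate local indices into global ones, then store them globally.
        self.global_index.setdefault(key, []).extend(
            i + self.offset for i in local_indices
        )

global_index = {}
param1_view = IndexView(global_index, offset=2)  # param1 starts at model index 2
param1_view.add("key1", [2, 3])                  # local 2, 3 -> global 4, 5
param1_view.add("key2", [5])                     # local 5    -> global 7
print(global_index)  # {'key1': [4, 5], 'key2': [7]}
```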

See also: ParameterIndexOperationsView

add(prop, indices)[source]
clear()[source]
copy()[source]
indices()[source]
items()[source]
iterindices()[source]
iteritems()[source]
iterproperties()[source]
properties()[source]
properties_for(index)[source]

Returns a list of properties, such that each entry in the list corresponds to the element of the index given.

Example: let properties: ‘one’:[1,2,3,4], ‘two’:[3,5,6]

>>> properties_for([2,3,5])
[['one'], ['one', 'two'], ['two']]
properties_to_index_dict(index)[source]

Return a dictionary containing properties as keys and index lists as values. Thus, the indices for each contained constraint are collected into one dictionary.

Example: let properties: ‘one’:[1,2,3,4], ‘two’:[3,5,6]

>>> properties_to_index_dict([2,3,5])
{'one':[2,3], 'two':[3,5]}
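The two lookups above can be re-implemented in plain Python to make their semantics concrete (this is an illustrative sketch, not GPy’s code):

```python
# The property table from the documented example.
properties = {'one': [1, 2, 3, 4], 'two': [3, 5, 6]}

def properties_for(index):
    # For each queried element, list every property whose index set contains it.
    return [[p for p in sorted(properties) if i in properties[p]] for i in index]

def properties_to_index_dict(index):
    # Group the queried elements by the property they fall under.
    result = {}
    for p, idx in properties.items():
        hits = [i for i in index if i in idx]
        if hits:
            result[p] = hits
    return result

print(properties_for([2, 3, 5]))            # [['one'], ['one', 'two'], ['two']]
print(properties_to_index_dict([2, 3, 5]))  # {'one': [2, 3], 'two': [3, 5]}
```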
remove(prop, indices)[source]
shift_left(start, size)[source]
shift_right(start, size)[source]
size
update(parameter_index_view, offset=0)[source]
class GPy.core.parameterization.index_operations.ParameterIndexOperationsView(param_index_operations, offset, size)[source]

Bases: object

add(prop, indices)[source]
clear()[source]
copy()[source]
indices()[source]
items()[source]
iterindices()[source]
iteritems()[source]
iterproperties()[source]
properties()[source]
properties_for(index)[source]

Returns a list of properties, such that each entry in the list corresponds to the element of the index given.

Example: let properties: ‘one’:[1,2,3,4], ‘two’:[3,5,6]

>>> properties_for([2,3,5])
[['one'], ['one', 'two'], ['two']]
properties_to_index_dict(index)[source]

Return a dictionary containing properties as keys and index lists as values. Thus, the indices for each contained constraint are collected into one dictionary.

Example: let properties: ‘one’:[1,2,3,4], ‘two’:[3,5,6]

>>> properties_to_index_dict([2,3,5])
{'one':[2,3], 'two':[3,5]}
remove(prop, indices)[source]
shift_left(start, size)[source]
shift_right(start, size)[source]
size
update(parameter_index_view, offset=0)[source]
GPy.core.parameterization.index_operations.combine_indices(arr1, arr2)[source]
GPy.core.parameterization.index_operations.extract_properties_to_index(index, props)[source]
GPy.core.parameterization.index_operations.index_empty(index)[source]
GPy.core.parameterization.index_operations.remove_indices(arr, to_remove)[source]

GPy.core.parameterization.lists_and_dicts module

class GPy.core.parameterization.lists_and_dicts.ArrayList[source]

Bases: list

List to store ndarray-likes in. Lookups compare with ‘is’ instead of calling __eq__ on each element.

index(item)[source]
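Why identity matters here: `list.index` on numpy arrays fails, because `a == b` returns an element-wise boolean array rather than a single truth value. A sketch of the intent (assumed, not GPy’s exact code):

```python
import numpy as np

class ArrayList(list):
    def index(self, item):
        # Compare by identity, never by __eq__ (which is element-wise for ndarrays).
        for i, member in enumerate(self):
            if member is item:
                return i
        raise ValueError("item not in list")

a, b = np.zeros(3), np.zeros(3)
lst = ArrayList([a, b])
print(lst.index(b))  # 1, even though a == b element-wise
```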
class GPy.core.parameterization.lists_and_dicts.IntArrayDict(default_factory=None)[source]

Bases: collections.defaultdict

class GPy.core.parameterization.lists_and_dicts.ObserverList[source]

Bases: object

A list which contains the observers. It only holds weak references to observers, so that unbound observers don’t dangle in memory.

add(priority, observer, callble)[source]

Add an observer with the given priority and callable callble.

flush()[source]

Flush (delete) all weak references which no longer point to a live object.

remove(priority, observer, callble)[source]

Remove the one observer that was registered with this priority and callble.
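A minimal sketch (names are illustrative, not GPy’s internals) of how weak references let deleted observers disappear instead of dangling:

```python
import weakref

class ObserverList:
    def __init__(self):
        self._observers = []  # entries: (priority, weakref-to-observer, callable)

    def add(self, priority, observer, callble):
        self._observers.append((priority, weakref.ref(observer), callble))

    def flush(self):
        # Drop entries whose observer has been garbage-collected.
        self._observers = [t for t in self._observers if t[1]() is not None]

class Obs:  # a dummy observer
    pass

olist = ObserverList()
o = Obs()
olist.add(0, o, print)
del o            # the observer goes away ...
olist.flush()    # ... and flush() removes the dangling entry
print(len(olist._observers))  # 0 on CPython (refcounting frees o immediately)
```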

GPy.core.parameterization.lists_and_dicts.intarray_default_factory()[source]

GPy.core.parameterization.observable_array module

class GPy.core.parameterization.observable_array.ObsAr(*a, **kw)[source]

Bases: numpy.ndarray, GPy.core.parameterization.parameter_core.Pickleable, GPy.core.parameterization.observable.Observable

An ndarray which reports changes to its observers. The observers can add themselves with a callable, which will be called every time this array changes. The callable takes exactly one argument, which is this array itself.

copy()[source]
values
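The observer mechanics of ObsAr can be sketched with a plain ndarray subclass (assumed semantics; GPy’s real class hooks into its Observable machinery):

```python
import numpy as np

class ObservableArray(np.ndarray):
    """An ndarray that calls each observer with itself whenever it is written to."""

    def __new__(cls, input_array):
        obj = np.asarray(input_array).view(cls)
        obj._observers = []
        return obj

    def __array_finalize__(self, obj):
        # Views and slices inherit the observer list of their parent.
        self._observers = getattr(obj, '_observers', [])

    def add_observer(self, callback):
        self._observers.append(callback)

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        for cb in self._observers:
            cb(self)  # the callable takes exactly one argument: this array

seen = []
arr = ObservableArray([1.0, 2.0, 3.0])
arr.add_observer(lambda a: seen.append(float(a.sum())))
arr[0] = 10.0  # every observer fires with the updated array
print(seen)    # [15.0]
```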

GPy.core.parameterization.param module

class GPy.core.parameterization.param.Param(name, input_array, default_constraint=None, *a, **kw)[source]

Bases: GPy.core.parameterization.parameter_core.Parameterizable, GPy.core.parameterization.observable_array.ObsAr

Parameter object for GPy models.

Parameters:
  • name (str) – name of the parameter to be printed
  • input_array (numpy.ndarray) – array which this parameter handles
  • default_constraint – The default constraint for this parameter

You can add/remove constraints by calling constrain on the parameter itself, e.g:

  • self[:,1].constrain_positive()
  • self[0].tie_to(other)
  • self.untie()
  • self[:3,:].unconstrain()
  • self[1].fix()

Fixing parameters will fix them to the value they are right now. If you change the fixed value, it will be fixed to the new value!

See GPy.core.parameterized.Parameterized for more details on constraining etc.

build_pydot(G)[source]
copy()[source]
flattened_parameters
gradient

Return a view on the gradient, which is in the same shape as this parameter is. Note: this is not the real gradient array, it is just a view on it.

To work on the real gradient array use: self.full_gradient

is_fixed
num_params
param_array

As we are a leaf, this just returns self

parameter_names(add_self=False, adjust_for_printing=False, recursive=True)[source]
parameter_shapes
parameters = []
values

Return self as numpy array view

class GPy.core.parameterization.param.ParamConcatenation(params)[source]

Bases: object

checkgrad(verbose=0, step=1e-06, tolerance=0.001)[source]
constrain(constraint, warning=True)[source]
Parameters:
  • transform – the GPy.core.transformations.Transformation to constrain this parameter to.
  • warning – print a warning if re-constraining parameters.

Constrain the parameter to the given GPy.core.transformations.Transformation.

constrain_bounded(lower, upper, warning=True)[source]
Parameters:
  • lower, upper – the limits to bound this parameter to
  • warning – print a warning if re-constraining parameters.

Constrain this parameter to lie within the given range.

constrain_fixed(value=None, warning=True, trigger_parent=True)[source]

Constrain this parameter to be fixed to the current value it carries.

Parameters:warning – print a warning for overwriting constraints.
constrain_negative(warning=True)[source]
Parameters:warning – print a warning if re-constraining parameters.

Constrain this parameter to the default negative constraint.

constrain_positive(warning=True)[source]
Parameters:warning – print a warning if re-constraining parameters.

Constrain this parameter to the default positive constraint.

fix(value=None, warning=True, trigger_parent=True)

Constrain this parameter to be fixed to the current value it carries.

Parameters:warning – print a warning for overwriting constraints.
unconstrain(*constraints)[source]
Parameters:transforms – The transformations to unconstrain from.

remove all GPy.core.transformations.Transformation transformations from this parameter object.

unconstrain_bounded(lower, upper)[source]
Parameters:lower, upper – the limits to unbound this parameter from

Remove the (lower, upper) bounded constraint from this parameter.

unconstrain_fixed()[source]

This parameter will no longer be fixed.

unconstrain_negative()[source]

Remove negative constraint of this parameter.

unconstrain_positive()[source]

Remove positive constraint of this parameter.

unfix()

This parameter will no longer be fixed.

untie(*ties)[source]
update_all_params()[source]
values()[source]

GPy.core.parameterization.parameter_core module

Core module for parameterization. This module implements all parameterization techniques, split up in modular bits.

HierarchyError: raised when an error with the hierarchy occurs (circles etc.)

Observable: observer pattern for parameterization

class GPy.core.parameterization.parameter_core.Gradcheckable(*a, **kw)[source]

Bases: GPy.core.parameterization.parameter_core.Pickleable, GPy.core.parameterization.parameter_core.Parentable

Adds the functionality for an object to be gradcheckable. It is just a thin wrapper of a call to the highest parent for now. TODO: Can be done better, by only changing parameters of the current parameter handle, such that object hierarchy only has to change for those.

checkgrad(verbose=0, step=1e-06, tolerance=0.001, df_tolerance=1e-12)[source]

Check the gradient of this parameter with respect to the highest parent’s objective function. This is a three-point estimate of the gradient, wiggling the parameters with a stepsize step. The check passes if either the ratio or the difference between the numerical and analytical gradient is smaller than tolerance.

Parameters:
  • verbose (bool) – whether each parameter shall be checked individually.
  • step (float) – the stepsize for the numerical three point gradient estimate.
  • tolerance (float) – the tolerance for the gradient ratio or difference.
  • df_tolerance (float) – the tolerance for the dF_ratio check (see the note below)
Note:
The dF_ratio indicates the limit of accuracy of the numerical gradients. If it is too small, e.g. smaller than 1e-12, the numerical gradients are usually not accurate enough for the tests (shown in blue).
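The check described above can be sketched for a single scalar parameter (an illustrative reimplementation of the assumed form, not GPy’s code): a central three-point finite difference, accepted if either the ratio or the difference is within tolerance.

```python
def checkgrad(f, df, x, step=1e-6, tolerance=1e-3):
    # Central ("three point") finite-difference estimate of df/dx at x.
    numerical = (f(x + step) - f(x - step)) / (2 * step)
    analytical = df(x)
    ratio = numerical / analytical if analytical != 0 else float('inf')
    difference = abs(numerical - analytical)
    # Pass if either criterion is met.
    return abs(ratio - 1) < tolerance or difference < tolerance

# Quadratic objective: f(x) = x^2, df/dx = 2x.
print(checkgrad(lambda x: x * x, lambda x: 2 * x, 3.0))  # True
```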
exception GPy.core.parameterization.parameter_core.HierarchyError[source]

Bases: exceptions.Exception

Gets thrown when something is wrong with the parameter hierarchy.

class GPy.core.parameterization.parameter_core.Indexable(name, default_constraint=None, *a, **kw)[source]

Bases: GPy.core.parameterization.parameter_core.Nameable, GPy.core.parameterization.updateable.Updateable

Make an object constrainable with Priors and Transformations. TODO: Mappings!! Adding a constraint to a Parameter means telling the highest parent that the constraint was added, and making sure that all parameters covered by this object conform to the constraint.

constrain() and unconstrain() are main methods here

constrain(transform, warning=True, trigger_parent=True)[source]
Parameters:
  • transform – the GPy.core.transformations.Transformation to constrain this parameter to.
  • warning – print a warning if re-constraining parameters.

Constrain the parameter to the given GPy.core.transformations.Transformation.

constrain_bounded(lower, upper, warning=True, trigger_parent=True)[source]
Parameters:
  • lower, upper – the limits to bound this parameter to
  • warning – print a warning if re-constraining parameters.

Constrain this parameter to lie within the given range.

constrain_fixed(value=None, warning=True, trigger_parent=True)[source]

Constrain this parameter to be fixed to the current value it carries.

Parameters:warning – print a warning for overwriting constraints.
constrain_negative(warning=True, trigger_parent=True)[source]
Parameters:warning – print a warning if re-constraining parameters.

Constrain this parameter to the default negative constraint.

constrain_positive(warning=True, trigger_parent=True)[source]
Parameters:warning – print a warning if re-constraining parameters.

Constrain this parameter to the default positive constraint.

fix(value=None, warning=True, trigger_parent=True)

Constrain this parameter to be fixed to the current value it carries.

Parameters:warning – print a warning for overwriting constraints.
is_fixed
log_prior()[source]

evaluate the prior

set_prior(prior, warning=True)[source]

Set the prior for this object to prior. :param Prior prior: a prior to set for this parameter :param bool warning: whether to warn if another prior was set for this parameter

tie_together()[source]
unconstrain(*transforms)[source]
Parameters:transforms – The transformations to unconstrain from.

remove all GPy.core.transformations.Transformation transformations from this parameter object.

unconstrain_bounded(lower, upper)[source]
Parameters:lower, upper – the limits to unbound this parameter from

Remove the (lower, upper) bounded constraint from this parameter.

unconstrain_fixed()[source]

This parameter will no longer be fixed.

unconstrain_negative()[source]

Remove negative constraint of this parameter.

unconstrain_positive()[source]

Remove positive constraint of this parameter.

unfix()

This parameter will no longer be fixed.

unset_priors(*priors)[source]

Un-set all priors given (in *priors) from this parameter handle.

class GPy.core.parameterization.parameter_core.Nameable(name, *a, **kw)[source]

Bases: GPy.core.parameterization.parameter_core.Gradcheckable

Make an object nameable inside the hierarchy.

hierarchy_name(adjust_for_printing=True)[source]

return the name for this object with the parents names attached by dots.

Parameters:adjust_for_printing (bool) – whether to call adjust_for_printing()

on the names, recursively

name

The name of this object

class GPy.core.parameterization.parameter_core.OptimizationHandlable(name, default_constraint=None, *a, **kw)[source]

Bases: GPy.core.parameterization.parameter_core.Indexable

This enables optimization handles on an Object as done in GPy 0.4.

..._optimizer_copy_transformed: make sure the transformations and constraints etc are handled

gradient_full

Note to users: this does not return the gradient in the right shape! Use self.gradient for the correctly shaped gradient array.

This property is the in-memory handle for the full, flat gradient array; use it when you need to work on the underlying gradient array itself.

num_params

Return the number of parameters of this parameter_handle. Param objects will always return 0.

optimizer_array

Array for the optimizer to work on. This array always lives in the optimizer’s space, i.e. it is the untransformed array obtained by mapping the model parameters back through their Transformations.

Setting this array makes sure the transformed parameters of this model are set accordingly. It has to be set with an array retrieved from this property, as e.g. fixing will resize the array.

The optimizer should only ever operate on this array, so that the transformations stay intact.
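The two spaces can be sketched with the softplus map used for positive constraints (assuming Logexp-style behaviour; names here are illustrative): the model sees a positive value, the optimizer sees the whole real line.

```python
import math

def f(x):
    """Optimizer space -> positive model space (softplus)."""
    return math.log1p(math.exp(x))

def finv(y):
    """Positive model space -> optimizer space (inverse softplus)."""
    return math.log(math.expm1(y))

model_value = 2.5              # constrained (positive) parameter value
opt_value = finv(model_value)  # what would live in optimizer_array
# Round trip: applying the transformation recovers the model value.
assert abs(f(opt_value) - model_value) < 1e-12
print(opt_value)  # an unconstrained real number, not 2.5
```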

parameter_names(add_self=False, adjust_for_printing=False, recursive=True)[source]

Get the names of all parameters of this model.

Parameters:
  • add_self (bool) – whether to add the own name in front of names
  • adjust_for_printing (bool) – whether to call adjust_name_for_printing on names
  • recursive (bool) – whether to traverse through hierarchy and append leaf node names
randomize(rand_gen=None, *args, **kwargs)[source]

Randomize the model. Make this draw from the prior if one exists, else draw from given random generator

Parameters:
  • rand_gen – np random number generator which takes args and kwargs
  • loc (float) – loc parameter for random number generator
  • scale (float) – scale parameter for random number generator
  • kwargs (args,) – will be passed through to random number generator
class GPy.core.parameterization.parameter_core.Parameterizable(*args, **kwargs)[source]

Bases: GPy.core.parameterization.parameter_core.OptimizationHandlable

A parameterizable class.

This class provides the parameters list (ArrayList) and standard parameter handling, such as {add|remove}_parameter(), traverse hierarchy and param_array, gradient_array and the empty parameters_changed().

This class is abstract and should not be instantiated. Use GPy.core.Parameterized() as node (or leaf) in the parameterized hierarchy. Use GPy.core.Param() for a leaf in the parameterized hierarchy.

gradient
num_params
param_array

Array representing the parameters of this class. There is only one copy of all parameters in memory, two during optimization.

!WARNING!: setting the parameter array MUST always be done in memory: m.param_array[:] = m_copy.param_array

parameters_changed()[source]

This method gets called when parameters have changed. Another way of listening to param changes is to add self as a listener to the param, such that updates get passed through. See :py:function:GPy.core.param.Observable.add_observer

save(filename, ftype='HDF5')[source]

Save all the model parameters into a file (HDF5 by default).

traverse(visit, *args, **kwargs)[source]

Traverse the hierarchy performing visit(self, *args, **kwargs) at every node passed by downwards. This function includes self!

See “visitor pattern” in literature. This is implemented in pre-order fashion.

Example: Collect all children:

children = []
self.traverse(children.append)
print children

traverse_parents(visit, *args, **kwargs)[source]

Traverse the hierarchy upwards, visiting all parents and their children except self. See “visitor pattern” in literature. This is implemented in pre-order fashion.

Example:

parents = []
self.traverse_parents(parents.append)
print parents

unfixed_param_array

Array representing the parameters of this class. There is only one copy of all parameters in memory, two during optimization.

!WARNING!: setting the parameter array MUST always be done in memory: m.param_array[:] = m_copy.param_array

class GPy.core.parameterization.parameter_core.Parentable(*args, **kwargs)[source]

Bases: object

Enable an Object to have a parent.

Additionally this adds the parent_index, which is the index for the parent to look for in its parameter list.

has_parent()[source]

Return whether this parentable object currently has a parent.

class GPy.core.parameterization.parameter_core.Pickleable(*a, **kw)[source]

Bases: object

Make an object pickleable (See python doc ‘pickling’).

This class provides pickling support via the memento pattern: _getstate returns a memento of the class, which gets pickled; _setstate(<memento>) (re-)sets the state of the class to the memento.

copy(memo=None, which=None)[source]

Returns a (deep) copy of the current parameter handle.

All connections to parents of the copy will be cut.

Parameters:
  • memo (dict) – memo for deepcopy
  • which (Parameterized) – parameterized object which started the copy process [default: self]
pickle(f, protocol=-1)[source]
Parameters:
  • f – either filename or open file object to write to. if it is an open buffer, you have to make sure to close it properly.
  • protocol – pickling protocol to use, python-pickle for details.
GPy.core.parameterization.parameter_core.adjust_name_for_printing(name)[source]

Make sure a name can be printed, alongside used as a variable name.

GPy.core.parameterization.parameterized module

class GPy.core.parameterization.parameterized.Parameterized(name=None, parameters=[], *a, **kw)[source]

Bases: GPy.core.parameterization.parameter_core.Parameterizable

Parameterized class

Say m is a handle to a parameterized class.

Printing parameters:

  • print m: prints a nice summary over all parameters

  • print m.name: prints details for param with name ‘name’

  • print m[regexp]: prints details for all the parameters

    which match (!) regexp

  • print m[‘’]: prints details for all parameters

Fields:

Name: The name of the param, can be renamed!
Value: Shape, or value if one-valued.
Constraint: constraint of the param; curly brackets “{c}” indicate that some parameters are constrained by c. See the detailed print to get exact constraints.
Tied_to: which parameter it is tied to.

Getting and setting parameters:

Set all values in param to one:

m.name.to.param = 1

Handling of constraining, fixing and tieing parameters:

You can constrain parameters by calling the constrain on the param itself, e.g:

  • m.name[:,1].constrain_positive()
  • m.name[0].tie_to(m.name[1])

Fixing parameters will fix them to the value they are right now. If you change the parameters value, the param will be fixed to the new value!

If you want to operate on all parameters use m[‘’] to wildcard select all parameters and concatenate them. Printing m[‘’] will result in printing of all parameters in detail.

add_parameter(*args, **kwargs)[source]

Parameters:
  • parameters (list of or one GPy.core.param.Param) – the parameters to add
  • [index] – index of where to put parameters
  • _ignore_added_names (bool) – whether the name of the parameter overrides a possibly existing field

Add all parameters to this param class; you can insert parameters at any given index using the list.insert() syntax.

This is a convenience method for adding several parameters without gradient specification.

build_pydot(G=None)[source]
copy(memo=None)[source]
flattened_parameters
grep_param_names(regexp)[source]

create a list of parameters, matching regular expression regexp

parameter_shapes
remove_parameter(*args, **kwargs)[source]
Parameters:param – param object to remove from being a parameter of this parameterized object.
class GPy.core.parameterization.parameterized.ParametersChangedMeta[source]

Bases: type

GPy.core.parameterization.priors module

class GPy.core.parameterization.priors.DGPLVM(sigma2, lbl, x_shape)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Discriminative Gaussian Process Latent Variable model paper, by Raquel.

Parameters:sigma2 – constant

Note

DGPLVM for Classification paper implementation

compute_Mi(cls)[source]
compute_Sb(cls, M_i, M_0)[source]
compute_Sw(cls, M_i)[source]
compute_cls(x)[source]
compute_indices(x)[source]
compute_listIndices(data_idx)[source]
compute_sig_alpha_W(data_idx, lst_idx_all, W_i)[source]
compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)[source]
compute_wj(data_idx, M_i)[source]
domain = 'real'
get_class_label(y)[source]
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
class GPy.core.parameterization.priors.DGPLVM_KFDA(lambdaa, sigma2, lbl, kern, x_shape)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Discriminative Gaussian Process Latent Variable function, using the Kernel Fisher Discriminant Analysis of Seung-Jean Kim, as used in the face verification paper by Chaochao Lu.

Parameters:
  • lambdaa – constant
  • sigma2 – constant

Note

Surpassing Human-Level Face paper dgplvm implementation

compute_A(lst_ni)[source]
compute_a(lst_ni)[source]
compute_cls(x)[source]
compute_lst_ni()[source]
domain = 'real'
get_class_label(y)[source]
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
x_reduced(cls)[source]
class GPy.core.parameterization.priors.Gamma(a, b)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the Gamma probability function, coupled with random variables.

Parameters:
  • a – shape parameter
  • b – rate parameter (warning: it’s the inverse of the scale)

Note

Bishop 2006 notation is used throughout the code

domain = 'positive'
static from_EV(E, V)[source]

Creates an instance of a Gamma Prior by specifying the Expected value(s) and Variance(s) of the distribution.

Parameters:
  • E – expected value
  • V – variance
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
summary()[source]
class GPy.core.parameterization.priors.Gaussian(mu, sigma)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the univariate Gaussian probability function, coupled with random variables.

Parameters:
  • mu – mean
  • sigma – standard deviation

Note

Bishop 2006 notation is used throughout the code

domain = 'real'
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
class GPy.core.parameterization.priors.HalfT(A, nu)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the half student t probability function, coupled with random variables.

Parameters:
  • A – scale parameter
  • nu – degrees of freedom
domain = 'positive'
lnpdf(theta)[source]
lnpdf_grad(theta)[source]
rvs(n)[source]
class GPy.core.parameterization.priors.InverseGamma(a, b)[source]

Bases: GPy.core.parameterization.priors.Gamma

Implementation of the inverse-Gamma probability function, coupled with random variables.

Parameters:
  • a – shape parameter
  • b – rate parameter (warning: it’s the inverse of the scale)

Note

Bishop 2006 notation is used throughout the code

domain = 'positive'
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
class GPy.core.parameterization.priors.LogGaussian(mu, sigma)[source]

Bases: GPy.core.parameterization.priors.Gaussian

Implementation of the univariate log-Gaussian probability function, coupled with random variables.

Parameters:
  • mu – mean
  • sigma – standard deviation

Note

Bishop 2006 notation is used throughout the code

domain = 'positive'
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
class GPy.core.parameterization.priors.MultivariateGaussian(mu, var)[source]

Bases: GPy.core.parameterization.priors.Prior

Implementation of the multivariate Gaussian probability function, coupled with random variables.

Parameters:
  • mu – mean (N-dimensional array)
  • var – covariance matrix (NxN)

Note

Bishop 2006 notation is used throughout the code

domain = 'real'
lnpdf(x)[source]
lnpdf_grad(x)[source]
pdf(x)[source]
plot()[source]
rvs(n)[source]
summary()[source]
class GPy.core.parameterization.priors.Prior[source]

Bases: object

domain = None
pdf(x)[source]
plot()[source]
class GPy.core.parameterization.priors.Uniform(lower, upper)[source]

Bases: GPy.core.parameterization.priors.Prior

domain = 'real'
lnpdf(x)[source]
lnpdf_grad(x)[source]
rvs(n)[source]
GPy.core.parameterization.priors.gamma_from_EV(E, V)[source]

GPy.core.parameterization.ties_and_remappings module

class GPy.core.parameterization.ties_and_remappings.Fix(name=None, parameters=[], *a, **kw)[source]

Bases: GPy.core.parameterization.ties_and_remappings.Remapping

class GPy.core.parameterization.ties_and_remappings.Remapping(name=None, parameters=[], *a, **kw)[source]

Bases: GPy.core.parameterization.parameterized.Parameterized

callback()[source]
mapping()[source]

The return value of this function gives the values which the re-mapped parameters should take. Implement in sub-classes.

parameters_changed()[source]
class GPy.core.parameterization.ties_and_remappings.Tie(name='tie')[source]

Bases: GPy.core.parameterization.parameterized.Parameterized

The new parameter tie framework. (under development)

All the parameters tied together get a new parameter inside the Tie object. Its value should always be equal to all the tied parameters, and its gradient is the sum of all the tied parameters.

===== Implementation Details =====

The Tie object should only exist at the top of the param tree (the highest parent).

self.label_buf: It uses a label buffer that has the same length as all the parameters (self._highest_parent_.param_array). The buffer keeps track of all the tied parameters: every tied parameter has a label (an integer) greater than 0, and parameters with the same label are tied together.

self.buf_index: An auxiliary index list for the global index of the tie parameter inside the Tie object.

TODO: * EVERYTHING

add_tied_parameter(p, p2=None)[source]

Tie the list of parameters p together (when p2 is None), or tie the list of parameters p to the list of parameters p2 (when p2 is not None).

collate_gradient()[source]
getTieFlag(p=None)[source]
parameters_changed()[source]
propagate_val()[source]

GPy.core.parameterization.transformations module

class GPy.core.parameterization.transformations.Exponent[source]

Bases: GPy.core.parameterization.transformations.Transformation

domain = 'positive'
f(x)[source]
finv(x)[source]
gradfactor(f, df)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.Logexp[source]

Bases: GPy.core.parameterization.transformations.Transformation

domain = 'positive'
f(x)[source]
finv(f)[source]
gradfactor(f, df)[source]
initialize(f)[source]
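Logexp is conventionally the softplus map for the positive domain. Assuming that is what this class implements, a sketch of f, finv and the gradfactor chain rule, where the derivative is expressed purely in terms of f (df/dx = 1 − e⁻ᶠ):

```python
import math

def f(x):
    """Softplus: optimizer space -> positive model space."""
    return math.log1p(math.exp(x))

def finv(y):
    """Inverse softplus: positive model space -> optimizer space."""
    return math.log(math.expm1(y))

def gradfactor(fx, df):
    """Chain rule: dL/dx = dL/df * df/dx, with df/dx = 1 - exp(-f(x))."""
    return df * (1.0 - math.exp(-fx))

x = 0.7
fx = f(x)
assert abs(finv(fx) - x) < 1e-12                 # f and finv invert each other
numeric = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6     # numerical df/dx
assert abs(gradfactor(fx, 1.0) - numeric) < 1e-6 # matches 1 - exp(-f)
```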
class GPy.core.parameterization.transformations.LogexpClipped(lower=1e-06)[source]

Bases: GPy.core.parameterization.transformations.Logexp

domain = 'positive'
f(x)[source]
finv(f)[source]
gradfactor(f, df)[source]
initialize(f)[source]
log_max_bound = 230.25850929940458
log_min_bound = -23.025850929940457
max_bound = 1e+100
min_bound = 1e-10
class GPy.core.parameterization.transformations.LogexpNeg[source]

Bases: GPy.core.parameterization.transformations.Transformation

domain = 'positive'
f(x)[source]
finv(f)[source]
gradfactor(f, df)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.Logistic(lower, upper)[source]

Bases: GPy.core.parameterization.transformations.Transformation

domain = 'bounded'
f(x)[source]
finv(f)[source]
gradfactor(f, df)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.NegativeExponent[source]

Bases: GPy.core.parameterization.transformations.Exponent

domain = 'negative'
f(x)[source]
finv(f)[source]
gradfactor(f, df)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.NegativeLogexp[source]

Bases: GPy.core.parameterization.transformations.Transformation

domain = 'negative'
f(x)[source]
finv(f)[source]
gradfactor(f, df)[source]
initialize(f)[source]
logexp = Logexp
class GPy.core.parameterization.transformations.NormalEta(mu_indices, var_indices)[source]

Bases: GPy.core.parameterization.transformations.Transformation

f(theta)[source]
finv(muvar)[source]
gradfactor(muvar, dmuvar)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.NormalNaturalAntti(mu_indices, var_indices)[source]

Bases: GPy.core.parameterization.transformations.NormalTheta

gradfactor(muvar, dmuvar)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.NormalNaturalThroughEta(mu_indices, var_indices)[source]

Bases: GPy.core.parameterization.transformations.NormalEta

gradfactor(muvar, dmuvar)[source]
class GPy.core.parameterization.transformations.NormalNaturalThroughTheta(mu_indices, var_indices)[source]

Bases: GPy.core.parameterization.transformations.NormalTheta

gradfactor(muvar, dmuvar)[source]
class GPy.core.parameterization.transformations.NormalTheta(mu_indices, var_indices)[source]

Bases: GPy.core.parameterization.transformations.Transformation

f(theta)[source]
finv(muvar)[source]
gradfactor(muvar, dmuvar)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.Square[source]

Bases: GPy.core.parameterization.transformations.Transformation

domain = 'positive'
f(x)[source]
finv(x)[source]
gradfactor(f, df)[source]
initialize(f)[source]
class GPy.core.parameterization.transformations.Transformation[source]

Bases: object

domain = None
f(opt_param)[source]
finv(model_param)[source]
gradfactor(model_param, dL_dmodel_param)[source]

df(opt_param)/dopt_param evaluated at self.f(opt_param) = model_param, times the gradient dL_dmodel_param, i.e.:

\frac{\partial L}{\partial x} = \frac{\partial L}{\partial f}\,\left.\frac{\partial f(x)}{\partial x}\right|_{x = f^{-1}(f)}

initialize(f)[source]

produce a sensible initial value for f(x)

plot(xlabel='transformed $\\theta$', ylabel='$\\theta$', axes=None, *args, **kw)[source]

GPy.core.parameterization.variational module

Created on 6 Nov 2013

@author: maxz

class GPy.core.parameterization.variational.NormalPosterior(means=None, variances=None, name='latent space', *a, **kw)[source]

Bases: GPy.core.parameterization.variational.VariationalPosterior

NormalPosterior distribution for variational approximations.

holds the means and variances for a factorizing multivariate normal distribution

plot(*args)[source]

Plot latent space X in 1D:

See GPy.plotting.matplot_dep.variational_plots

class GPy.core.parameterization.variational.NormalPrior(name='latent space', **kw)[source]

Bases: GPy.core.parameterization.variational.VariationalPrior

KL_divergence(variational_posterior)[source]
update_gradients_KL(variational_posterior)[source]
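Assuming the usual closed form for this case (a factorizing N(mu, var) posterior against a standard normal prior; this is the standard result, not read from GPy’s source), the KL divergence is 0.5 · Σ (mu² + var − log var − 1):

```python
import numpy as np

def kl_normal_vs_standard(mu, var):
    """KL( N(mu, diag(var)) || N(0, I) ), summed over dimensions."""
    mu, var = np.asarray(mu), np.asarray(var)
    return 0.5 * np.sum(mu**2 + var - np.log(var) - 1.0)

# Zero exactly when the posterior equals the prior:
print(kl_normal_vs_standard([0.0, 0.0], [1.0, 1.0]))  # 0.0
```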
class GPy.core.parameterization.variational.SpikeAndSlabPosterior(means, variances, binary_prob, name='latent space')[source]

Bases: GPy.core.parameterization.variational.VariationalPosterior

The SpikeAndSlab distribution for variational approximations.

gamma_log_prob = <functools.partial object>
gamma_probabilities = <functools.partial object>
plot(*args, **kwargs)[source]

Plot latent space X in 1D:

See GPy.plotting.matplot_dep.variational_plots

set_gradients(grad)[source]
class GPy.core.parameterization.variational.SpikeAndSlabPrior(pi=None, learnPi=False, variance=1.0, name='SpikeAndSlabPrior', **kw)[source]

Bases: GPy.core.parameterization.variational.VariationalPrior

KL_divergence(variational_posterior)[source]
update_gradients_KL(variational_posterior)[source]
class GPy.core.parameterization.variational.VariationalPosterior(means=None, variances=None, name='latent space', *a, **kw)[source]

Bases: GPy.core.parameterization.parameterized.Parameterized

has_uncertain_inputs()[source]
set_gradients(grad)[source]
class GPy.core.parameterization.variational.VariationalPrior(name='latent space', **kw)[source]

Bases: GPy.core.parameterization.parameterized.Parameterized

KL_divergence(variational_posterior)[source]
update_gradients_KL(variational_posterior)[source]

updates the gradients for mean and variance in place

Module contents