GPy.models package¶
Submodules¶
GPy.models.bayesian_gplvm module¶
class GPy.models.bayesian_gplvm.BayesianGPLVM(Y, input_dim, X=None, X_variance=None, init='PCA', num_inducing=10, Z=None, kernel=None, inference_method=None, likelihood=None, name='bayesian gplvm', mpi_comm=None, normalizer=None, missing_data=False, stochastic=False, batchsize=1)[source]¶
Bases: GPy.core.sparse_gp_mpi.SparseGP_MPI
Bayesian Gaussian Process Latent Variable Model
Parameters: - Y (np.ndarray| GPy.likelihood instance) – observed data (np.ndarray) or GPy.likelihood
- input_dim (int) – latent dimensionality
- init (‘PCA’|’random’) – initialisation method for the latent space
do_test_latents(Y)[source]¶ Compute the latent representation for a set of new points Y
Notes: This will only work with a univariate Gaussian likelihood (for now)
get_X_gradients(X)[source]¶ Get the gradients of the posterior distribution of X in its specific form.
plot_latent(labels=None, which_indices=None, resolution=50, ax=None, marker='o', s=40, fignum=None, plot_inducing=True, legend=True, plot_limits=None, aspect='auto', updates=False, predict_kwargs={}, imshow_kwargs={})[source]¶
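The variational treatment that distinguishes BayesianGPLVM from the plain GPLVM keeps a Gaussian posterior over each latent point (the X and X_variance arguments). A minimal numpy sketch of the KL regularizer such a posterior contributes against a standard-normal prior (an illustration of the idea, not GPy internals):

```python
import numpy as np

# For a factorised posterior q(x_n) = N(mu_n, diag(s_n)) and a standard
# normal prior, the KL term in the Bayesian GPLVM bound is, per dimension,
# 0.5 * (s + mu^2 - log(s) - 1).
def kl_to_standard_normal(mu, s):
    return 0.5 * np.sum(s + mu**2 - np.log(s) - 1.0)

rng = np.random.default_rng(0)
mu = rng.standard_normal((20, 2))   # latent means (the X argument)
s = np.full((20, 2), 0.5)           # latent variances (the X_variance argument)

kl = kl_to_standard_normal(mu, s)
# KL is non-negative and zero exactly when q equals the prior.
assert kl > 0
assert np.isclose(kl_to_standard_normal(np.zeros((3, 2)), np.ones((3, 2))), 0.0)
```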
GPy.models.bayesian_gplvm_minibatch module¶
class GPy.models.bayesian_gplvm_minibatch.BayesianGPLVMMiniBatch(Y, input_dim, X=None, X_variance=None, init='PCA', num_inducing=10, Z=None, kernel=None, inference_method=None, likelihood=None, name='bayesian gplvm', normalizer=None, missing_data=False, stochastic=False, batchsize=1)[source]¶
Bases: GPy.models.sparse_gp_minibatch.SparseGPMiniBatch
Bayesian Gaussian Process Latent Variable Model
Parameters: - Y (np.ndarray| GPy.likelihood instance) – observed data (np.ndarray) or GPy.likelihood
- input_dim (int) – latent dimensionality
- init (‘PCA’|’random’) – initialisation method for the latent space
do_test_latents(Y)[source]¶ Compute the latent representation for a set of new points Y
Notes: This will only work with a univariate Gaussian likelihood (for now)
get_X_gradients(X)[source]¶ Get the gradients of the posterior distribution of X in its specific form.
plot_latent(labels=None, which_indices=None, resolution=50, ax=None, marker='o', s=40, fignum=None, plot_inducing=True, legend=True, plot_limits=None, aspect='auto', updates=False, predict_kwargs={}, imshow_kwargs={})[source]¶
GPy.models.bcgplvm module¶
class GPy.models.bcgplvm.BCGPLVM(Y, input_dim, init='PCA', X=None, kernel=None, normalize_Y=False, mapping=None)[source]¶
Bases: GPy.models.gplvm.GPLVM
Back-constrained Gaussian Process Latent Variable Model
Parameters: - Y (np.ndarray) – observed data
- input_dim (int) – latent dimensionality
- init (‘PCA’|’random’) – initialisation method for the latent space
- mapping (GPy.core.Mapping object) – mapping for back constraint
GPy.models.gp_classification module¶
class GPy.models.gp_classification.GPClassification(X, Y, kernel=None, Y_metadata=None)[source]¶
Bases: GPy.core.gp.GP
Gaussian Process classification
This is a thin wrapper around the models.GP class, with a set of sensible defaults
Parameters: - X – input observations
- Y – observed values, can be None if likelihood is not None
- kernel – a GPy kernel, defaults to rbf
Note
Multiple independent outputs are allowed using columns of Y
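For classification the latent GP output is squashed through a link function into a probability; a minimal numpy/stdlib sketch of the probit link (an illustration of the idea, not GPy code):

```python
import math
import numpy as np

def probit(f):
    """Probit link: map a latent function value f to a probability via the
    standard normal CDF, Phi(f) = 0.5 * (1 + erf(f / sqrt(2)))."""
    return 0.5 * (1.0 + np.vectorize(math.erf)(f / math.sqrt(2.0)))

# Latent values far below zero map near 0, far above zero near 1.
f = np.array([-3.0, 0.0, 3.0])
p = probit(f)
print(p)  # approximately [0.0013, 0.5, 0.9987]
```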
GPy.models.gp_coregionalized_regression module¶
class GPy.models.gp_coregionalized_regression.GPCoregionalizedRegression(X_list, Y_list, kernel=None, likelihoods_list=None, name='GPCR', W_rank=1, kernel_name='coreg')[source]¶
Bases: GPy.core.gp.GP
Gaussian Process model for heteroscedastic multioutput regression
This is a thin wrapper around the models.GP class, with a set of sensible defaults
Parameters: - X_list (list of numpy arrays) – list of input observations corresponding to each output
- Y_list (list of numpy arrays) – list of observed values related to the different noise models
- kernel (None | GPy.kernel) – a GPy kernel, defaults to RBF ** Coregionalized
- name (string) – model name
- W_rank (integer) – rank of the coregionalization parameter matrix ‘W’ (see the Coregionalize kernel documentation)
- kernel_name (string) – name of the kernel
- likelihoods_list (list of GPy.likelihoods | None) – a list of likelihoods, defaults to a list of Gaussian likelihoods
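The coregionalized model couples outputs through a coregionalization matrix B = W W^T + diag(kappa), where W has W_rank columns. A numpy sketch of that construction (an illustration, not GPy internals):

```python
import numpy as np

# Coregionalization matrix for D outputs: B = W W^T + diag(kappa),
# with W of shape (D, W_rank). B acts as a covariance between outputs.
rng = np.random.default_rng(0)
D, W_rank = 3, 1
W = rng.standard_normal((D, W_rank))
kappa = np.abs(rng.standard_normal(D))   # per-output independent variances
B = W @ W.T + np.diag(kappa)

# B must be symmetric positive definite to be a valid output covariance.
assert np.allclose(B, B.T)
assert np.all(np.linalg.eigvalsh(B) > 0)
```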
GPy.models.gp_heteroscedastic_regression module¶
class GPy.models.gp_heteroscedastic_regression.GPHeteroscedasticRegression(X, Y, kernel=None, Y_metadata=None)[source]¶
Bases: GPy.core.gp.GP
Gaussian Process model for heteroscedastic regression
This is a thin wrapper around the models.GP class, with a set of sensible defaults
Parameters: - X – input observations
- Y – observed values
- kernel – a GPy kernel, defaults to rbf
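Heteroscedastic regression gives every observation its own noise variance, so the Gaussian likelihood contributes diag(sigma_i^2) rather than a single sigma^2 * I. A numpy sketch of the resulting posterior mean (hardcoded RBF kernel for illustration; not GPy code):

```python
import numpy as np

def rbf(X1, X2, variance=1.0, lengthscale=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 10)[:, None]
Y = np.sin(2 * np.pi * X)
sigma2 = 0.01 + 0.1 * rng.random(10)   # one noise variance per observation

# Posterior mean at the training inputs: K (K + diag(sigma2))^{-1} Y.
K = rbf(X, X)
mean = K @ np.linalg.solve(K + np.diag(sigma2), Y)
assert mean.shape == Y.shape
```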
GPy.models.gp_kronecker_gaussian_regression module¶
class GPy.models.gp_kronecker_gaussian_regression.GPKroneckerGaussianRegression(X1, X2, Y, kern1, kern2, noise_var=1.0, name='KGPR')[source]¶
Bases: GPy.core.model.Model
Kronecker GP regression
Take two kernels computed on separate spaces K1(X1), K2(X2), and a data matrix Y of size (N1, N2).
The effective covariance is np.kron(K2, K1). The effective data is vec(Y) = Y.flatten(order='F').
The noise must be iid Gaussian.
See Stegle et al.:
@inproceedings{stegle2011efficient, title={Efficient inference in matrix-variate Gaussian models with iid observation noise}, author={Stegle, Oliver and Lippert, Christoph and Mooij, Joris M and Lawrence, Neil D and Borgwardt, Karsten M}, booktitle={Advances in Neural Information Processing Systems}, pages={630--638}, year={2011}}
predict(X1new, X2new)[source]¶ Return the predictive mean and variance at a series of new points X1new, X2new Only returns the diagonal of the predictive variance, for now.
Parameters: - X1new (np.ndarray, Nnew x self.input_dim1) – The points at which to make a prediction
- X2new (np.ndarray, Nnew x self.input_dim2) – The points at which to make a prediction
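The efficiency of the Kronecker model rests on the identity (K2 ⊗ K1) vec(Y) = vec(K1 Y K2^T), which avoids ever forming the (N1·N2) × (N1·N2) covariance. A numpy check of that identity:

```python
import numpy as np

# With vec(Y) = Y.flatten(order='F') (column-major), the Kronecker-vec
# identity says np.kron(K2, K1) @ vec(Y) == vec(K1 @ Y @ K2.T).
rng = np.random.default_rng(0)
N1, N2 = 4, 3
K1 = rng.standard_normal((N1, N1))
K2 = rng.standard_normal((N2, N2))
Y = rng.standard_normal((N1, N2))

full = np.kron(K2, K1) @ Y.flatten(order='F')   # O((N1*N2)^2) memory
fast = (K1 @ Y @ K2.T).flatten(order='F')       # never forms the Kronecker product
assert np.allclose(full, fast)
```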
GPy.models.gp_multioutput_regression module¶
GPy.models.gp_regression module¶
class GPy.models.gp_regression.GPRegression(X, Y, kernel=None, Y_metadata=None, normalizer=None)[source]¶
Bases: GPy.core.gp.GP
Gaussian Process model for regression
This is a thin wrapper around the models.GP class, with a set of sensible defaults
Parameters: - X – input observations
- Y – observed values
- kernel – a GPy kernel, defaults to rbf
- normalizer (Norm) – normalize Y with the norm given (default: False). If normalizer is False, no normalization is done; if it is None, Gaussian normalization is used.
Note
Multiple independent outputs are allowed using columns of Y
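A numpy sketch of the exact GP predictive equations this model implements (RBF kernel and noise level hardcoded for illustration; not GPy code):

```python
import numpy as np

def rbf(X1, X2, variance=1.0, lengthscale=0.2):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.random((15, 1))
Y = np.sin(6 * X) + 0.05 * rng.standard_normal((15, 1))
noise = 0.05**2

# Standard GP regression predictive equations at new points Xnew:
# mean = K* (K + s^2 I)^{-1} Y,  var = K** - K* (K + s^2 I)^{-1} K*^T (+ noise).
Xnew = np.linspace(0, 1, 5)[:, None]
K = rbf(X, X) + noise * np.eye(15)
Ks = rbf(Xnew, X)
mean = Ks @ np.linalg.solve(K, Y)
var = rbf(Xnew, Xnew) - Ks @ np.linalg.solve(K, Ks.T) + noise * np.eye(5)

assert mean.shape == (5, 1)
assert np.all(np.diag(var) > 0)
```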
GPy.models.gp_var_gauss module¶
class GPy.models.gp_var_gauss.GPVariationalGaussianApproximation(X, Y, kernel=None)[source]¶
Bases: GPy.core.model.Model
Implementation of "The Variational Gaussian Approximation Revisited" for regression
@article{Opper:2009, title={The Variational Gaussian Approximation Revisited}, author={Opper, Manfred and Archambeau, C{\'e}dric}, journal={Neural Comput.}, year={2009}, pages={786--792}}
GPy.models.gplvm module¶
class GPy.models.gplvm.GPLVM(Y, input_dim, init='PCA', X=None, kernel=None, name='gplvm')[source]¶
Bases: GPy.core.gp.GP
Gaussian Process Latent Variable Model
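A numpy sketch of the init='PCA' latent-space initialisation shared by the GPLVM family (an illustration of the idea, not GPy internals):

```python
import numpy as np

def pca_init(Y, input_dim):
    """Initialise latent points by projecting the centred data onto its
    top input_dim principal components (via SVD)."""
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:input_dim].T   # shape (n, input_dim)

rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 5))     # observed data, n x p
X = pca_init(Y, input_dim=2)

assert X.shape == (30, 2)
# SVD orders components by explained variance, so the first latent
# dimension carries at least as much variance as the second.
assert X.var(axis=0)[0] >= X.var(axis=0)[1]
```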
GPy.models.gradient_checker module¶
class GPy.models.gradient_checker.GradientChecker(f, df, x0, names=None, *args, **kwargs)[source]¶
Bases: GPy.core.model.Model
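GradientChecker compares an analytic gradient df against finite differences of f at x0. A numpy sketch of the underlying check using central differences (an illustration; check_grad here is a made-up helper, not GPy's API):

```python
import numpy as np

def check_grad(f, df, x0, eps=1e-6, tol=1e-5):
    """Return True when the analytic gradient df(x0) matches a central
    finite-difference estimate of the gradient of f at x0."""
    x0 = np.asarray(x0, dtype=float)
    num = np.empty_like(x0)
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step.flat[i] = eps
        num.flat[i] = (f(x0 + step) - f(x0 - step)) / (2 * eps)
    return np.allclose(num, df(x0), atol=tol)

f = lambda x: float(np.sum(x**2))
df = lambda x: 2 * x                       # correct gradient of sum(x^2)
assert check_grad(f, df, np.array([1.0, -2.0, 0.5]))
assert not check_grad(f, lambda x: 3 * x, np.array([1.0, 2.0]))  # wrong gradient
```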
GPy.models.mrd module¶
class GPy.models.mrd.MRD(Ylist, input_dim, X=None, X_variance=None, initx='PCA', initz='permute', num_inducing=10, Z=None, kernel=None, inference_method=None, likelihoods=None, name='mrd', Ynames=None, normalizer=False, stochastic=False, batchsize=10)[source]¶
Bases: GPy.models.bayesian_gplvm_minibatch.BayesianGPLVMMiniBatch
WARNING: This is bleeding-edge code and still in development. Functionality may change fundamentally during development!
Apply MRD to all given datasets Y in Ylist.
Y_i in [n x p_i]
If Ylist is a dictionary, the keys of the dictionary are the names, and the values are the different datasets to compare.
The samples n in the datasets need to match up, whereas the dimensionality p_d can differ.
Parameters: - Ylist ([array-like]) – List of datasets to apply MRD on
- input_dim (int) – latent dimensionality
- X (array-like) – mean of starting latent space q in [n x q]
- X_variance (array-like) – variance of starting latent space q in [n x q]
- initx ([‘concat’|’single’|’random’]) – initialisation method for the latent space:
- ‘concat’ - PCA on concatenation of all datasets
- ‘single’ - PCA on each dataset separately, concatenated afterwards
- ‘random’ - Random draw from a Normal(0,1)
- initz (‘permute’|’random’) – initialisation method for inducing inputs
- num_inducing – number of inducing inputs to use
- Z – initial inducing inputs
- kernel ([GPy.kernels.kernels] | GPy.kernels.kernels | None (default)) – list of kernels or kernel to copy for each output
- inference_method (GPy.inference.latent_function_inference | InferenceMethodList) – an InferenceMethodList of inferences, or one inference method for all datasets
- likelihoods ([GPy.likelihoods]) – the likelihoods to use
- name (str) – the name of this model
- Ynames ([str]) – the names for the datasets given; must be of equal length as Ylist, or None
- normalizer (bool|Norm) – how to normalize the data
- stochastic (bool) – whether this model should use stochastic gradient descent over the dimensions
- batchsize (int|[int]) – either one batchsize for all datasets, or one batchsize per dataset
plot_latent(labels=None, which_indices=None, resolution=50, ax=None, marker='o', s=40, fignum=None, plot_inducing=True, legend=True, plot_limits=None, aspect='auto', updates=False, predict_kwargs={}, imshow_kwargs={})[source]¶ See plotting.matplot_dep.dim_reduction_plots.plot_latent. If predict_kwargs is None, plots the latent space of the 0th dataset (and kernel); otherwise pass predict_kwargs=dict(Yindex='index') to plot only the latent space of the dataset with that index.
GPy.models.sparse_gp_classification module¶
class GPy.models.sparse_gp_classification.SparseGPClassification(X, Y=None, likelihood=None, kernel=None, Z=None, num_inducing=10, Y_metadata=None)[source]¶
Bases: GPy.core.sparse_gp.SparseGP
Sparse Gaussian Process model for classification
This is a thin wrapper around the sparse_GP class, with a set of sensible defaults
Parameters: - X – input observations
- Y – observed values
- likelihood – a GPy likelihood, defaults to Binomial with probit link function
- kernel – a GPy kernel, defaults to rbf+white
- normalize_X (False|True) – whether to normalize the input data before computing (predictions will be in original scales)
- normalize_Y (False|True) – whether to normalize the output data before computing (predictions will be in original scales)
Return type: model object
GPy.models.sparse_gp_coregionalized_regression module¶
class GPy.models.sparse_gp_coregionalized_regression.SparseGPCoregionalizedRegression(X_list, Y_list, Z_list=[], kernel=None, likelihoods_list=None, num_inducing=10, X_variance=None, name='SGPCR', W_rank=1, kernel_name='coreg')[source]¶
Bases: GPy.core.sparse_gp.SparseGP
Sparse Gaussian Process model for heteroscedastic multioutput regression
This is a thin wrapper around the SparseGP class, with a set of sensible defaults
Parameters: - X_list (list of numpy arrays) – list of input observations corresponding to each output
- Y_list (list of numpy arrays) – list of observed values related to the different noise models
- Z_list (empty list | list of numpy arrays) – list of inducing inputs (optional)
- kernel (None | GPy.kernel) – a GPy kernel, defaults to RBF ** Coregionalized
- num_inducing (integer | list of integers) – number of inducing inputs, defaults to 10 per output (ignored if Z_list is not empty)
- name (string) – model name
- W_rank (integer) – rank of the coregionalization parameter matrix ‘W’ (see the Coregionalize kernel documentation)
- kernel_name (string) – name of the kernel
- likelihoods_list (list of GPy.likelihoods | None) – a list of likelihoods, defaults to a list of Gaussian likelihoods
GPy.models.sparse_gp_minibatch module¶
class GPy.models.sparse_gp_minibatch.SparseGPMiniBatch(X, Y, Z, kernel, likelihood, inference_method=None, name='sparse gp', Y_metadata=None, normalizer=False, missing_data=False, stochastic=False, batchsize=1)[source]¶
Bases: GPy.core.gp.GP
A general purpose sparse GP model (created 3 Nov 2014 by maxz).
This model allows (approximate) inference using variational DTC or FITC (Gaussian likelihoods) as well as non-conjugate sparse methods based on these.
Parameters: - X (np.ndarray (num_data x input_dim)) – inputs
- likelihood (GPy.likelihood.(Gaussian | EP | Laplace)) – a likelihood instance, containing the observed data
- kernel (a GPy.kern.kern instance) – the kernel (covariance function); see the kernels documentation
- X_variance (np.ndarray (num_data x input_dim) | None) – the uncertainty in the measurements of X (Gaussian variance)
- Z (np.ndarray (num_inducing x input_dim)) – inducing inputs
- num_inducing (int) – number of inducing points (optional, default 10; ignored if Z is not None)
GPy.models.sparse_gp_multioutput_regression module¶
GPy.models.sparse_gp_regression module¶
class GPy.models.sparse_gp_regression.SparseGPRegression(X, Y, kernel=None, Z=None, num_inducing=10, X_variance=None, normalizer=None, mpi_comm=None)[source]¶
Bases: GPy.core.sparse_gp_mpi.SparseGP_MPI
Gaussian Process model for regression
This is a thin wrapper around the SparseGP class, with a set of sensible defaults
Parameters: - X – input observations
- Y – observed values
- kernel – a GPy kernel, defaults to rbf+white
- Z (np.ndarray (num_inducing x input_dim) | None) – inducing inputs (optional, see note)
- num_inducing (int) – number of inducing points (ignored if Z is passed, see note)
Return type: model object
Note
If no Z array is passed, num_inducing (default 10) points are selected from the data. Otherwise num_inducing is ignored.
Note
Multiple independent outputs are allowed using columns of Y
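Sparse GP methods such as variational DTC approximate the full covariance Kff through the inducing inputs Z via the Nystrom form Qff = Kfu Kuu^{-1} Kuf, which needs only O(N M^2) work for M inducing points. A numpy sketch (an illustration, not GPy internals):

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.5):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.random((50, 1))
Z = X[rng.choice(50, size=10, replace=False)]   # inducing inputs picked from the data

Kff = rbf(X, X)
Kfu = rbf(X, Z)
Kuu = rbf(Z, Z) + 1e-8 * np.eye(10)             # jitter for numerical stability
Qff = Kfu @ np.linalg.solve(Kuu, Kfu.T)         # Nystrom approximation of Kff

# Qff never overestimates Kff: the diagonal gap Kff - Qff is non-negative
# (it is a Schur complement of a PSD matrix).
assert np.all(np.diag(Kff - Qff) > -1e-8)
```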
class GPy.models.sparse_gp_regression.SparseGPRegressionUncertainInput(X, X_variance, Y, kernel=None, Z=None, num_inducing=10, normalizer=None)[source]¶
Bases: GPy.core.sparse_gp.SparseGP
Gaussian Process model for regression with Gaussian variance on the inputs (X_variance)
This is a thin wrapper around the SparseGP class, with a set of sensible defaults
GPy.models.sparse_gplvm module¶
class GPy.models.sparse_gplvm.SparseGPLVM(Y, input_dim, X=None, kernel=None, init='PCA', num_inducing=10)[source]¶
Bases: GPy.models.sparse_gp_regression.SparseGPRegression
Sparse Gaussian Process Latent Variable Model
Parameters: - Y (np.ndarray) – observed data
- input_dim (int) – latent dimensionality
- init (‘PCA’|’random’) – initialisation method for the latent space
GPy.models.ss_gplvm module¶
class GPy.models.ss_gplvm.SSGPLVM(Y, input_dim, X=None, X_variance=None, Gamma=None, init='PCA', num_inducing=10, Z=None, kernel=None, inference_method=None, likelihood=None, name='Spike_and_Slab GPLVM', group_spike=False, mpi_comm=None, pi=None, learnPi=True, normalizer=False, **kwargs)[source]¶
Bases: GPy.core.sparse_gp_mpi.SparseGP_MPI
Spike-and-Slab Gaussian Process Latent Variable Model
Parameters: - Y (np.ndarray| GPy.likelihood instance) – observed data (np.ndarray) or GPy.likelihood
- input_dim (int) – latent dimensionality
- init (‘PCA’|’random’) – initialisation method for the latent space
GPy.models.ss_mrd module¶
The Manifold Relevance Determination model with the spike-and-slab prior
class GPy.models.ss_mrd.SSMRD(Ylist, input_dim, X=None, X_variance=None, initx='PCA', initz='permute', num_inducing=10, Z=None, kernel=None, inference_method=None, likelihoods=None, name='ss_mrd', Ynames=None)[source]¶
Bases: GPy.core.model.Model
GPy.models.svigp_regression module¶
GPy.models.warped_gp module¶
class GPy.models.warped_gp.WarpedGP(X, Y, kernel=None, warping_function=None, warping_terms=3, normalize_X=False, normalize_Y=False)[source]¶
Bases: GPy.core.gp.GP
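WarpedGP learns a monotonic transformation of the outputs; the warping_terms parameter suggests a sum-of-tanh warping in the style of Snelson et al. (2004). A numpy sketch with made-up parameter values (an illustration of the idea, not GPy's warping API):

```python
import numpy as np

def warp(y, a, b, c):
    """Tanh warping f(y) = y + sum_i a_i * tanh(b_i * (y + c_i)).
    Monotonic (hence invertible) whenever all a_i, b_i > 0, which makes
    it usable as an output warping for a GP."""
    y = np.asarray(y, dtype=float)
    return y + sum(ai * np.tanh(bi * (y + ci)) for ai, bi, ci in zip(a, b, c))

# Three warping terms with illustrative (made-up) positive parameters.
a, b, c = [0.5, 0.3, 0.2], [1.0, 2.0, 0.5], [0.0, -1.0, 1.0]
y = np.linspace(-3, 3, 200)
z = warp(y, a, b, c)
assert np.all(np.diff(z) > 0)   # strictly increasing, so invertible
```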