Mirror of https://github.com/SheffieldML/GPy.git (synced 2026-04-24 20:36:23 +02:00)
v1.10.0 (#908)
* Update self.num_data in GP when X is updated
* Update appveyor.yml
* Update setup.cfg
* Stop using legacy bdist_wininst
* fix: reorder brackets to avoid an n^2 array
* Minor fix to the multioutput regression example, to clarify code and fix a typo.
* added missing import
* corrected typo in function name
* fixed docstring and added more explanation
* changed ordering of explanation to get to the point fast and provide additional details after
* self.num_data and self.input_dim are set dynamically in class GP() from the shape of X. In MRD, the user-specified values are passed around until X is defined.
* fixed technical description of gradients_X()
* brushed up wording
* fix normalizer
* fix ImportError in likelihood.py
in function log_predictive_density_sampling
* Update setup.py
bump min require version of scipy to 1.3.0
* Add cython into installation requirement
* Coregionalized regression bugfix (#824)
* route default arg W_rank correctly (Addresses #823)
* Drop Python 2.7 support (fix #833)
* travis, appveyor: Add Python 3.8 build
* README: Fix scipy version number
* setup.py: Install scipy < 1.5.0 when using Python 3.5
* plotting_tests.py: Use os.makedirs instead of matplotlib.cbook.mkdirs (fix #844)
* Use super().__init__ consistently, instead of sometimes calling base class __init__ directly
* README.md: Source formatting, one badge per line
* README.md: Remove broken landscape badge (fix #831)
* README.md: Badges for devel and deploy (fix #830)
* ignore intermediary sphinx restructured text
* ignore vs code project settings file
* add yml config for readthedocs
* correct path
* drop epub and pdf builds (as per main GPy)
* typo
* headings and structure
* update copyright
* restructuring and smartening
* remove dead links
* reorder package docs
* rst "markup"
* change rst syntax
* makes sense for core to go first
* add placeholder
* initial core docs, class diagram
* lower level detail
* higher res diagrams
* layout changes for diagrams
resolve conflict
* better syntax
* redundant block
* introduction
* inheritance diagrams
* more on models
* kernel docs to kern.src
* moved doc back from kern.src to kern
* kern not kern.src in index
* better kernel description
* likelihoods
* placeholder
* add plotting to docs index
* summarise plotting
* clarification
* neater contents
* architecture diagram
* using pods
* build with dot
* more on examples
* introduction for utils package
* compromise formatting for sphinx
* correct likelihood definition
* parameterization of priors
* latent function inference intro and format
* maint: Remove tabs (and some trailing spaces)
* dpgplvm.py: Wrap long line + remove tabs
* dpgplvm.py: Fix typo in the header
* maint: Wrap very long lines (> 450 chars)
* maint: Wrap very long lines (> 400 chars)
* Add the link to the api doc on the readme page.
* remove deprecated parameter
* Update README.md
* new: Added to_dict() method to Ornstein-Uhlenbeck (OU) kernel
* fix: minor typos in README !minor
* added python 3.9 build following 4aa2ea9f5e to address https://github.com/SheffieldML/GPy/issues/881
* updated cython-generated c files for python 3.9 via `pyenv virtualenv 3.9.1 gpy391 && pyenv activate gpy391 && python setup.py build --force`
* updated osx to macOS 10.15.7, JDK to 14.0.2, and XCode to Xcode 12.2 (#904)
The CI was broken. This commit fixes the CI. The root cause is reported in more detail in issue #905.
In short, the default macOS version used by TravisCI (10.13; see the TravisCI docs) is no longer supported by brew, which caused the `brew install pandoc` step in the download_miniconda.sh pre-install script to hang and time out the build. It failed even on inert PRs (e.g. adding a line to the README). With the macOS image updated from 10.13 to 10.15, brew is supported again, `brew install pandoc` succeeds, and the remainder of the CI build and test sequence runs to completion.
* incremented version
Co-authored-by: Masha Naslidnyk 🦉 <naslidny@amazon.co.uk>
Co-authored-by: Zhenwen Dai <zhenwendai@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
Co-authored-by: Mark McLeod <mark.mcleod@mindfoundry.ai>
Co-authored-by: Sigrid Passano Hellan <sighellan@gmail.com>
Co-authored-by: Antoine Blanchard <antoine@sand-lab-gpu.mit.edu>
Co-authored-by: kae_mihara <rukamihara@outlook.com>
Co-authored-by: lagph <49130858+lagph@users.noreply.github.com>
Co-authored-by: Julien Bect <julien.bect@centralesupelec.fr>
Co-authored-by: Neil Lawrence <ndl21@cam.ac.uk>
Co-authored-by: bobturneruk <bob.turner.uk@gmail.com>
Co-authored-by: bobturneruk <r.d.turner@sheffield.ac.uk>
Co-authored-by: gehbiszumeis <16896724+gehbiszumeis@users.noreply.github.com>
Parent commit: 92f2e87e7b
This commit: fa909768bd
72 changed files with 8568 additions and 14545 deletions
.gitignore (vendored), 6 additions:

@@ -50,3 +50,9 @@ iterate.dat
+# pycharm IDE stuff
+.idea/
+# docs
+GPy*.rst
+# vscode
+settings.json

.readthedocs.yml (new file), 24 additions:

@@ -0,0 +1,24 @@
+# .readthedocs.yml
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+# Build documentation in the docs/ directory with Sphinx
+sphinx:
+  configuration: doc/source/conf.py
+
+# Build documentation with MkDocs
+#mkdocs:
+#  configuration: mkdocs.yml
+
+# Optionally build your docs in additional formats such as PDF and ePub
+formats:
+  - htmlzip
+
+# Optionally set the version of Python and requirements required to build your docs
+python:
+  version: 3.7
+  install:
+    - requirements: doc/source/requirements.txt

@@ -1,5 +1,7 @@
 sudo: false

 osx_image: xcode12.2

 os:
 - osx
 - linux

@@ -14,12 +16,11 @@ addons:
 # - "$HOME/install/"

 env:
-- PYTHON_VERSION=2.7
 #- PYTHON_VERSION=3.3
 #- PYTHON_VERSION=3.4
 - PYTHON_VERSION=3.5
 - PYTHON_VERSION=3.6
 - PYTHON_VERSION=3.7
+- PYTHON_VERSION=3.8
+- PYTHON_VERSION=3.9

 before_install:
 - wget https://github.com/mzwiessele/travis_scripts/raw/master/download_miniconda.sh

@@ -1 +1 @@
-__version__ = "1.9.9"
+__version__ = "1.10.0"

@@ -1,6 +1,47 @@
 # Copyright (c) 2012-2014, GPy authors (see AUTHORS.txt).
 # Licensed under the BSD 3-clause license (see LICENSE.txt)

+"""
+Introduction
+^^^^^^^^^^^^
+
+This module contains the fundamental classes of GPy - classes that are
+inherited by objects in other parts of GPy in order to provide a
+consistent interface to major functionality.
+
+.. inheritance-diagram:: GPy.core.gp.GP
+   :top-classes: paramz.core.parameter_core.Parameterizable
+
+:py:class:`GPy.core.model` is inherited by
+:py:class:`GPy.core.gp.GP`, and :py:class:`GPy.core.model` itself
+inherits :py:class:`paramz.model.Model` from the `paramz`
+package. `paramz` essentially provides an inherited set of properties
+and functions used to manage state (and state changes) of the model.
+
+:py:class:`GPy.core.gp.GP` represents a GP model. Such an entity is
+typically passed variables representing known (x) and observed (y)
+data, along with a kernel and other information needed to create the
+specific model. It exposes functions which return information derived
+from the inputs to the model, for example predicting unobserved
+variables based on new known variables, or the log marginal likelihood
+of the current state of the model.
+
+:py:func:`~GPy.core.gp.GP.optimize` is called to optimize
+hyperparameters of the model. The optimizer argument takes a string
+which is used to specify non-default optimization schemes.
+
+Various plotting functions can be called against :py:class:`GPy.core.gp.GP`.
+
+.. inheritance-diagram:: GPy.core.gp_grid.GpGrid GPy.core.sparse_gp.SparseGP GPy.core.sparse_gp_mpi.SparseGP_MPI GPy.core.svgp.SVGP
+   :top-classes: GPy.core.gp.GP
+
+:py:class:`GPy.core.gp.GP` is used as the basis for classes supporting
+more specialized types of Gaussian process model. These are, however,
+generally still not specific enough to be called by the user and are
+inherited by members of the :py:class:`GPy.models` package.
+"""
+
 from GPy.core.model import Model
 from .parameterization import Param, Parameterized
 from . import parameterization

@@ -43,8 +43,6 @@ class GP(Model):
             self.X = X.copy()
         else: self.X = ObsAr(X)

-        self.num_data, self.input_dim = self.X.shape
-
         assert Y.ndim == 2
         logger.info("initializing Y")

@@ -199,6 +197,14 @@ class GP(Model):
     def _predictive_variable(self):
         return self.X

+    @property
+    def num_data(self):
+        return self.X.shape[0]
+
+    @property
+    def input_dim(self):
+        return self.X.shape[1]
+
     def set_XY(self, X=None, Y=None):
         """
         Set the input / output data of the model

@@ -235,6 +241,7 @@ class GP(Model):
             self.link_parameter(self.X, index=index)
         else:
             self.X = ObsAr(X)
+
         self.update_model(True)

     def set_X(self, X):

@@ -328,7 +335,7 @@ class GP(Model):
         of the output dimensions.

         Note: If you want the predictive quantiles (e.g. 95% confidence
-        interval) use :py:func:"~GPy.core.gp.GP.predict_quantiles".
+        interval) use :py:func:`~GPy.core.gp.GP.predict_quantiles`.
         """

         # Predict the latent function values

@@ -377,7 +384,7 @@ class GP(Model):
         If full_cov and self.input_dim > 1, the return shape of var is Nnew x Nnew x self.input_dim. If self.input_dim == 1, the return shape is Nnew x Nnew.
         This is to allow for different normalizations of the output dimensions.

-        Note: If you want the predictive quantiles (e.g. 95% confidence interval) use :py:func:"~GPy.core.gp.GP.predict_quantiles".
+        Note: If you want the predictive quantiles (e.g. 95% confidence interval) use :py:func:`~GPy.core.gp.GP.predict_quantiles`.
         """
         return self.predict(Xnew, full_cov, Y_metadata, kern, None, False)

@@ -451,6 +458,15 @@ class GP(Model):
         alpha = -2.*np.dot(kern.K(Xnew, self._predictive_variable),
                            self.posterior.woodbury_inv)
         var_jac += kern.gradients_X(alpha, Xnew, self._predictive_variable)

+        if self.normalizer is not None:
+            mean_jac = self.normalizer.inverse_mean(mean_jac) \
+                       - self.normalizer.inverse_mean(0.)
+            if self.output_dim > 1:
+                var_jac = self.normalizer.inverse_covariance(var_jac)
+            else:
+                var_jac = self.normalizer.inverse_variance(var_jac)
+
         return mean_jac, var_jac

     def predict_jacobian(self, Xnew, kern=None, full_cov=False):

@@ -587,9 +603,9 @@ class GP(Model):
         :param size: the number of a posteriori samples.
         :type size: int.
         :returns: set of simulations
         :rtype: np.ndarray (Nnew x D x samples)
         """
         predict_kwargs["full_cov"] = True  # Always use the full covariance for posterior samples.
         m, v = self._raw_predict(X, **predict_kwargs)
         if self.normalizer is not None:
             m, v = self.normalizer.inverse_mean(m), self.normalizer.inverse_variance(v)

@@ -711,11 +727,59 @@ class GP(Model):
         mu_star, var_star = self._raw_predict(x_test)
         return self.likelihood.log_predictive_density_sampling(y_test, mu_star, var_star, Y_metadata=Y_metadata, num_samples=num_samples)

-    def posterior_covariance_between_points(self, X1, X2):
+    def _raw_posterior_covariance_between_points(self, X1, X2):
         """
-        Computes the posterior covariance between points.
+        Computes the posterior covariance between points. Does not account for
+        normalization or likelihood.

         :param X1: some input observations
         :param X2: other input observations

         :returns:
             cov: raw posterior covariance: k(X1,X2) - k(X1,X) G^{-1} K(X,X2)
         """
         return self.posterior.covariance_between_points(self.kern, self.X, X1, X2)

+    def posterior_covariance_between_points(self, X1, X2, Y_metadata=None,
+                                            likelihood=None,
+                                            include_likelihood=True):
+        """
+        Computes the posterior covariance between points. Includes likelihood
+        variance as well as normalization so that evaluation at (x,x) is
+        consistent with model.predict.
+
+        :param X1: some input observations
+        :param X2: other input observations
+        :param Y_metadata: metadata about the predicting point to pass to the
+            likelihood
+        :param include_likelihood: Whether or not to add likelihood noise to
+            the predicted underlying latent function f.
+        :type include_likelihood: bool
+
+        :returns:
+            cov: posterior covariance, a Numpy array, Nnew x Nnew if
+            self.output_dim == 1, and Nnew x Nnew x self.output_dim otherwise.
+        """
+
+        cov = self._raw_posterior_covariance_between_points(X1, X2)
+
+        if include_likelihood:
+            # Predict latent mean and push through likelihood
+            mean, _ = self._raw_predict(X1, full_cov=True)
+            if likelihood is None:
+                likelihood = self.likelihood
+            _, cov = likelihood.predictive_values(mean, cov, full_cov=True,
+                                                  Y_metadata=Y_metadata)
+
+        if self.normalizer is not None:
+            if self.output_dim > 1:
+                cov = self.normalizer.inverse_covariance(cov)
+            else:
+                cov = self.normalizer.inverse_variance(cov)
+
+        return cov

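The raw posterior covariance quoted in the docstring above, k(X1,X2) - k(X1,X) G^{-1} K(X,X2), can be sketched independently of GPy with plain numpy. This is a minimal illustration only, assuming an RBF kernel and Gaussian noise; the helper names below are hypothetical and not part of GPy's API.

```python
import numpy as np

def rbf(A, B, variance=1.0, lengthscale=1.0):
    # Squared-exponential kernel k(a, b) = variance * exp(-|a - b|^2 / (2 l^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def raw_posterior_cov(X, noise_var, X1, X2):
    # k(X1,X2) - k(X1,X) G^{-1} k(X,X2), with G = k(X,X) + noise_var * I
    G = rbf(X, X) + noise_var * np.eye(len(X))
    return rbf(X1, X2) - rbf(X1, X) @ np.linalg.solve(G, rbf(X, X2))

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 1))    # training inputs
X1 = rng.normal(size=(4, 1))    # test inputs
cov = raw_posterior_cov(X, 0.1, X1, X1)
```

Evaluating at (X1, X1) yields a symmetric positive semi-definite matrix whose diagonal never exceeds the prior variance, reflecting the variance reduction from conditioning on the data.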
@@ -46,7 +46,7 @@ class GpGrid(GP):

         inference_method = gaussian_grid_inference.GaussianGridInference()

-        GP.__init__(self, X, Y, kernel, likelihood, inference_method=inference_method, name=name, Y_metadata=Y_metadata, normalizer=normalizer)
+        super(GpGrid, self).__init__(X, Y, kernel, likelihood, inference_method=inference_method, name=name, Y_metadata=Y_metadata, normalizer=normalizer)
         self.posterior = None

     def parameters_changed(self):

@@ -1,3 +1,14 @@
+"""
+Introduction
+^^^^^^^^^^^^
+
+Extends the functionality of the `paramz` package (dependency) to support parameterization of priors (:py:class:`GPy.core.parameterization.priors`).
+
+.. inheritance-diagram:: GPy.core.parameterization.priors
+   :top-classes: paramz.core.parameter_core.Parameterizable
+"""
+
 # Copyright (c) 2012-2014, GPy authors (see AUTHORS.txt).
 # Licensed under the BSD 3-clause license (see LICENSE.txt)

@@ -17,7 +17,7 @@ class Prior(object):
         if not cls._instance or cls._instance.__class__ is not cls:
             newfunc = super(Prior, cls).__new__
             if newfunc is object.__new__:
                 cls._instance = newfunc(cls)
             else:
                 cls._instance = newfunc(cls, *args, **kwargs)
         return cls._instance

@@ -58,9 +58,9 @@ class Gaussian(Prior):
                 return instance()
         newfunc = super(Prior, cls).__new__
         if newfunc is object.__new__:
             o = newfunc(cls)
         else:
             o = newfunc(cls, mu, sigma)
         cls._instances.append(weakref.ref(o))
         return cls._instances[-1]()

@@ -102,9 +102,9 @@ class Uniform(Prior):
                 return instance()
         newfunc = super(Prior, cls).__new__
         if newfunc is object.__new__:
             o = newfunc(cls)
         else:
             o = newfunc(cls, lower, upper)
         cls._instances.append(weakref.ref(o))
         return cls._instances[-1]()

@@ -282,7 +282,7 @@ class Gamma(Prior):
                 return instance()
         newfunc = super(Prior, cls).__new__
         if newfunc is object.__new__:
             o = newfunc(cls)
         else:
             o = newfunc(cls, a, b)
         cls._instances.append(weakref.ref(o))

@@ -542,8 +542,8 @@ class DGPLVM(Prior):
     """
     domain = _REAL

     def __new__(cls, sigma2, lbl, x_shape):
         return super(Prior, cls).__new__(cls, sigma2, lbl, x_shape)

     def __init__(self, sigma2, lbl, x_shape):

@@ -909,13 +909,13 @@ class DGPLVM_Lamda(Prior, Parameterized):
     # This function calculates log of our prior
     def lnpdf(self, x):
         x = x.reshape(self.x_shape)

         #!!!!!!!!!!!!!!!!!!!!!!!!!!!
         #self.lamda.values[:] = self.lamda.values/self.lamda.values.sum()

         xprime = x.dot(np.diagflat(self.lamda))
         x = xprime
         # print x
         cls = self.compute_cls(x)
         M_0 = np.mean(x, axis=0)
         M_i = self.compute_Mi(cls)

@@ -932,7 +932,7 @@ class DGPLVM_Lamda(Prior, Parameterized):
         x = x.reshape(self.x_shape)
         xprime = x.dot(np.diagflat(self.lamda))
         x = xprime
         # print x
         cls = self.compute_cls(x)
         M_0 = np.mean(x, axis=0)
         M_i = self.compute_Mi(cls)

@@ -964,14 +964,14 @@ class DGPLVM_Lamda(Prior, Parameterized):

         # Because of the GPy we need to transpose our matrix so that it gets the same shape as out matrix (denominator layout!!!)
         DPxprim_Dx = DPxprim_Dx.T

         DPxprim_Dlamda = DPx_Dx.dot(x)

         # Because of the GPy we need to transpose our matrix so that it gets the same shape as out matrix (denominator layout!!!)
         DPxprim_Dlamda = DPxprim_Dlamda.T

         self.lamda.gradient = np.diag(DPxprim_Dlamda)
         # print DPxprim_Dx
         return DPxprim_Dx

@@ -1046,7 +1046,7 @@ class DGPLVM_T(Prior):
         M_i = np.zeros((self.classnum, self.dim))
         for i in cls:
             # Mean of each class
             # class_i = np.multiply(cls[i],vec)
             class_i = cls[i]
             M_i[i] = np.mean(class_i, axis=0)
         return M_i

@@ -1155,7 +1155,7 @@ class DGPLVM_T(Prior):
         x = x.reshape(self.x_shape)
         xprim = x.dot(self.vec)
         x = xprim
         # print x
         cls = self.compute_cls(x)
         M_0 = np.mean(x, axis=0)
         M_i = self.compute_Mi(cls)

@@ -1163,7 +1163,7 @@ class DGPLVM_T(Prior):
         Sw = self.compute_Sw(cls, M_i)
         # Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
         #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
         #print 'SB_inv: ', Sb_inv_N
         #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]
         Sb_inv_N = pdinv(Sb+np.eye(Sb.shape[0])*0.1)[0]
         return (-1 / self.sigma2) * np.trace(Sb_inv_N.dot(Sw))

@@ -1172,8 +1172,8 @@ class DGPLVM_T(Prior):
     def lnpdf_grad(self, x):
         x = x.reshape(self.x_shape)
         xprim = x.dot(self.vec)
         x = xprim
         # print x
         cls = self.compute_cls(x)
         M_0 = np.mean(x, axis=0)
         M_i = self.compute_Mi(cls)

@@ -1188,7 +1188,7 @@ class DGPLVM_T(Prior):
         # Calculating inverse of Sb and its transpose and minus
         # Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
         #Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
         #print 'SB_inv: ',Sb_inv_N
         #Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]
         Sb_inv_N = pdinv(Sb+np.eye(Sb.shape[0])*0.1)[0]
         Sb_inv_N_trans = np.transpose(Sb_inv_N)

@@ -1375,4 +1375,5 @@ class StudentT(Prior):
     def rvs(self, n):
         from scipy.stats import t
         ret = t.rvs(self.nu, loc=self.mu, scale=self.sigma, size=n)
         return ret

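The `__new__` overrides in the prior classes above implement an instance cache: constructing a prior with parameters identical to a still-live instance returns that same object, tracked through weak references so dead instances do not accumulate. A minimal sketch of the idea (the `CachedPrior` class is hypothetical, not GPy's code):

```python
import weakref

class CachedPrior:
    """Return an existing instance when one with identical parameters is alive."""
    _instances = []

    def __new__(cls, a, b):
        # Reuse a live instance with the same parameters, mirroring the
        # weakref bookkeeping in the Gaussian/Uniform/Gamma __new__ methods.
        for ref in cls._instances:
            inst = ref()
            if inst is not None and (inst.a, inst.b) == (a, b):
                return inst
        o = super().__new__(cls)
        o.a, o.b = a, b
        cls._instances.append(weakref.ref(o))
        return o

p1 = CachedPrior(1.0, 2.0)
p2 = CachedPrior(1.0, 2.0)   # same parameters -> same cached object
p3 = CachedPrior(3.0, 4.0)   # different parameters -> new object
```

Weak references let garbage collection reclaim priors that are no longer attached to any parameter, while identical priors in use share one object.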
@@ -53,7 +53,7 @@ class SparseGP(GP):
         self.Z = Param('inducing inputs', Z)
         self.num_inducing = Z.shape[0]

-        GP.__init__(self, X, Y, kernel, likelihood, mean_function, inference_method=inference_method, name=name, Y_metadata=Y_metadata, normalizer=normalizer)
+        super(SparseGP, self).__init__(X, Y, kernel, likelihood, mean_function, inference_method=inference_method, name=name, Y_metadata=Y_metadata, normalizer=normalizer)

         logger.info("Adding Z as parameter")
         self.link_parameter(self.Z, index=0)

@@ -1,10 +1,16 @@
 # Copyright (c) 2012-2014, GPy authors (see AUTHORS.txt).
 # Licensed under the BSD 3-clause license (see LICENSE.txt)
 """
-Examples for GPy.
+Introduction
+^^^^^^^^^^^^
+
 The examples in this package usually depend on `pods <https://github.com/sods/ods>`_ so make sure
-you have that installed before running examples.
+you have that installed before running examples. The easiest way to do this is to run `pip install pods`. `pods` enables access to 3rd party data required for most of the examples.
+
+The examples are executable and self-contained workflows in that they have their own source data, create their own models, kernels and other objects as needed, execute optimisation as required, and display output.
+
+Viewing the source code of each model will clarify the steps taken in its execution, and may provide inspiration for developing user-specific applications of `GPy`.
 """
 from . import classification
 from . import regression

@@ -620,8 +620,10 @@ def multioutput_gp_with_derivative_observations():

     # Then create the model, we give everything in lists, the order of the inputs indicates the order of the outputs
     # Now we have the regular observations first and derivative observations second, meaning that the kernels and
-    # the likelihoods must follow the same order. Crosscovariances are automatically taken car of
-    m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd], kernel_list=[se, se_der], likelihood_list = [gauss, gauss])
+    # the likelihoods must follow the same order. Crosscovariances are automatically taken care of
+    m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd],
+                                 kernel_list=[se, se_der],
+                                 likelihood_list=[gauss, gauss_der])

     # Optimize the model
     m.optimize(messages=0, ipython_notebook=False)

@@ -1,3 +1,11 @@
+"""
+Introduction
+^^^^^^^^^^^^
+
+
+"""
+
 from . import optimization
 from . import latent_function_inference
 from . import mcmc

@@ -1,23 +1,30 @@
 # Copyright (c) 2012-2014, Max Zwiessele, James Hensman
 # Licensed under the BSD 3-clause license (see LICENSE.txt)

-__doc__ = """
+"""
+Introduction
+^^^^^^^^^^^^
+
+Certain :py:class:`GPy.models` can be instantiated with an `inference_method`. This submodule contains objects that can be assigned to `inference_method`.
+
 Inference over Gaussian process latent functions
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-In all our GP models, the consistency propery means that we have a Gaussian
-prior over a finite set of points f. This prior is
+In all our GP models, the consistency property means that we have a Gaussian
+prior over a finite set of points f. This prior is:

-math:: N(f | 0, K)
+.. math::
+   N(f | 0, K)

-where K is the kernel matrix.
+where :math:`K` is the kernel matrix.

-We also have a likelihood (see GPy.likelihoods) which defines how the data are
-related to the latent function: p(y | f). If the likelihood is also a Gaussian,
-the inference over f is tractable (see exact_gaussian_inference.py).
+We also have a likelihood (see :py:class:`GPy.likelihoods`) which defines how the data are
+related to the latent function: :math:`p(y | f)`. If the likelihood is also a Gaussian,
+the inference over :math:`f` is tractable (see :py:class:`GPy.inference.latent_function_inference.exact_gaussian_inference`).

 If the likelihood object is something other than Gaussian, then exact inference
-is not tractable. We then resort to a Laplace approximation (laplace.py) or
-expectation propagation (ep.py).
+is not tractable. We then resort to a Laplace approximation (:py:class:`GPy.inference.latent_function_inference.laplace`) or
+expectation propagation (:py:class:`GPy.inference.latent_function_inference.expectation_propagation`).

 The inference methods return a
 :class:`~GPy.inference.latent_function_inference.posterior.Posterior`

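The docstring above states that with the prior N(f | 0, K) and a Gaussian likelihood p(y | f) = N(y | f, σ²I), inference over f is tractable in closed form. A minimal numpy sketch under those assumptions (RBF kernel with illustrative parameters; not GPy's exact_gaussian_inference implementation):

```python
import numpy as np

def rbf(A, B, lengthscale=0.2):
    # Squared-exponential kernel on 1-D inputs
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def exact_gaussian_inference(X, y, noise_var):
    # Prior: f ~ N(0, K); likelihood: y | f ~ N(f, noise_var * I)
    K = rbf(X, X)
    G = K + noise_var * np.eye(len(X))
    mean = K @ np.linalg.solve(G, y)     # posterior mean at the training inputs
    cov = K - K @ np.linalg.solve(G, K)  # posterior covariance at the training inputs
    return mean, cov

X = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * X)
mean, cov = exact_gaussian_inference(X, y, noise_var=1e-4)
```

The posterior mean shrinks the observations toward the zero prior mean, and the posterior covariance is the prior covariance minus a positive semi-definite correction.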
@@ -142,8 +142,7 @@ class VarDTC(LatentFunctionInference):
         Cpsi1Vf, _ = dtrtrs(Lm, tmp, lower=1, trans=1)

         # data fit and derivative of L w.r.t. Kmm
-        dL_dm = -np.dot((_LBi_Lmi_psi1.T.dot(_LBi_Lmi_psi1))
-                        - np.eye(Y.shape[0]), VVT_factor)
+        dL_dm = -_LBi_Lmi_psi1.T.dot(_LBi_Lmi_psi1.dot(VVT_factor)) + VVT_factor

         delit = tdot(_LBi_Lmi_psi1Vf)
         data_fit = np.trace(delit)

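The hunk above (the "reorder brackets to avoid an n^2 array" fix) replaces -(AᵀA - I)V with -Aᵀ(AV) + V, which is algebraically identical but never materialises the n x n matrix AᵀA. A quick numerical check of the equivalence, where the names and shapes are illustrative stand-ins for the variables in the diff:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d = 5, 200, 3
A = rng.normal(size=(m, n))   # stands in for _LBi_Lmi_psi1
V = rng.normal(size=(n, d))   # stands in for VVT_factor

# Original form: materialises an n x n intermediate, O(n^2) memory
old = -np.dot(A.T.dot(A) - np.eye(n), V)

# Reordered form from the diff: only m x n and m x d intermediates
new = -A.T.dot(A.dot(V)) + V
```

With n data points the original builds a 200 x 200 array; the reordered form keeps every intermediate at most m x n or n x d.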
@@ -1,10 +1,36 @@
 """
-Kernel module the kernels to sit in.
-
-.. automodule:: .src
-   :members:
-   :private-members:
+Introduction
+^^^^^^^^^^^^
+
+In terms of Gaussian Processes, a kernel is a function that specifies
+the degree of similarity between variables given their relative
+positions in parameter space. If known variables *x* and *x'* are
+close together then observed variables *y* and *y'* may also be
+similar, depending on the kernel function and its parameters. *Note:
+this may be too simple a definition for the broad range of kernels
+available in :py:class:`GPy`.*
+
+:py:class:`GPy.kern.src.kern.Kern` is a generic kernel object
+inherited by more specific, end-user kernels used in models. It
+provides methods that specific kernels should generally have such as
+:py:class:`GPy.kern.src.kern.Kern.K` to compute the value of the
+kernel, :py:class:`GPy.kern.src.kern.Kern.add` to combine kernels and
+numerous functions providing information on kernel gradients.
+
+There are several inherited types of kernel that provide a basis for specific end user kernels:
+
+.. inheritance-diagram:: GPy.kern.src.kern.Kern GPy.kern.src.static GPy.kern.src.stationary GPy.kern.src.kern.CombinationKernel GPy.kern.src.brownian GPy.kern.src.linear GPy.kern.src.standard_periodic
+   :top-classes: GPy.core.parameterization.parameterized.Parameterized
+
+e.g. the archetype :py:class:`GPy.kern.RBF` does not inherit directly from :py:class:`GPy.kern.src.kern.Kern`, but from :py:class:`GPy.kern.src.stationary`.
+
+.. inheritance-diagram:: GPy.kern.src.kern.Kern GPy.kern.RBF
+   :top-classes: GPy.core.parameterization.parameterized.Parameterized
 """

 from .src.kern import Kern
 from .src.add import Add
 from .src.prod import Prod

@@ -45,4 +71,4 @@ from .src.sde_stationary import sde_RBF,sde_Exponential,sde_RatQuad
 from .src.sde_brownian import sde_Brownian
 from .src.multioutput_kern import MultioutputKern
 from .src.multioutput_derivative_kern import MultioutputDerivativeKern
 from .src.diff_kern import DiffKern

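The kernel description added above - similarity decaying with distance in input space - can be illustrated with the standard squared-exponential (RBF) form. This is a sketch of the textbook parameterization, not GPy's `GPy.kern.RBF` object itself:

```python
import numpy as np

def rbf_k(x, x_prime, variance=1.0, lengthscale=1.0):
    # k(x, x') = variance * exp(-(x - x')^2 / (2 * lengthscale^2))
    return variance * np.exp(-0.5 * (x - x_prime) ** 2 / lengthscale ** 2)

near = rbf_k(0.0, 0.1)  # inputs close together -> similarity near the variance
far = rbf_k(0.0, 5.0)   # inputs far apart -> similarity near zero
```

The lengthscale controls how quickly similarity falls off with distance, and the variance sets the kernel's value at zero distance.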
@@ -20,7 +20,7 @@ class ODE_t(Kern):
         self.link_parameters(self.a, self.c, self.variance_Yt, self.lengthscale_Yt, self.ubias)

     def K(self, X, X2=None):
         """Compute the covariance matrix between X and X2."""
         X,slices = X[:,:-1],index_to_slices(X[:,-1])
         if X2 is None:
             X2,slices2 = X,slices

@@ -31,9 +31,9 @@ class ODE_t(Kern):
         tdist = (X[:,0][:,None] - X2[:,0][None,:])**2
         ttdist = (X[:,0][:,None] - X2[:,0][None,:])

         vyt = self.variance_Yt

         lyt=1/(2*self.lengthscale_Yt)

         a = -self.a

@@ -69,10 +69,10 @@ class ODE_t(Kern):
         lyt = 1./(2*self.lengthscale_Yt)

         a = -self.a
         c = self.c

         k1 = (2*lyt )*vyt

         Kdiag = np.zeros(X.shape[0])
         slices = index_to_slices(X[:,-1])

@@ -106,7 +106,7 @@ class ODE_t(Kern):
         tdist = (X[:,0][:,None] - X2[:,0][None,:])**2
         ttdist = (X[:,0][:,None] - X2[:,0][None,:])
         #rdist = [tdist,xdist]

         rd=tdist.shape[0]

         dka = np.zeros([rd,rd])

@@ -146,7 +146,7 @@ class ODE_t(Kern):
         elif i==1 and j==1:
             dkYdvart[ss1,ss2] = (k1(tdist[ss1,ss2]) + 1. )* kyy(tdist[ss1,ss2])
             dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])*( k1(tdist[ss1,ss2]) + 1. ) +\
                                 vyt*kyy(tdist[ss1,ss2])*dk1dlyt(tdist[ss1,ss2])
             dkdubias[ss1,ss2] = 1
         else:
             dkYdvart[ss1,ss2] = (-k4(ttdist[ss1,ss2])+1)*kyy(tdist[ss1,ss2])

@@ -156,10 +156,10 @@ class ODE_t(Kern):
             dkdubias[ss1,ss2] = 0
             #dkYdlent[ss1,ss2] = vyt*dkyydlyt(tdist[ss1,ss2])* (-2*lyt*(ttdist[ss1,ss2])+1.)+\
             #vyt*kyy(tdist[ss1,ss2])*(-2)*(ttdist[ss1,ss2])

         self.variance_Yt.gradient = np.sum(dkYdvart * dL_dK)
         self.lengthscale_Yt.gradient = np.sum(dkYdlent*(-0.5*self.lengthscale_Yt**(-2)) * dL_dK)
         self.ubias.gradient = np.sum(dkdubias * dL_dK)

@@ -1 +1,2 @@
+
 from . import psi_comp

@@ -20,12 +20,13 @@ class Coregionalize(Kern):
     Covariance function for intrinsic/linear coregionalization models

     This covariance has the form:

     .. math::
-       \mathbf{B} = \mathbf{W}\mathbf{W}^\top + \text{diag}(kappa)
+       \mathbf{B} = \mathbf{W}\mathbf{W}^\intercal + \mathrm{diag}(kappa)

     An intrinsic/linear coregionalization covariance function of the form:

     .. math::
        k_2(x, y)=\mathbf{B} k(x, y)

     it is obtained as the tensor product between a covariance function

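The coregionalization matrix in the docstring above, B = W Wᵀ + diag(kappa), is straightforward to form directly. A numpy sketch with illustrative values (three outputs, rank-2 mixing):

```python
import numpy as np

W = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 2.0]])           # num_outputs x rank mixing weights
kappa = np.array([0.1, 0.2, 0.3])    # per-output independent variance

# Coregionalization matrix: low-rank cross-output covariance plus a diagonal
B = W @ W.T + np.diag(kappa)
```

With kappa strictly positive, B is symmetric positive definite, so B k(x, y) remains a valid multi-output covariance.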
(One file diff suppressed because it is too large.)

@@ -10,67 +10,67 @@ from paramz.caching import Cache_this

class GridKern(Stationary):

def __init__(self, input_dim, variance, lengthscale, ARD, active_dims, name, originalDimensions, useGPU=False):
super(GridKern, self).__init__(input_dim, variance, lengthscale, ARD, active_dims, name, useGPU=useGPU)
self.originalDimensions = originalDimensions
def __init__(self, input_dim, variance, lengthscale, ARD, active_dims, name, originalDimensions, useGPU=False):
super(GridKern, self).__init__(input_dim, variance, lengthscale, ARD, active_dims, name, useGPU=useGPU)
self.originalDimensions = originalDimensions

@Cache_this(limit=3, ignore_args=())
def dKd_dVar(self, X, X2=None):
"""
Derivative of Kernel function wrt variance applied on inputs X and X2.
In the stationary case there is an inner function depending on the
distances from X to X2, called r.
@Cache_this(limit=3, ignore_args=())
def dKd_dVar(self, X, X2=None):
"""
Derivative of Kernel function wrt variance applied on inputs X and X2.
In the stationary case there is an inner function depending on the
distances from X to X2, called r.

dKd_dVar(X, X2) = dKdVar_of_r((X-X2)**2)
"""
r = self._scaled_dist(X, X2)
return self.dKdVar_of_r(r)
dKd_dVar(X, X2) = dKdVar_of_r((X-X2)**2)
"""
r = self._scaled_dist(X, X2)
return self.dKdVar_of_r(r)

@Cache_this(limit=3, ignore_args=())
def dKd_dLen(self, X, dimension, lengthscale, X2=None):
"""
Derivative of Kernel function wrt lengthscale applied on inputs X and X2.
In the stationary case there is an inner function depending on the
distances from X to X2, called r.
@Cache_this(limit=3, ignore_args=())
def dKd_dLen(self, X, dimension, lengthscale, X2=None):
"""
Derivative of Kernel function wrt lengthscale applied on inputs X and X2.
In the stationary case there is an inner function depending on the
distances from X to X2, called r.

dKd_dLen(X, X2) = dKdLen_of_r((X-X2)**2)
"""
r = self._scaled_dist(X, X2)
return self.dKdLen_of_r(r, dimension, lengthscale)
dKd_dLen(X, X2) = dKdLen_of_r((X-X2)**2)
"""
r = self._scaled_dist(X, X2)
return self.dKdLen_of_r(r, dimension, lengthscale)

class GridRBF(GridKern):
"""
Similar to regular RBF but supplemented with methods required for Gaussian grid regression
Radial Basis Function kernel, aka squared-exponential, exponentiated quadratic or Gaussian kernel:
"""
Similar to regular RBF but supplemented with methods required for Gaussian grid regression
Radial Basis Function kernel, aka squared-exponential, exponentiated quadratic or Gaussian kernel:

.. math::
.. math::

k(r) = \sigma^2 \exp \\bigg(- \\frac{1}{2} r^2 \\bigg)
k(r) = \sigma^2 \exp \\bigg(- \\frac{1}{2} r^2 \\bigg)

"""
_support_GPU = True
def __init__(self, input_dim, variance=1., lengthscale=None, ARD=False, active_dims=None, name='gridRBF', originalDimensions=1, useGPU=False):
super(GridRBF, self).__init__(input_dim, variance, lengthscale, ARD, active_dims, name, originalDimensions, useGPU=useGPU)
"""
_support_GPU = True
def __init__(self, input_dim, variance=1., lengthscale=None, ARD=False, active_dims=None, name='gridRBF', originalDimensions=1, useGPU=False):
super(GridRBF, self).__init__(input_dim, variance, lengthscale, ARD, active_dims, name, originalDimensions, useGPU=useGPU)

def K_of_r(self, r):
return (self.variance**(float(1)/self.originalDimensions)) * np.exp(-0.5 * r**2)
def K_of_r(self, r):
return (self.variance**(float(1)/self.originalDimensions)) * np.exp(-0.5 * r**2)

def dKdVar_of_r(self, r):
"""
Compute derivative of kernel wrt variance
"""
return np.exp(-0.5 * r**2)
def dKdVar_of_r(self, r):
"""
Compute derivative of kernel wrt variance
"""
return np.exp(-0.5 * r**2)

def dKdLen_of_r(self, r, dimCheck, lengthscale):
"""
Compute derivative of kernel for dimension wrt lengthscale
Computation of derivative changes when lengthscale corresponds to
the dimension of the kernel whose derivative is being computed.
"""
if (dimCheck == True):
return (self.variance**(float(1)/self.originalDimensions)) * np.exp(-0.5 * r**2) * (r**2) / (lengthscale**(float(1)/self.originalDimensions))
else:
return (self.variance**(float(1)/self.originalDimensions)) * np.exp(-0.5 * r**2) / (lengthscale**(float(1)/self.originalDimensions))
def dKdLen_of_r(self, r, dimCheck, lengthscale):
"""
Compute derivative of kernel for dimension wrt lengthscale
Computation of derivative changes when lengthscale corresponds to
the dimension of the kernel whose derivative is being computed.
"""
if (dimCheck == True):
return (self.variance**(float(1)/self.originalDimensions)) * np.exp(-0.5 * r**2) * (r**2) / (lengthscale**(float(1)/self.originalDimensions))
else:
return (self.variance**(float(1)/self.originalDimensions)) * np.exp(-0.5 * r**2) / (lengthscale**(float(1)/self.originalDimensions))

def dK_dr(self, r):
return -r*self.K_of_r(r)
def dK_dr(self, r):
return -r*self.K_of_r(r)
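The `variance**(1/originalDimensions)` factor in `K_of_r` above reflects the grid construction: the full kernel is a product of D one-dimensional factors, so each factor carries the D-th root of the variance. A small NumPy sketch (assumptions: D identical factors and a shared distance `r` per dimension; GPy's grid machinery handles the general case):

```python
import numpy as np

def grid_rbf_factor(r, variance, D):
    """One of D per-dimension RBF factors; variance is split as variance**(1/D)."""
    return variance ** (1.0 / D) * np.exp(-0.5 * r ** 2)

D = 3
variance = 2.0
r = np.array([0.0, 1.0])

# Product over D identical factors recovers the full RBF value
# sigma^2 * exp(-0.5 * sum_d r_d^2) with r_d = r in every dimension.
full = np.prod([grid_rbf_factor(r, variance, D) for _ in range(D)], axis=0)
assert np.allclose(full, variance * np.exp(-0.5 * D * r ** 2))
```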
@@ -434,7 +434,14 @@ class CombinationKernel(Kern):
[setitem(i_s, (i, k._all_dims_active), k.input_sensitivity(summarize)) for i, k in enumerate(parts)]
return i_s
else:
raise NotImplementedError("Choose the kernel you want to get the sensitivity for. You need to override the default behaviour for getting the input sensitivity to be able to get the input sensitivity. For sum kernel it is the sum of all sensitivities, TODO: product kernel? Other kernels?, also TODO: shall we return all the sensitivities here in the combination kernel? So we can combine them however we want? This could lead to just plot all the sensitivities here...")
raise NotImplementedError("Choose the kernel you want to get the sensitivity for. "
"You need to override the default behaviour for getting "
"the input sensitivity to be able to get the input sensitivity. "
"For sum kernel it is the sum of all sensitivities, "
"TODO: product kernel? Other kernels?, also "
"TODO: shall we return all the sensitivities here in the combination "
"kernel? So we can combine them however we want? "
"This could lead to just plot all the sensitivities here...")

def _check_active_dims(self, X):
return
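The sum-kernel behaviour the error message describes (the input sensitivity of a sum kernel is the sum of its parts' sensitivities, scattered into each part's active dimensions) can be sketched as follows. `sum_kernel_sensitivity` and its argument shapes are hypothetical, not GPy's API:

```python
import numpy as np

def sum_kernel_sensitivity(part_sens, active_dims, total_dims):
    """Scatter each part's sensitivity into its active dims and sum overlaps."""
    out = np.zeros(total_dims)
    for sens, dims in zip(part_sens, active_dims):
        out[dims] += sens
    return out

# Two parts: one acting on dims (0, 1), one on dim (1,) only.
s = sum_kernel_sensitivity([np.array([1.0, 2.0]), np.array([0.5])],
                           [np.array([0, 1]), np.array([1])],
                           total_dims=3)
assert np.allclose(s, [1.0, 2.5, 0.0])
```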
@@ -448,12 +448,34 @@ class PSICOMP_SSRBF_GPU(PSICOMP_RBF):
dL_dpsi0_sum = dL_dpsi0.sum()

self.reset_derivative()
# t=self.g_psi1compDer(dvar_gpu,dl_gpu,dZ_gpu,dmu_gpu,dS_gpu,dL_dpsi1_gpu,psi1_gpu, np.float64(variance),l_gpu,Z_gpu,mu_gpu,S_gpu, np.int32(N), np.int32(M), np.int32(Q), block=(self.threadnum,1,1), grid=(self.blocknum,1),time_kernel=True)
# t=self.g_psi1compDer(dvar_gpu,dl_gpu,dZ_gpu,dmu_gpu,dS_gpu,dL_dpsi1_gpu,psi1_gpu,
# np.float64(variance),l_gpu,Z_gpu,mu_gpu,S_gpu, np.int32(N),
# np.int32(M), np.int32(Q), block=(self.threadnum,1,1),
# grid=(self.blocknum,1),time_kernel=True)
# print 'g_psi1compDer '+str(t)
# t=self.g_psi2compDer(dvar_gpu,dl_gpu,dZ_gpu,dmu_gpu,dS_gpu,dL_dpsi2_gpu,psi2n_gpu, np.float64(variance),l_gpu,Z_gpu,mu_gpu,S_gpu, np.int32(N), np.int32(M), np.int32(Q), block=(self.threadnum,1,1), grid=(self.blocknum,1),time_kernel=True)
# t=self.g_psi2compDer(dvar_gpu,dl_gpu,dZ_gpu,dmu_gpu,dS_gpu,dL_dpsi2_gpu,psi2n_gpu,
# np.float64(variance),l_gpu,Z_gpu,mu_gpu,S_gpu, np.int32(N),
# np.int32(M), np.int32(Q), block=(self.threadnum,1,1),
# grid=(self.blocknum,1),time_kernel=True)
# print 'g_psi2compDer '+str(t)
self.g_psi1compDer.prepared_call((self.blocknum,1),(self.threadnum,1,1),dvar_gpu.gpudata,dl_gpu.gpudata,dZ_gpu.gpudata,dmu_gpu.gpudata,dS_gpu.gpudata,dgamma_gpu.gpudata,dL_dpsi1_gpu.gpudata,psi1_gpu.gpudata, log_denom1_gpu.gpudata, log_gamma_gpu.gpudata, log_gamma1_gpu.gpudata, np.float64(variance),l_gpu.gpudata,Z_gpu.gpudata,mu_gpu.gpudata,S_gpu.gpudata,gamma_gpu.gpudata,np.int32(N), np.int32(M), np.int32(Q))
self.g_psi2compDer.prepared_call((self.blocknum,1),(self.threadnum,1,1),dvar_gpu.gpudata,dl_gpu.gpudata,dZ_gpu.gpudata,dmu_gpu.gpudata,dS_gpu.gpudata,dgamma_gpu.gpudata,dL_dpsi2_gpu.gpudata,psi2n_gpu.gpudata, log_denom2_gpu.gpudata, log_gamma_gpu.gpudata, log_gamma1_gpu.gpudata, np.float64(variance),l_gpu.gpudata,Z_gpu.gpudata,mu_gpu.gpudata,S_gpu.gpudata,gamma_gpu.gpudata,np.int32(N), np.int32(M), np.int32(Q))
self.g_psi1compDer.prepared_call((self.blocknum,1), (self.threadnum,1,1),
dvar_gpu.gpudata, dl_gpu.gpudata, dZ_gpu.gpudata,
dmu_gpu.gpudata, dS_gpu.gpudata, dgamma_gpu.gpudata,
dL_dpsi1_gpu.gpudata, psi1_gpu.gpudata,
log_denom1_gpu.gpudata, log_gamma_gpu.gpudata,
log_gamma1_gpu.gpudata, np.float64(variance),
l_gpu.gpudata, Z_gpu.gpudata, mu_gpu.gpudata,
S_gpu.gpudata, gamma_gpu.gpudata, np.int32(N),
np.int32(M), np.int32(Q))
self.g_psi2compDer.prepared_call((self.blocknum,1), (self.threadnum,1,1),
dvar_gpu.gpudata, dl_gpu.gpudata, dZ_gpu.gpudata,
dmu_gpu.gpudata, dS_gpu.gpudata, dgamma_gpu.gpudata,
dL_dpsi2_gpu.gpudata, psi2n_gpu.gpudata,
log_denom2_gpu.gpudata, log_gamma_gpu.gpudata,
log_gamma1_gpu.gpudata, np.float64(variance),
l_gpu.gpudata, Z_gpu.gpudata, mu_gpu.gpudata,
S_gpu.gpudata, gamma_gpu.gpudata, np.int32(N),
np.int32(M), np.int32(Q))

dL_dvar = dL_dpsi0_sum + gpuarray.sum(dvar_gpu).get()
sum_axis(grad_mu_gpu,dmu_gpu,N*Q,self.blocknum)

@@ -468,7 +490,6 @@ class PSICOMP_SSRBF_GPU(PSICOMP_RBF):
dL_dlengscale = grad_l_gpu.get()
else:
dL_dlengscale = gpuarray.sum(dl_gpu).get()

return dL_dvar, dL_dlengscale, dL_dZ, dL_dmu, dL_dS, dL_dgamma

return dL_dvar, dL_dlengscale, dL_dZ, dL_dmu, dL_dS, dL_dgamma
@@ -412,7 +412,6 @@ class Exponential(Stationary):
# return (F, L, Qc, H, Pinf)


class OU(Stationary):
"""
OU kernel:

@@ -426,6 +425,23 @@ class OU(Stationary):
def __init__(self, input_dim, variance=1., lengthscale=None, ARD=False, active_dims=None, name='OU'):
super(OU, self).__init__(input_dim, variance, lengthscale, ARD, active_dims, name)

def to_dict(self):
"""
Convert the object into a json serializable dictionary.

Note: It uses the private method _save_to_input_dict of the parent.

:return dict: json serializable dictionary containing the needed information to instantiate the object
"""
input_dict = super(OU, self)._save_to_input_dict()
input_dict["class"] = "GPy.kern.OU"
return input_dict

@staticmethod
def _build_from_input_dict(kernel_class, input_dict):
useGPU = input_dict.pop('useGPU', None)
return OU(**input_dict)

def K_of_r(self, r):
return self.variance * np.exp(-r)
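The OU (Ornstein-Uhlenbeck, i.e. Matern-1/2) kernel added above evaluates :math:`k(r) = \sigma^2 e^{-r}` on scaled distances. A minimal standalone NumPy sketch of the same evaluation (GPy computes `r` via `_scaled_dist`; this reimplements it for 1-D inputs only):

```python
import numpy as np

def ou_K(X, X2, variance=1.0, lengthscale=1.0):
    """OU kernel matrix on 1-D inputs: variance * exp(-|x - x'| / lengthscale)."""
    r = np.abs(X[:, None] - X2[None, :]) / lengthscale
    return variance * np.exp(-r)

X = np.array([0.0, 1.0])
K = ou_K(X, X)
assert np.allclose(np.diag(K), 1.0)        # k(0) = variance
assert np.isclose(K[0, 1], np.exp(-1.0))   # decay over one lengthscale
```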
File diff suppressed because it is too large
@@ -1,3 +1,27 @@
"""Introduction
^^^^^^^^^^^^

The likelihood is :math:`p(y|f,X)` which is how well we will predict
target values given inputs :math:`X` and our latent function :math:`f`
(:math:`y` without noise). The marginal likelihood, :math:`p(y|X)`, is the
same as the likelihood except we marginalize out the model :math:`f`. The
importance of likelihoods in Gaussian Processes is in determining the
'best' values of kernel and noise hyperparameters to relate known,
observed and unobserved data. The purpose of optimizing a model
(e.g. :py:class:`GPy.models.GPRegression`) is to determine the 'best'
hyperparameters i.e. those that minimize the negative log marginal
likelihood.

.. inheritance-diagram:: GPy.likelihoods.likelihood GPy.likelihoods.mixed_noise.MixedNoise
:top-classes: GPy.core.parameterization.parameterized.Parameterized

Most likelihood classes inherit directly from
:py:class:`GPy.likelihoods.likelihood`, although an intermediary class
:py:class:`GPy.likelihoods.mixed_noise.MixedNoise` is used by
:py:class:`GPy.likelihoods.multioutput_likelihood`.

"""

from .bernoulli import Bernoulli
from .exponential import Exponential
from .gaussian import Gaussian, HeteroscedasticGaussian

@@ -9,4 +33,4 @@ from .mixed_noise import MixedNoise
from .binomial import Binomial
from .weibull import Weibull
from .loglogistic import LogLogistic
from .multioutput_likelihood import MultioutputLikelihood
from .multioutput_likelihood import MultioutputLikelihood
@@ -218,7 +218,7 @@ class Likelihood(Parameterized):
#fi_samples = np.random.randn(num_samples)*np.sqrt(var_star) + mu_star
fi_samples = np.random.normal(mu_star, np.sqrt(var_star), size=(mu_star.shape[0], num_samples))

from scipy.misc import logsumexp
from scipy.special import logsumexp
log_p_ystar = -np.log(num_samples) + logsumexp(self.logpdf(fi_samples, y_test, Y_metadata=Y_metadata), axis=1)
log_p_ystar = np.array(log_p_ystar).reshape(*y_test.shape)
return log_p_ystar
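The hunk above (besides moving `logsumexp` to its current home in `scipy.special`) implements a Monte Carlo estimate of the log predictive density: draw latent samples :math:`f_i \sim \mathcal{N}(\mu_*, \sigma_*^2)`, then :math:`\log p(y_*) \approx -\log S + \mathrm{logsumexp}_i \log p(y_* \mid f_i)`. A standalone sketch, assuming a Gaussian likelihood with noise `sigma2` for illustration so the estimate can be compared against the closed form:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
mu_star, var_star, sigma2, y_test = 0.0, 1.0, 0.5, 0.3
S = 20000

# Sample latent function values from the predictive posterior.
f = rng.normal(mu_star, np.sqrt(var_star), size=S)
# Gaussian log-likelihood of y_test under each sampled f.
logpdf = -0.5 * np.log(2 * np.pi * sigma2) - 0.5 * (y_test - f) ** 2 / sigma2
# Numerically stable Monte Carlo average in log space.
log_p = -np.log(S) + logsumexp(logpdf)

# Analytic check: y_test ~ N(mu_star, var_star + sigma2).
exact = (-0.5 * np.log(2 * np.pi * (var_star + sigma2))
         - 0.5 * (y_test - mu_star) ** 2 / (var_star + sigma2))
assert abs(log_p - exact) < 0.05
```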
@@ -21,7 +21,7 @@ class Compound(Mapping):
def __init__(self, mapping1, mapping2):
assert(mapping1.output_dim==mapping2.input_dim)
input_dim, output_dim = mapping1.input_dim, mapping2.output_dim
Mapping.__init__(self, input_dim=input_dim, output_dim=output_dim)
super(Compound, self).__init__(input_dim=input_dim, output_dim=output_dim)
self.mapping1 = mapping1
self.mapping2 = mapping2
self.link_parameters(self.mapping1, self.mapping2)
@@ -21,7 +21,7 @@ class Constant(Mapping):
"""

def __init__(self, input_dim, output_dim, value=0., name='constmap'):
Mapping.__init__(self, input_dim=input_dim, output_dim=output_dim, name=name)
super(Constant, self).__init__(input_dim=input_dim, output_dim=output_dim, name=name)
value = np.atleast_1d(value)
if not len(value.shape) ==1:
raise ValueError("bad constant values: pass a float or flat vector")
@@ -8,7 +8,7 @@ class Identity(Mapping):
A mapping that does nothing!
"""
def __init__(self, input_dim, output_dim, name='identity'):
Mapping.__init__(self, input_dim, output_dim, name)
super(Identity, self).__init__(input_dim, output_dim, name)

def f(self, X):
return X
@@ -33,7 +33,7 @@ class Kernel(Mapping):
"""

def __init__(self, input_dim, output_dim, Z, kernel, name='kernmap'):
Mapping.__init__(self, input_dim=input_dim, output_dim=output_dim, name=name)
super(Kernel, self).__init__(input_dim=input_dim, output_dim=output_dim, name=name)
self.kern = kernel
self.Z = Z
self.num_bases, Zdim = Z.shape
@@ -15,7 +15,7 @@ class PiecewiseLinear(Mapping):
assert input_dim==1
assert output_dim==1

Mapping.__init__(self, input_dim, output_dim, name)
super(PiecewiseLinear, self).__init__(input_dim, output_dim, name)

values, breaks = np.array(values).flatten(), np.array(breaks).flatten()
assert values.size == breaks.size
@@ -1,6 +1,29 @@
# Copyright (c) 2012, GPy authors (see AUTHORS.txt).
# Licensed under the BSD 3-clause license (see LICENSE.txt)

"""
Introduction
^^^^^^^^^^^^

This package principally contains classes ultimately inherited from :py:class:`GPy.core.gp.GP` intended as models for end user consumption - much of :py:class:`GPy.core.gp.GP` is not intended to be called directly. The general form of a "model" is a function that takes some data, a kernel (see :py:class:`GPy.kern`) and other parameters, returning an object representation.

Several models directly inherit :py:class:`GPy.core.gp.GP`:

.. inheritance-diagram:: GPy.models.gp_classification GPy.models.gp_coregionalized_regression GPy.models.gp_heteroscedastic_regression GPy.models.gp_offset_regression GPy.models.gp_regression GPy.models.gp_var_gauss GPy.models.gplvm GPy.models.input_warped_gp GPy.models.multioutput_gp
:top-classes: GPy.core.gp.GP

Some models fall into conceptually related groups of models (e.g. :py:class:`GPy.core.sparse_gp`, :py:class:`GPy.core.sparse_gp_mpi`):

.. inheritance-diagram:: GPy.models.bayesian_gplvm GPy.models.bayesian_gplvm_minibatch GPy.models.gp_multiout_regression GPy.models.gp_multiout_regression_md GPy.models.ibp_lfm.IBPLFM GPy.models.sparse_gp_coregionalized_regression GPy.models.sparse_gp_minibatch GPy.models.sparse_gp_regression GPy.models.sparse_gp_regression_md GPy.models.sparse_gplvm
:top-classes: GPy.core.gp.GP

In some cases one end-user model inherits another e.g.

.. inheritance-diagram:: GPy.models.bayesian_gplvm_minibatch
:top-classes: GPy.models.sparse_gp_minibatch.SparseGPMiniBatch

"""

from .gp_regression import GPRegression
from .gp_classification import GPClassification
from .sparse_gp_regression import SparseGPRegression
@@ -30,7 +30,7 @@ class BCGPLVM(GPLVM):
else:
assert mapping.input_dim==Y.shape[1], "mapping input dim does not work for Y dimension"
assert mapping.output_dim==input_dim, "mapping output dim does not work for self.input_dim"
GPLVM.__init__(self, Y, input_dim, X=mapping.f(Y), kernel=kernel, name="bcgplvm")
super(BCGPLVM, self).__init__(Y, input_dim, X=mapping.f(Y), kernel=kernel, name="bcgplvm")
self.unlink_parameter(self.X)
self.mapping = mapping
self.link_parameter(self.mapping)
@@ -1,4 +1,4 @@
# Copyright (c) 2015 the GPy Austhors (see AUTHORS.txt)
# Copyright (c) 2015 the GPy Authors (see AUTHORS.txt)
# Licensed under the BSD 3-clause license (see LICENSE.txt)

from .bayesian_gplvm import BayesianGPLVM

@@ -11,6 +11,11 @@ class DPBayesianGPLVM(BayesianGPLVM):
Z=None, kernel=None, inference_method=None, likelihood=None,
name='bayesian gplvm', mpi_comm=None, normalizer=None,
missing_data=False, stochastic=False, batchsize=1):
super(DPBayesianGPLVM,self).__init__(Y=Y, input_dim=input_dim, X=X, X_variance=X_variance, init=init, num_inducing=num_inducing, Z=Z, kernel=kernel, inference_method=inference_method, likelihood=likelihood, mpi_comm=mpi_comm, normalizer=normalizer, missing_data=missing_data, stochastic=stochastic, batchsize=batchsize, name='dp bayesian gplvm')
super(DPBayesianGPLVM,self).__init__(Y=Y, input_dim=input_dim, X=X, X_variance=X_variance,
init=init, num_inducing=num_inducing, Z=Z, kernel=kernel,
inference_method=inference_method, likelihood=likelihood,
mpi_comm=mpi_comm, normalizer=normalizer,
missing_data=missing_data, stochastic=stochastic,
batchsize=batchsize, name='dp bayesian gplvm')
self.X.mean.set_prior(X_prior)
self.link_parameter(X_prior)
@@ -35,8 +35,8 @@ class GPClassification(GP):
if inference_method is None:
inference_method = EP()

GP.__init__(self, X=X, Y=Y, kernel=kernel, likelihood=likelihood, inference_method=inference_method,
mean_function=mean_function, name='gp_classification', normalizer=normalizer)
super(GPClassification, self).__init__(X=X, Y=Y, kernel=kernel, likelihood=likelihood, inference_method=inference_method,
mean_function=mean_function, name='gp_classification', normalizer=normalizer)

@staticmethod
def from_gp(gp):
@@ -38,7 +38,7 @@ class GPCoregionalizedRegression(GP):
if kernel is None:
kernel = kern.RBF(X.shape[1]-1)

kernel = util.multioutput.ICM(input_dim=X.shape[1]-1, num_outputs=Ny, kernel=kernel, W_rank=1,name=kernel_name)
kernel = util.multioutput.ICM(input_dim=X.shape[1]-1, num_outputs=Ny, kernel=kernel, W_rank=W_rank,name=kernel_name)

#Likelihood
likelihood = util.multioutput.build_likelihood(Y_list,self.output_index,likelihoods_list)
@@ -18,18 +18,15 @@ class GPKroneckerGaussianRegression(Model):

The noise must be iid Gaussian.

See Stegle et al.
@inproceedings{stegle2011efficient,
title={Efficient inference in matrix-variate gaussian models with $\\backslash$ iid observation noise},
author={Stegle, Oliver and Lippert, Christoph and Mooij, Joris M and Lawrence, Neil D and Borgwardt, Karsten M},
booktitle={Advances in Neural Information Processing Systems},
pages={630--638},
year={2011}
}
See [stegle_et_al_2011]_.

.. rubric:: References

.. [stegle_et_al_2011] Stegle, O.; Lippert, C.; Mooij, J.M.; Lawrence, N.D.; Borgwardt, K.: Efficient inference in matrix-variate Gaussian models with iid observation noise. In: Advances in Neural Information Processing Systems, 2011, pages 630-638.

"""
def __init__(self, X1, X2, Y, kern1, kern2, noise_var=1., name='KGPR'):
Model.__init__(self, name=name)
super(GPKroneckerGaussianRegression, self).__init__(name=name)

# accept the construction arguments
self.X1 = ObsAr(X1)
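The efficiency of Kronecker GP regression rests on the standard identity :math:`(K_2 \otimes K_1)\,\mathrm{vec}(Y) = \mathrm{vec}(K_1 Y K_2^\top)` (column-major vec), which lets the model work with the two small factor matrices instead of the full :math:`N_1 N_2 \times N_1 N_2` covariance. A small numeric check of the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
K1 = rng.standard_normal((3, 3))   # covariance factor over X1 (3 points)
K2 = rng.standard_normal((4, 4))   # covariance factor over X2 (4 points)
Y = rng.standard_normal((3, 4))    # outputs on the 3 x 4 grid

# Left side: the expensive product with the full Kronecker matrix.
lhs = np.kron(K2, K1) @ Y.reshape(-1, order='F')
# Right side: two small matrix products, never forming the 12 x 12 matrix.
rhs = (K1 @ Y @ K2.T).reshape(-1, order='F')
assert np.allclose(lhs, rhs)
```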
@@ -15,9 +15,11 @@ class GPMultioutRegression(SparseGP):
"""
Gaussian Process model for multi-output regression without missing data

This is an implementation of Latent Variable Multiple Output Gaussian Processes (LVMOGP) in [Dai et al. 2017].
This is an implementation of Latent Variable Multiple Output Gaussian Processes (LVMOGP) in [Dai_et_al_2017]_.

Zhenwen Dai, Mauricio A. Alvarez and Neil D. Lawrence. Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes. In NIPS, 2017.
.. rubric:: References

.. [Dai_et_al_2017] Dai, Z.; Alvarez, M.A.; Lawrence, N.D.: Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes. In NIPS, 2017.

:param X: input observations.
:type X: numpy.ndarray

@@ -42,6 +44,7 @@ class GPMultioutRegression(SparseGP):
:param int qU_var_c_W_dim: the dimensionality of the covariance of q(U) for the GP regression. If it is smaller than the number of inducing points, it represents a low-rank parameterization of the covariance matrix.
:param str init: the choice of initialization: 'GP' or 'rand'. With 'rand', the model is initialized randomly. With 'GP', the model is initialized through a protocol as follows: (1) fits a sparse GP (2) fits a BGPLVM based on the outcome of sparse GP (3) initialize the model based on the outcome of the BGPLVM.
:param str name: the name of the model

"""
def __init__(self, X, Y, Xr_dim, kernel=None, kernel_row=None, Z=None, Z_row=None, X_row=None, Xvariance_row=None, num_inducing=(10,10), qU_var_r_W_dim=None, qU_var_c_W_dim=None, init='GP', name='GPMR'):
@@ -13,12 +13,22 @@ from ..util.linalg import tdot
from .sparse_gp_regression_md import SparseGPRegressionMD

class GPMultioutRegressionMD(SparseGP):
"""
Gaussian Process model for multi-output regression with missing data
"""Gaussian Process model for multi-output regression with missing data

This is an implementation of Latent Variable Multiple Output Gaussian Processes (LVMOGP) in [Dai et al. 2017]. This model targets at the use case, in which each output dimension is observed at a different set of inputs. The model takes a different data format: the inputs and outputs observations of all the output dimensions are stacked together correspondingly into two matrices. An extra array is used to indicate the index of output dimension for each data point. The output dimensions are indexed using integers from 0 to D-1 assuming there are D output dimensions.
This is an implementation of Latent Variable Multiple Output
Gaussian Processes (LVMOGP) in [Dai_et_al_2017]_. This model
targets the use case in which each output dimension is
observed at a different set of inputs. The model takes a different
data format: the inputs and outputs observations of all the output
dimensions are stacked together correspondingly into two
matrices. An extra array is used to indicate the index of output
dimension for each data point. The output dimensions are indexed
using integers from 0 to D-1 assuming there are D output
dimensions.

Zhenwen Dai, Mauricio A. Alvarez and Neil D. Lawrence. Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes. In NIPS, 2017.
.. rubric:: References

.. [Dai_et_al_2017] Dai, Z.; Alvarez, M.A.; Lawrence, N.D.: Efficient Modeling of Latent Information in Supervised Learning using Gaussian Processes. In NIPS, 2017.

:param X: input observations.
:type X: numpy.ndarray

@@ -46,6 +56,8 @@ class GPMultioutRegressionMD(SparseGP):
:param str init: the choice of initialization: 'GP' or 'rand'. With 'rand', the model is initialized randomly. With 'GP', the model is initialized through a protocol as follows: (1) fits a sparse GP (2) fits a BGPLVM based on the outcome of sparse GP (3) initialize the model based on the outcome of the BGPLVM.
:param bool heter_noise: whether to assume heteroscedastic noise in the model
:param str name: the name of the model

"""
def __init__(self, X, Y, indexD, Xr_dim, kernel=None, kernel_row=None, Z=None, Z_row=None, X_row=None, Xvariance_row=None, num_inducing=(10,10), qU_var_r_W_dim=None, qU_var_c_W_dim=None, init='GP', heter_noise=False, name='GPMRMD'):
@@ -13,13 +13,9 @@ class GPVariationalGaussianApproximation(GP):
"""
The Variational Gaussian Approximation revisited

@article{Opper:2009,
title = {The Variational Gaussian Approximation Revisited},
author = {Opper, Manfred and Archambeau, C{\'e}dric},
journal = {Neural Comput.},
year = {2009},
pages = {786--792},
}
.. rubric:: References

.. [opper_archambeau_2009] Opper, M.; Archambeau, C.: The Variational Gaussian Approximation Revisited. Neural Comput. 2009, pages 786-792.
"""
def __init__(self, X, Y, kernel, likelihood, Y_metadata=None):
@@ -39,28 +39,30 @@ class GradientChecker(Model):
a list of names with the same length is expected.
:param args: Arguments passed as f(x, *args, **kwargs) and df(x, *args, **kwargs)

Examples:
---------
.. rubric:: Examples

Initialisation::

from GPy.models import GradientChecker
N, M, Q = 10, 5, 3

Sinusoid:
Sinusoid::

X = numpy.random.rand(N, Q)
grad = GradientChecker(numpy.sin,numpy.cos,X,'x')
grad.checkgrad(verbose=1)
X = numpy.random.rand(N, Q)
grad = GradientChecker(numpy.sin,numpy.cos,X,'x')
grad.checkgrad(verbose=1)

Using GPy:
Using GPy::

X, Z = numpy.random.randn(N,Q), numpy.random.randn(M,Q)
kern = GPy.kern.linear(Q, ARD=True) + GPy.kern.rbf(Q, ARD=True)
grad = GradientChecker(kern.K,
lambda x: 2*kern.dK_dX(numpy.ones((1,1)), x),
x0 = X.copy(),
names='X')
grad.checkgrad(verbose=1)
grad.randomize()
grad.checkgrad(verbose=1)
X, Z = numpy.random.randn(N,Q), numpy.random.randn(M,Q)
kern = GPy.kern.linear(Q, ARD=True) + GPy.kern.rbf(Q, ARD=True)
grad = GradientChecker(kern.K,
lambda x: 2*kern.dK_dX(numpy.ones((1,1)), x),
x0 = X.copy(),
names='X')
grad.checkgrad(verbose=1)
grad.randomize()
grad.checkgrad(verbose=1)
"""
super(GradientChecker, self).__init__(name='GradientChecker')
if isinstance(x0, (list, tuple)) and names is None:
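The comparison that `checkgrad` automates for the sinusoid example above (analytic gradient against central finite differences) can be sketched in plain NumPy; `check_grad` and its tolerances are illustrative, not GPy's implementation:

```python
import numpy as np

def check_grad(f, df, x, eps=1e-6, tol=1e-4):
    """Compare analytic gradient df(x) with central differences of scalar f."""
    num = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        num[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return np.allclose(num, df(x), atol=tol)

# d/dx sum(sin(x)) = cos(x), so the check passes.
x = np.random.rand(5)
assert check_grad(lambda v: np.sum(np.sin(v)), np.cos, x)
```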
@@ -63,7 +63,6 @@ class MRD(BayesianGPLVMMiniBatch):
Ynames=None, normalizer=False, stochastic=False, batchsize=10):

self.logger = logging.getLogger(self.__class__.__name__)
self.input_dim = input_dim
self.num_inducing = num_inducing

if isinstance(Ylist, dict):

@@ -87,11 +86,11 @@ class MRD(BayesianGPLVMMiniBatch):
self.inference_method = inference_method

if X is None:
X, fracs = self._init_X(initx, Ylist)
X, fracs = self._init_X(input_dim, initx, Ylist)
else:
fracs = [X.var(0)]*len(Ylist)

Z = self._init_Z(initz, X)
Z = self._init_Z(initz, X, input_dim)
self.Z = Param('inducing inputs', Z)
self.num_inducing = self.Z.shape[0] # ensure M==N if M>N

@@ -128,7 +127,6 @@ class MRD(BayesianGPLVMMiniBatch):
self.unlink_parameter(self.likelihood)
self.unlink_parameter(self.kern)

self.num_data = Ylist[0].shape[0]
if isinstance(batchsize, int):
batchsize = itertools.repeat(batchsize)

@@ -187,32 +185,32 @@ class MRD(BayesianGPLVMMiniBatch):
def log_likelihood(self):
return self._log_marginal_likelihood

def _init_X(self, init='PCA', Ylist=None):
def _init_X(self, input_dim, init='PCA', Ylist=None):
if Ylist is None:
Ylist = self.Ylist
if init in "PCA_concat":
X, fracs = initialize_latent('PCA', self.input_dim, np.hstack(Ylist))
X, fracs = initialize_latent('PCA', input_dim, np.hstack(Ylist))
fracs = [fracs]*len(Ylist)
elif init in "PCA_single":
X = np.zeros((Ylist[0].shape[0], self.input_dim))
fracs = np.empty((len(Ylist), self.input_dim))
for qs, Y in zip(np.array_split(np.arange(self.input_dim), len(Ylist)), Ylist):
X = np.zeros((Ylist[0].shape[0], input_dim))
fracs = np.empty((len(Ylist), input_dim))
for qs, Y in zip(np.array_split(np.arange(input_dim), len(Ylist)), Ylist):
x, frcs = initialize_latent('PCA', len(qs), Y)
X[:, qs] = x
fracs[:, qs] = frcs
else: # init == 'random':
X = np.random.randn(Ylist[0].shape[0], self.input_dim)
X = np.random.randn(Ylist[0].shape[0], input_dim)
fracs = X.var(0)
fracs = [fracs]*len(Ylist)
X -= X.mean()
X /= X.std()
return X, fracs

def _init_Z(self, init, X):
def _init_Z(self, init, X, input_dim):
if init in "permute":
Z = np.random.permutation(X.copy())[:self.num_inducing]
elif init in "random":
Z = np.random.randn(self.num_inducing, self.input_dim) * X.var()
Z = np.random.randn(self.num_inducing, input_dim) * X.var()
return Z

def predict(self, Xnew, full_cov=False, Y_metadata=None, kern=None, Yindex=0):

@@ -350,5 +348,3 @@ class MRD(BayesianGPLVMMiniBatch):
print('# Private dimensions model ' + str(i) + ':' + str(privateDims[i]))

return sharedDims, privateDims
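The PCA-style latent initialization used by `_init_X` above can be sketched with a plain SVD; `pca_init` is only the core idea (GPy's `initialize_latent` does more, and its exact scaling may differ):

```python
import numpy as np

def pca_init(Y, input_dim):
    """Project centered outputs onto their top input_dim principal directions."""
    Yc = Y - Y.mean(0)
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    X = U[:, :input_dim] * s[:input_dim]           # PCA scores as latent init
    fracs = s[:input_dim] ** 2 / np.sum(s ** 2)    # explained-variance fractions
    return X, fracs

Y = np.random.randn(20, 6)
X, fracs = pca_init(Y, 2)
assert X.shape == (20, 2) and fracs.shape == (2,)
```

Note how the hunk above threads `input_dim` through `_init_X`/`_init_Z` explicitly instead of reading `self.input_dim`, since that attribute is now set dynamically by `GP.__init__` from the shape of `X`.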
@@ -46,8 +46,8 @@ class SparseGPClassification(SparseGP):
         if inference_method is None:
             inference_method = EPDTC()

-        SparseGP.__init__(self, X, Y, Z, kernel, likelihood, mean_function=mean_function, inference_method=inference_method,
-                          normalizer=normalizer, name='SparseGPClassification', Y_metadata=Y_metadata)
+        super(SparseGPClassification, self).__init__(X, Y, Z, kernel, likelihood, mean_function=mean_function, inference_method=inference_method,
+                                                     normalizer=normalizer, name='SparseGPClassification', Y_metadata=Y_metadata)

     @staticmethod
     def from_sparse_gp(sparse_gp):
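The recurring change in this commit — replacing explicit `Base.__init__(self, ...)` calls with `super()` — keeps initialization following Python's method resolution order, so further subclassing stays cooperative. A stand-alone sketch with stub classes (not GPy's actual hierarchy):

```python
class Base:
    def __init__(self, name):
        self.name = name

class Middle(Base):
    def __init__(self, name):
        # Delegating via super() follows the MRO, so subclasses of
        # Middle cooperate without naming Base explicitly.
        super(Middle, self).__init__(name)
        self.middle_ready = True

class Leaf(Middle):
    def __init__(self):
        super(Leaf, self).__init__('leaf')

m = Leaf()
print(m.name, m.middle_ready)  # leaf True
```

The verbose `super(Middle, self)` form is used throughout the diff because the codebase only recently dropped Python 2.7, where the bare `super()` call is unavailable.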
@@ -136,9 +136,9 @@ class SparseGPClassificationUncertainInput(SparseGP):

         X = NormalPosterior(X, X_variance)

-        SparseGP.__init__(self, X, Y, Z, kernel, likelihood,
-                          inference_method=EPDTC(),
-                          name='SparseGPClassification', Y_metadata=Y_metadata, normalizer=normalizer)
+        super(SparseGPClassificationUncertainInput, self).__init__(X, Y, Z, kernel, likelihood,
+                                                                   inference_method=EPDTC(), name='SparseGPClassification',
+                                                                   Y_metadata=Y_metadata, normalizer=normalizer)

     def parameters_changed(self):
         # Compute the psi statistics for N once, but don't sum out N in psi2
@@ -44,7 +44,7 @@ class SparseGPCoregionalizedRegression(SparseGP):
         if kernel is None:
             kernel = kern.RBF(X.shape[1]-1)

-        kernel = util.multioutput.ICM(input_dim=X.shape[1]-1, num_outputs=Ny, kernel=kernel, W_rank=1, name=kernel_name)
+        kernel = util.multioutput.ICM(input_dim=X.shape[1]-1, num_outputs=Ny, kernel=kernel, W_rank=W_rank, name=kernel_name)

         #Likelihood
         likelihood = util.multioutput.build_likelihood(Y_list, self.output_index, likelihoods_list)
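This bugfix (issue #823/#824) routes the user-supplied `W_rank` into the ICM kernel instead of hard-coding rank 1. In an intrinsic coregionalization model the between-output covariance is B = W Wᵀ + diag(κ), where W has shape (num_outputs, W_rank), so the hard-coded 1 silently restricted the cross-output correlation structure. A numpy sketch of that matrix (illustrative, not GPy's kernel code):

```python
import numpy as np

def coregionalization_matrix(W, kappa):
    """B = W W^T + diag(kappa): the ICM between-output covariance."""
    return W @ W.T + np.diag(kappa)

num_outputs, W_rank = 4, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((num_outputs, W_rank))
kappa = np.full(num_outputs, 0.5)
B = coregionalization_matrix(W, kappa)

# W W^T alone has rank at most W_rank; adding kappa > 0 on the
# diagonal makes B full rank (and positive definite).
print(np.linalg.matrix_rank(W @ W.T), np.linalg.matrix_rank(B))  # 2 4
```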
@@ -43,6 +43,10 @@ class SparseGPMiniBatch(SparseGP):
                  missing_data=False, stochastic=False, batchsize=1):
         self._update_stochastics = False

+        # FIXME(?): Half of this function seems to be copy-pasted from
+        # SparseGP.__init__; is there any particular reason why SparseGP.__init__
+        # is not called (instead of calling GP.__init__ directly)?
+
         # pick a sensible inference method
         if inference_method is None:
             if isinstance(likelihood, likelihoods.Gaussian):
@@ -56,7 +60,8 @@ class SparseGPMiniBatch(SparseGP):
         self.Z = Param('inducing inputs', Z)
         self.num_inducing = Z.shape[0]

-        GP.__init__(self, X, Y, kernel, likelihood, inference_method=inference_method, name=name, Y_metadata=Y_metadata, normalizer=normalizer)
+        # Skip SparseGP.__init__ (see remark above)
+        super(SparseGP, self).__init__(X, Y, kernel, likelihood, inference_method=inference_method, name=name, Y_metadata=Y_metadata, normalizer=normalizer)
         self.missing_data = missing_data

         if stochastic and missing_data:
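Note the subtlety in the hunk above: `super(SparseGP, self)` — with `SparseGP` (the *parent*) as the first argument — starts the MRO lookup *after* `SparseGP`, deliberately skipping its `__init__` and landing on `GP.__init__`, while still keeping the call MRO-based. A stand-alone sketch with stub classes (not the real GPy classes):

```python
class GP:
    def __init__(self):
        self.init_by = 'GP'

class SparseGP(GP):
    def __init__(self):
        self.init_by = 'SparseGP'

class SparseGPMiniBatch(SparseGP):
    def __init__(self):
        # super(SparseGP, self) resumes the MRO lookup *after* SparseGP,
        # so GP.__init__ runs and SparseGP.__init__ is skipped on purpose.
        super(SparseGP, self).__init__()

m = SparseGPMiniBatch()
print(m.init_by)  # GP
```

This is equivalent to the old `GP.__init__(self, ...)` call, but stays correct if the class hierarchy between `SparseGP` and `GP` ever changes.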
@@ -55,7 +55,7 @@ class SparseGPRegression(SparseGP_MPI):
         else:
             infr = VarDTC()

-        SparseGP_MPI.__init__(self, X, Y, Z, kernel, likelihood, mean_function=mean_function,
+        super(SparseGPRegression, self).__init__(X, Y, Z, kernel, likelihood, mean_function=mean_function,
                               inference_method=infr, normalizer=normalizer, mpi_comm=mpi_comm, name=name)

     def parameters_changed(self):
@@ -10,10 +10,18 @@ from ..inference.latent_function_inference.vardtc_md import VarDTC_MD
 from GPy.core.parameterization.variational import NormalPosterior

 class SparseGPRegressionMD(SparseGP_MPI):
-    """
-    Sparse Gaussian Process Regression with Missing Data
+    """Sparse Gaussian Process Regression with Missing Data

-    This model targets at the use case, in which there are multiple output dimensions (different dimensions are assumed to be independent following the same GP prior) and each output dimension is observed at a different set of inputs. The model takes a different data format: the inputs and outputs observations of all the output dimensions are stacked together correspondingly into two matrices. An extra array is used to indicate the index of output dimension for each data point. The output dimensions are indexed using integers from 0 to D-1 assuming there are D output dimensions.
+    This model targets the use case in which there are multiple
+    output dimensions (different dimensions are assumed to be
+    independent, following the same GP prior) and each output dimension
+    is observed at a different set of inputs. The model takes a
+    different data format: the input and output observations of all
+    the output dimensions are stacked together into two matrices. An
+    extra array indicates the index of the output dimension for each
+    data point. The output dimensions are indexed with integers from
+    0 to D-1, assuming there are D output dimensions.

     :param X: input observations.
     :type X: numpy.ndarray

@@ -29,6 +37,7 @@ class SparseGPRegressionMD(SparseGP_MPI):
     :type num_inducing: (int, int)
     :param bool individual_Y_noise: whether individual output dimensions have their own noise variance
+    :param str name: the name of the model
     """

     def __init__(self, X, Y, indexD, kernel=None, Z=None, num_inducing=10, normalizer=None, mpi_comm=None, individual_Y_noise=False, name='sparse_gp'):

@@ -58,7 +67,7 @@ class SparseGPRegressionMD(SparseGP_MPI):
         infr = VarDTC_MD()

-        SparseGP_MPI.__init__(self, X, Y, Z, kernel, likelihood, inference_method=infr, normalizer=normalizer, mpi_comm=mpi_comm, name=name)
+        super(SparseGPRegressionMD, self).__init__(X, Y, Z, kernel, likelihood, inference_method=infr, normalizer=normalizer, mpi_comm=mpi_comm, name=name)
         self.output_dim = output_dim

     def parameters_changed(self):
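The stacked data format the docstring describes can be illustrated with a toy dataset (hypothetical values, shapes only — no GPy call is made):

```python
import numpy as np

# Two output dimensions (D = 2), each observed at a different set of inputs.
X0, Y0 = np.array([[0.0], [1.0], [2.0]]), np.array([[1.0], [0.5], [0.2]])
X1, Y1 = np.array([[0.5], [1.5]]), np.array([[2.0], [1.8]])

# Stack inputs and outputs row-wise; indexD records which output
# dimension each stacked row belongs to (integers 0..D-1).
X = np.vstack([X0, X1])
Y = np.vstack([Y0, Y1])
indexD = np.array([0, 0, 0, 1, 1])

print(X.shape, Y.shape, indexD.shape)  # (5, 1) (5, 1) (5,)
```

This triple `(X, Y, indexD)` matches the constructor signature shown in the diff.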
@@ -23,7 +23,7 @@ class SparseGPLVM(SparseGPRegression):
         from ..util.initialization import initialize_latent
         X, fracs = initialize_latent(init, input_dim, Y)
         X = Param('latent space', X)
-        SparseGPRegression.__init__(self, X, Y, kernel=kernel, num_inducing=num_inducing)
+        super(SparseGPLVM, self).__init__(X, Y, kernel=kernel, num_inducing=num_inducing)
         self.link_parameter(self.X, 0)

     def parameters_changed(self):
(File diff suppressed because it is too large.)
@@ -1,3 +1,19 @@
+"""Introduction
+^^^^^^^^^^^^
+
+:py:class:`GPy.plotting` effectively extends models based on
+:py:class:`GPy.core.gp.GP` (and other classes) by adding methods to
+plot useful charts. 'matplotlib', 'plotly' (online) and 'plotly'
+(offline) are supported. The methods in :py:class:`GPy.plotting` (and
+its child modules :py:class:`GPy.plotting.gpy_plot` and
+:py:class:`GPy.plotting.matplot_dep`) are not intended to be called
+directly, but rather are 'injected' into other classes (notably
+:py:class:`GPy.core.gp.GP`). Documentation describing plots is best
+found associated with the model being plotted,
+e.g. :py:class:`GPy.core.gp.GP.plot_confidence`.
+"""
+
 # Copyright (c) 2014, GPy authors (see AUTHORS.txt).
 # Licensed under the BSD 3-clause license (see LICENSE.txt)
 current_lib = [None]
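The 'injection' pattern the new docstring mentions — attaching plotting methods to model classes after the fact — can be sketched in plain Python (stand-in names, not GPy's actual machinery):

```python
def inject_plotting(cls):
    """Attach a plotting helper to a model class after its definition."""
    def plot_summary(self):
        # A real implementation would dispatch to the active plotting
        # library (matplotlib/plotly); here we just return a string.
        return 'plotting %s' % self.name
    cls.plot_summary = plot_summary
    return cls

class GP:
    def __init__(self, name):
        self.name = name

inject_plotting(GP)
print(GP('my_gp').plot_summary())  # plotting my_gp
```

Injecting keeps the core model classes free of any hard dependency on a particular plotting backend.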
@@ -89,7 +89,8 @@ def plot_latent_scatter(self, labels=None,
     Plot a scatter plot of the latent space.

     :param array-like labels: a label for each data point (row) of the inputs
-    :param (int, int) which_indices: which input dimensions to plot against each other
+    :param which_indices: which input dimensions to plot against each other
+    :type which_indices: (int, int)
     :param bool legend: whether to plot the legend on the figure
     :param plot_limits: the plot limits for the plot
     :type plot_limits: (xmin, xmax, ymin, ymax) or ((xmin, xmax), (ymin, ymax))

@@ -174,7 +175,8 @@ def plot_magnification(self, labels=None, which_indices=None,
     density of the GP as a gray scale.

     :param array-like labels: a label for each data point (row) of the inputs
-    :param (int, int) which_indices: which input dimensions to plot against each other
+    :param which_indices: which input dimensions to plot against each other
+    :type which_indices: (int, int)
     :param int resolution: the resolution at which we predict the magnification factor
     :param str marker: markers to use - cycle if more labels than markers are given
     :param bool legend: whether to plot the legend on the figure

@@ -183,7 +185,8 @@ def plot_magnification(self, labels=None, which_indices=None,
     :param bool updates: if possible, make interactive updates using the specific library you are using
     :param bool mean: use the mean of the Wishart embedding for the magnification factor
     :param bool covariance: use the covariance of the Wishart embedding for the magnification factor
-    :param :py:class:`~GPy.kern.Kern` kern: the kernel to use for prediction
+    :param kern: the kernel to use for prediction
+    :type kern: :py:class:`~GPy.kern.Kern`
     :param int num_samples: the maximum number of samples to plot. We take a stratified subsample from the labels if the number of samples (in X) is higher than num_samples.
     :param imshow_kwargs: the kwargs for the imshow (magnification factor)
     :param kwargs: the kwargs for the scatter plots

@@ -248,13 +251,15 @@ def plot_latent(self, labels=None, which_indices=None,
     scatter plot of the input dimensions selected by which_indices.

     :param array-like labels: a label for each data point (row) of the inputs
-    :param (int, int) which_indices: which input dimensions to plot against each other
+    :param which_indices: which input dimensions to plot against each other
+    :type which_indices: (int, int)
     :param int resolution: the resolution at which we predict the magnification factor
     :param bool legend: whether to plot the legend on the figure
     :param plot_limits: the plot limits for the plot
     :type plot_limits: (xmin, xmax, ymin, ymax) or ((xmin, xmax), (ymin, ymax))
     :param bool updates: if possible, make interactive updates using the specific library you are using
-    :param :py:class:`~GPy.kern.Kern` kern: the kernel to use for prediction
+    :param kern: the kernel to use for prediction
+    :type kern: :py:class:`~GPy.kern.Kern`
     :param str marker: markers to use - cycle if more labels than markers are given
     :param int num_samples: the maximum number of samples to plot. We take a stratified subsample from the labels if the number of samples (in X) is higher than num_samples.
     :param imshow_kwargs: the kwargs for the imshow (magnification factor)

@@ -316,13 +321,15 @@ def plot_steepest_gradient_map(self, output_labels=None, data_labels=None, which
     scatter plot of the input dimensions selected by which_indices.

     :param array-like labels: a label for each data point (row) of the inputs
-    :param (int, int) which_indices: which input dimensions to plot against each other
+    :param which_indices: which input dimensions to plot against each other
+    :type which_indices: (int, int)
     :param int resolution: the resolution at which we predict the magnification factor
     :param bool legend: whether to plot the legend on the figure; if int, plot that many legend columns
     :param plot_limits: the plot limits for the plot
     :type plot_limits: (xmin, xmax, ymin, ymax) or ((xmin, xmax), (ymin, ymax))
     :param bool updates: if possible, make interactive updates using the specific library you are using
-    :param :py:class:`~GPy.kern.Kern` kern: the kernel to use for prediction
+    :param kern: the kernel to use for prediction
+    :type kern: :py:class:`~GPy.kern.Kern`
     :param str marker: markers to use - cycle if more labels than markers are given
     :param int num_samples: the maximum number of samples to plot. We take a stratified subsample from the labels if the number of samples (in X) is higher than num_samples.
     :param imshow_kwargs: the kwargs for the imshow (magnification factor)
@@ -36,7 +36,7 @@ class vpython_show(data_show):
     """

     def __init__(self, vals, scene=None):
-        data_show.__init__(self, vals)
+        super(vpython_show, self).__init__(vals)
         # If no scene is defined, create one.
         if scene==None:

@@ -54,7 +54,7 @@ class matplotlib_show(data_show):
     The matplotlib_show class is a base class for all visualization methods that use matplotlib. It is initialized with an axis. If the axis is set to None it creates a figure window.
     """
     def __init__(self, vals, axes=None):
-        data_show.__init__(self, vals)
+        super(matplotlib_show, self).__init__(vals)
         # If no axes are defined, create some.
         if axes==None:

@@ -72,7 +72,7 @@ class vector_show(matplotlib_show):
     vector elements alongside their indices.
     """
     def __init__(self, vals, axes=None):
-        matplotlib_show.__init__(self, vals, axes)
+        super(vector_show, self).__init__(vals, axes)
         #assert vals.ndim == 2, "Please give a vector in [n x 1] to plot"
         #assert vals.shape[1] == 1, "only showing a vector in one dimension"
         self.size = vals.size

@@ -102,7 +102,7 @@ class lvm(matplotlib_show):
         vals = model.X.values
         if len(vals.shape)==1:
             vals = vals[None,:]
-        matplotlib_show.__init__(self, vals, axes=latent_axes)
+        super(lvm, self).__init__(vals, axes=latent_axes)

         if isinstance(latent_axes,mpl.axes.Axes):
             self.cid = latent_axes.figure.canvas.mpl_connect('button_press_event', self.on_click)

@@ -198,10 +198,10 @@ class lvm_subplots(lvm):
             if i == self.nplots-1:
                 if self.nplots*2!=Model.input_dim:
                     latent_index = [i*2, i*2]
-                lvm.__init__(self, self.latent_vals, Model, data_visualize, axis, sense_axes, latent_index=latent_index)
+                super(lvm_subplots, self).__init__(self.latent_vals, Model, data_visualize, axis, sense_axes, latent_index=latent_index)
             else:
                 latent_index = [i*2, i*2+1]
-                lvm.__init__(self, self.latent_vals, Model, data_visualize, axis, latent_index=latent_index)
+                super(lvm_subplots, self).__init__(self.latent_vals, Model, data_visualize, axis, latent_index=latent_index)

@@ -223,7 +223,7 @@ class lvm_dimselect(lvm):
         else:
             self.sense_axes = sense_axes
         self.labels = labels
-        lvm.__init__(self, vals, model, data_visualize, latent_axes, sense_axes, latent_index)
+        super(lvm_dimselect, self).__init__(vals, model, data_visualize, latent_axes, sense_axes, latent_index)
         self.show_sensitivities()
         print(self.latent_values)
         print("use left and right mouse buttons to select dimensions")

@@ -286,7 +286,7 @@ class image_show(matplotlib_show):
     :type cmap: matplotlib.cm"""

     def __init__(self, vals, axes=None, dimensions=(16,16), transpose=False, order='C', invert=False, scale=False, palette=[], preset_mean=0., preset_std=1., select_image=0, cmap=None):
-        matplotlib_show.__init__(self, vals, axes)
+        super(image_show, self).__init__(vals, axes)
         self.dimensions = dimensions
         self.transpose = transpose
         self.order = order

@@ -352,7 +352,7 @@ class mocap_data_show_vpython(vpython_show):
     """Base class for visualizing motion capture data using the visual module."""

     def __init__(self, vals, scene=None, connect=None, radius=0.1):
-        vpython_show.__init__(self, vals, scene)
+        super(mocap_data_show_vpython, self).__init__(vals, scene)
         self.radius = radius
         self.connect = connect
         self.process_values()

@@ -412,7 +412,7 @@ class mocap_data_show(matplotlib_show):
         if axes==None:
             fig = plt.figure()
             axes = fig.add_subplot(111, projection='3d', aspect='equal')
-        matplotlib_show.__init__(self, vals, axes)
+        super(mocap_data_show, self).__init__(vals, axes)

         self.color = color
         self.connect = connect

@@ -496,7 +496,7 @@ class stick_show(mocap_data_show):
     def __init__(self, vals, connect=None, axes=None):
         if len(vals.shape)==1:
             vals = vals[None,:]
-        mocap_data_show.__init__(self, vals, axes=axes, connect=connect)
+        super(stick_show, self).__init__(vals, axes=axes, connect=connect)

     def process_values(self):
         self.vals = self.vals.reshape((3, self.vals.shape[1]/3)).T

@@ -515,7 +515,7 @@ class skeleton_show(mocap_data_show):
         self.skel = skel
         self.padding = padding
         connect = skel.connection_matrix()
-        mocap_data_show.__init__(self, vals, axes=axes, connect=connect, color=color)
+        super(skeleton_show, self).__init__(vals, axes=axes, connect=connect, color=color)

     def process_values(self):
         """Takes a set of angles and converts them to the x,y,z coordinates in the internal representation of the class, ready for plotting."""
@@ -28,11 +28,14 @@ class Test(unittest.TestCase):
         Xnew = NormalPosterior(m.X.mean[:10].copy(), m.X.variance[:10].copy())
         m.set_XY(Xnew, m.Y[:10].copy())
         assert(m.checkgrad())
+        assert(m.num_data == m.X.shape[0])
+        assert(m.input_dim == m.X.shape[1])
+
         m.set_XY(X, self.Y)
         mu2, var2 = m.predict(m.X)
         np.testing.assert_allclose(mu, mu2)
         np.testing.assert_allclose(var, var2)

     def test_setxy_gplvm(self):
         k = GPy.kern.RBF(1)

@@ -42,6 +45,10 @@ class Test(unittest.TestCase):
         Xnew = X[:10].copy()
         m.set_XY(Xnew, m.Y[:10].copy())
         assert(m.checkgrad())
+        assert(m.num_data == m.X.shape[0])
+        assert(m.input_dim == m.X.shape[1])
+
         m.set_XY(X, self.Y)
         mu2, var2 = m.predict(m.X)
         np.testing.assert_allclose(mu, mu2)

@@ -54,6 +61,10 @@ class Test(unittest.TestCase):
         X = m.X.copy()
         m.set_XY(m.X[:10], m.Y[:10])
         assert(m.checkgrad())
+        assert(m.num_data == m.X.shape[0])
+        assert(m.input_dim == m.X.shape[1])
+
         m.set_XY(X, self.Y)
         mu2, var2 = m.predict(m.X)
         np.testing.assert_allclose(mu, mu2)
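These new assertions pin down the behaviour of the "Update self.num_data in GP when X is updated" fix: the bookkeeping attributes must track the current `X` after every `set_XY`. A stand-in model class showing the intended invariant (not GPy's actual implementation):

```python
import numpy as np

class TinyModel:
    def __init__(self, X, Y):
        self.set_XY(X, Y)

    def set_XY(self, X, Y):
        self.X, self.Y = X, Y
        # Re-derive the bookkeeping attributes from X every time it
        # changes, so they can never go stale.
        self.num_data, self.input_dim = X.shape

m = TinyModel(np.zeros((20, 3)), np.zeros((20, 1)))
m.set_XY(np.zeros((10, 3)), np.zeros((10, 1)))
print(m.num_data, m.input_dim)  # 10 3
```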
@@ -61,7 +61,7 @@ class Kern_check_dK_dtheta(Kern_check_model):
     respect to parameters.
     """
     def __init__(self, kernel=None, dL_dK=None, X=None, X2=None):
-        Kern_check_model.__init__(self, kernel=kernel, dL_dK=dL_dK, X=X, X2=X2)
+        super(Kern_check_dK_dtheta, self).__init__(kernel=kernel, dL_dK=dL_dK, X=X, X2=X2)
         self.link_parameter(self.kernel)

     def parameters_changed(self):

@@ -74,7 +74,7 @@ class Kern_check_dKdiag_dtheta(Kern_check_model):
     kernel with respect to the parameters.
     """
    def __init__(self, kernel=None, dL_dK=None, X=None):
-        Kern_check_model.__init__(self, kernel=kernel, dL_dK=dL_dK, X=X, X2=None)
+        super(Kern_check_dKdiag_dtheta, self).__init__(kernel=kernel, dL_dK=dL_dK, X=X, X2=None)
         self.link_parameter(self.kernel)

     def log_likelihood(self):

@@ -86,7 +86,7 @@ class Kern_check_dKdiag_dtheta(Kern_check_model):
 class Kern_check_dK_dX(Kern_check_model):
     """This class allows gradient checks for the gradient of a kernel with respect to X. """
     def __init__(self, kernel=None, dL_dK=None, X=None, X2=None):
-        Kern_check_model.__init__(self, kernel=kernel, dL_dK=dL_dK, X=X, X2=X2)
+        super(Kern_check_dK_dX, self).__init__(kernel=kernel, dL_dK=dL_dK, X=X, X2=X2)
         self.X = Param('X', X)
         self.link_parameter(self.X)

@@ -96,7 +96,7 @@ class Kern_check_dK_dX(Kern_check_model):
 class Kern_check_dKdiag_dX(Kern_check_dK_dX):
     """This class allows gradient checks for the gradient of a kernel diagonal with respect to X. """
     def __init__(self, kernel=None, dL_dK=None, X=None, X2=None):
-        Kern_check_dK_dX.__init__(self, kernel=kernel, dL_dK=dL_dK, X=X, X2=None)
+        super(Kern_check_dKdiag_dX, self).__init__(kernel=kernel, dL_dK=dL_dK, X=X, X2=None)

     def log_likelihood(self):
         return (np.diag(self.dL_dK)*self.kernel.Kdiag(self.X)).sum()

@@ -107,7 +107,7 @@ class Kern_check_dKdiag_dX(Kern_check_dK_dX):
 class Kern_check_d2K_dXdX(Kern_check_model):
     """This class allows gradient checks for the second derivative of a kernel with respect to X. """
     def __init__(self, kernel=None, dL_dK=None, X=None, X2=None):
-        Kern_check_model.__init__(self, kernel=kernel, dL_dK=dL_dK, X=X, X2=X2)
+        super(Kern_check_d2K_dXdX, self).__init__(kernel=kernel, dL_dK=dL_dK, X=X, X2=X2)
         self.X = Param('X', X.copy())
         self.link_parameter(self.X)
         self.Xc = X.copy()

@@ -129,7 +129,7 @@ class Kern_check_d2K_dXdX(Kern_check_model):
 class Kern_check_d2Kdiag_dXdX(Kern_check_model):
     """This class allows gradient checks for the second derivative of a kernel diagonal with respect to X. """
     def __init__(self, kernel=None, dL_dK=None, X=None):
-        Kern_check_model.__init__(self, kernel=kernel, dL_dK=dL_dK, X=X)
+        super(Kern_check_d2Kdiag_dXdX, self).__init__(kernel=kernel, dL_dK=dL_dK, X=X)
         self.X = Param('X', X)
         self.link_parameter(self.X)
         self.Xc = X.copy()
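All of these `Kern_check_*` helpers reduce gradient verification to the same idea: compare an analytic gradient against central finite differences of the objective. A generic sketch of that check (illustrative, not GPy's `checkgrad`):

```python
import numpy as np

def checkgrad(f, grad, x, eps=1e-6, tol=1e-4):
    """Compare an analytic gradient with central finite differences."""
    g_analytic = grad(x)
    g_numeric = np.empty_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        # Central difference: (f(x + e_i) - f(x - e_i)) / (2 eps)
        g_numeric[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return np.allclose(g_analytic, g_numeric, atol=tol)

f = lambda x: np.sum(x ** 2)
grad = lambda x: 2 * x
ok = checkgrad(f, grad, np.array([1.0, -2.0, 0.5]))
print(ok)  # True
```

Each `Kern_check_*` class supplies a different `f` (e.g. `(dL_dK * K(X)).sum()` or its diagonal variant) while the comparison machinery stays the same.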
@@ -1168,7 +1168,7 @@ class GradientTests(np.testing.TestCase):
         Y = np.array([[1], [2]])
         m = GPy.models.GPRegression(X1, Y, kernel=k)

-        result = m.posterior_covariance_between_points(X1, X2)
+        result = m._raw_posterior_covariance_between_points(X1, X2)
         expected = np.array([[0.4, 2.2], [1.0, 1.0]]) / 3.0

         self.assertTrue(np.allclose(result, expected))

@@ -1179,7 +1179,7 @@ class GradientTests(np.testing.TestCase):
         m = _create_missing_data_model(k, Q)

         with self.assertRaises(RuntimeError):
-            m.posterior_covariance_between_points(np.array([[1], [2]]), np.array([[3], [4]]))
+            m._raw_posterior_covariance_between_points(np.array([[1], [2]]), np.array([[3], [4]]))

     def test_multioutput_model_with_derivative_observations(self):
         f = lambda x: np.sin(x)+0.1*(x-2.)**2-0.005*x**3
@@ -1242,6 +1242,45 @@ class GradientTests(np.testing.TestCase):

         self.assertTrue(m.checkgrad())

+    def test_predictive_gradients_with_normalizer(self):
+        """
+        Check that model.predictive_gradients returns the gradients of
+        model.predict when normalizer=True
+        """
+        N, M, Q = 10, 15, 3
+        X = np.random.rand(M, Q)
+        Y = np.random.rand(M, 1)
+        x = np.random.rand(N, Q)
+        model = GPy.models.GPRegression(X=X, Y=Y, normalizer=True)
+        from GPy.models import GradientChecker
+        gm = GradientChecker(lambda x: model.predict(x)[0],
+                             lambda x: model.predictive_gradients(x)[0],
+                             x, 'x')
+        gc = GradientChecker(lambda x: model.predict(x)[1],
+                             lambda x: model.predictive_gradients(x)[1],
+                             x, 'x')
+        assert(gm.checkgrad())
+        assert(gc.checkgrad())
+
+    def test_posterior_covariance_between_points_with_normalizer(self):
+        """
+        Check that model.posterior_covariance_between_points returns
+        the covariance from model.predict when normalizer=True
+        """
+        np.random.seed(3)
+        N, M, Q = 10, 15, 3
+        X = np.random.rand(M, Q)
+        Y = np.random.rand(M, 1)
+        x = np.random.rand(2, Q)
+        model = GPy.models.GPRegression(X=X, Y=Y, normalizer=True)
+
+        c1 = model.posterior_covariance_between_points(x, x)
+        c2 = model.predict(x, full_cov=True)[1]
+        np.testing.assert_allclose(c1, c2)
+

 def _create_missing_data_model(kernel, Q):
     D1, D2, D3, N, num_inducing = 13, 5, 8, 400, 3
     _, _, Ylist = GPy.examples.dimensionality_reduction._simulate_matern(D1, D2, D3, N, num_inducing, False)
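The "fix normalizer" change these tests cover hinges on un-doing the target standardization consistently: if training targets are standardized as (Y − m)/s, predicted means must be mapped back as μ·s + m and predicted (co)variances scaled by s². A sketch of that bookkeeping (a hypothetical helper, not GPy's `Norm` class):

```python
import numpy as np

class MeanStdNormalizer:
    def fit(self, Y):
        self.mean, self.std = Y.mean(), Y.std()
        return self

    def scale(self, Y):
        return (Y - self.mean) / self.std

    def inverse_mean(self, mu):
        return mu * self.std + self.mean

    def inverse_variance(self, var):
        # Variances scale with the *square* of the standard deviation;
        # forgetting this square is the classic normalizer bug.
        return var * self.std ** 2

Y = np.array([[1.0], [2.0], [4.0], [5.0]])
norm = MeanStdNormalizer().fit(Y)
Yn = norm.scale(Y)
print(np.allclose(norm.inverse_mean(Yn), Y))  # True
```

The same s² factor must also be applied when chaining gradients of the prediction through the normalizer, which is what `test_predictive_gradients_with_normalizer` verifies.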
@@ -38,7 +38,7 @@ from nose import SkipTest

 try:
     import matplotlib
-    matplotlib.use('agg', warn=False)
+    matplotlib.use('agg')
 except ImportError:
     # matplotlib not installed
     from nose import SkipTest
@@ -87,7 +87,7 @@ def _image_directories():
     result_dir = os.path.join(basedir, 'testresult', '.')
     baseline_dir = os.path.join(basedir, 'baseline', '.')
     if not os.path.exists(result_dir):
-        cbook.mkdirs(result_dir)
+        os.makedirs(result_dir)
     return baseline_dir, result_dir

 baseline_dir, result_dir = _image_directories()
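This hunk fixes #844 by swapping the removed `matplotlib.cbook.mkdirs` for the standard library. A possible further simplification (not what the diff does) is to drop the `os.path.exists` guard entirely, since `os.makedirs` accepts `exist_ok=True`:

```python
import os
import tempfile

result_dir = os.path.join(tempfile.mkdtemp(), 'testresult')
# exist_ok=True makes repeated calls safe without a prior existence
# check, and avoids the race between the check and the creation.
os.makedirs(result_dir, exist_ok=True)
os.makedirs(result_dir, exist_ok=True)  # second call is a no-op
print(os.path.isdir(result_dir))  # True
```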
@@ -15,7 +15,7 @@ class TestModel(GPy.core.Model):
     A simple GPy model with one parameter.
     """
     def __init__(self, theta=1.):
-        GPy.core.Model.__init__(self, 'test_model')
+        super(TestModel, self).__init__('test_model')
         theta = GPy.core.Param('theta', theta)
         self.link_parameter(theta)
@@ -26,14 +26,15 @@ class Test(unittest.TestCase):
         k7 = GPy.kern.Matern32(2, variance=1.0, lengthscale=[1.0,3.0], ARD=True, active_dims=[1,1])
         k8 = GPy.kern.Matern52(2, variance=2.0, lengthscale=[2.0,1.0], ARD=True, active_dims=[1,0])
         k9 = GPy.kern.ExpQuad(2, variance=3.0, lengthscale=[1.0,2.0], ARD=True, active_dims=[0,1])
-        k10 = k1 + k1.copy() + k2 + k3 + k4 + k5 + k6
-        k11 = k1 * k2 * k2.copy() * k3 * k4 * k5
-        k12 = (k1 + k2) * (k3 + k4 + k5)
-        k13 = ((k1 + k2) * k3) + k4 + k5 * k7
-        k14 = ((k1 + k2) * k3) + k4 * k5 + k8
-        k15 = ((k1 * k2) * k3) + k4 * k5 + k8 + k9
+        k10 = GPy.kern.OU(2, variance=2.0, lengthscale=[2.0, 1.0], ARD=True, active_dims=[1, 0])
+        k11 = k1 + k1.copy() + k2 + k3 + k4 + k5 + k6
+        k12 = k1 * k2 * k2.copy() * k3 * k4 * k5
+        k13 = (k1 + k2) * (k3 + k4 + k5)
+        k14 = ((k1 + k2) * k3) + k4 + k5 * k7
+        k15 = ((k1 + k2) * k3) + k4 * k5 + k8 * k10
+        k16 = ((k1 * k2) * k3) + k4 * k5 + k8 + k9

-        k_list = [k1,k2,k3,k4,k5,k6,k7,k8,k9,k10,k11,k12,k13,k14,k15]
+        k_list = [k1,k2,k3,k4,k5,k6,k7,k8,k9,k10,k11,k12,k13,k14,k15,k16]

         for kk in k_list:
             kk_dict = kk.to_dict()
@@ -1,6 +1,12 @@
 # Copyright (c) 2012, GPy authors (see AUTHORS.txt).
 # Licensed under the BSD 3-clause license (see LICENSE.txt)
+"""
+Introduction
+^^^^^^^^^^^^
+
+A variety of utility functions, including matrix operations and quick access to test datasets.
+"""

 from . import linalg
 from . import misc
(File diff suppressed because it is too large.)
@@ -500,7 +500,15 @@ def drosophila_knirps(data_set='drosophila_protein'):

 # This will be for downloading google trends data.
 def google_trends(query_terms=['big data', 'machine learning', 'data science'], data_set='google_trends', refresh_data=False):
-    """Data downloaded from Google trends for given query terms. Warning, if you use this function multiple times in a row you get blocked due to terms of service violations. The function will cache the result of your query, if you wish to refresh an old query set refresh_data to True. The function is inspired by this notebook: http://nbviewer.ipython.org/github/sahuguet/notebooks/blob/master/GoogleTrends%20meet%20Notebook.ipynb"""
+    """Data downloaded from Google trends for given query terms.
+
+    Warning: if you use this function multiple times in a row, you get
+    blocked due to terms of service violations. The function will cache
+    the result of your query; if you wish to refresh an old query, set
+    refresh_data to True.
+
+    The function is inspired by this notebook:
+    http://nbviewer.ipython.org/github/sahuguet/notebooks/blob/master/GoogleTrends%20meet%20Notebook.ipynb"""
     query_terms.sort()
     import pandas
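The cache-unless-refresh behaviour the docstring describes is a common pattern: key the cache on the sorted query terms, hit the network only on a miss or an explicit refresh. A generic sketch (stand-in fetcher, not the GPy datasets code):

```python
import json
import os
import tempfile

def cached_query(terms, cache_dir, refresh=False, fetch=None):
    """Return cached results for `terms`, fetching only on a miss or refresh."""
    path = os.path.join(cache_dir, '-'.join(sorted(terms)) + '.json')
    if not refresh and os.path.exists(path):
        with open(path) as fh:
            return json.load(fh)
    result = fetch(terms)  # the expensive remote call
    with open(path, 'w') as fh:
        json.dump(result, fh)
    return result

calls = []
def fake_fetch(terms):
    calls.append(list(terms))
    return {t: len(t) for t in terms}

cache = tempfile.mkdtemp()
a = cached_query(['gp', 'ml'], cache, fetch=fake_fetch)
b = cached_query(['gp', 'ml'], cache, fetch=fake_fetch)  # served from cache
print(a == b, len(calls))  # True 1
```

Sorting the terms before building the key (as `google_trends` does with `query_terms.sort()`) makes the cache hit regardless of argument order.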
(File diff suppressed because it is too large.)
@@ -163,7 +163,7 @@ def rotation_matrix(xangle, yangle, zangle, order='zxy', degrees=False):

 # Motion capture data routines.
 class skeleton(tree):
     def __init__(self):
-        tree.__init__(self)
+        super(skeleton, self).__init__()

     def connection_matrix(self):
         connection = np.zeros((len(self.vertices), len(self.vertices)), dtype=bool)

@@ -197,13 +197,13 @@ class skeleton(tree):

 # class bvh_skeleton(skeleton):
 #     def __init__(self):
-#         skeleton.__init__(self)
+#         super(bvh_skeleton, self).__init__()
 #
 #     def to_xyz(self, channels):

 class acclaim_skeleton(skeleton):
     def __init__(self, file_name=None):
-        skeleton.__init__(self)
+        super(acclaim_skeleton, self).__init__()
         self.documentation = []
         self.angle = 'deg'
         self.length = 1.0
README.md
@@ -5,11 +5,17 @@ The Gaussian processes framework in Python.
 * GPy [homepage](http://sheffieldml.github.io/GPy/)
 * Tutorial [notebooks](http://nbviewer.ipython.org/github/SheffieldML/notebook/blob/master/GPy/index.ipynb)
 * User [mailing-list](https://lists.shef.ac.uk/sympa/subscribe/gpy-users)
-* Developer [documentation](http://gpy.readthedocs.io/)
+* Developer [documentation](http://gpy.readthedocs.io/), [documentation (devel branch)](https://gpy.readthedocs.io/en/devel/)
 * Travis-CI [unit-tests](https://travis-ci.org/SheffieldML/GPy)
 * [](http://opensource.org/licenses/BSD-3-Clause)
   [](http://depsy.org/package/python/GPy)

-[](https://travis-ci.org/SheffieldML/GPy) [](https://ci.appveyor.com/project/mzwiessele/gpy/branch/deploy) [](https://coveralls.io/github/SheffieldML/GPy?branch=devel) [](http://codecov.io/github/SheffieldML/GPy?branch=devel) [](http://depsy.org/package/python/GPy) [](https://landscape.io/github/SheffieldML/GPy/devel)
+## Status
+
+| Branch | travis-ci.org | ci.appveyor.com | coveralls.io | codecov.io |
+| --- | --- | --- | --- | --- |
+| Default branch (`devel`) | [](https://travis-ci.org/SheffieldML/GPy/branches) | [](https://ci.appveyor.com/project/mzwiessele/gpy/branch/devel) | [](https://coveralls.io/github/SheffieldML/GPy?branch=devel) | [](http://codecov.io/github/SheffieldML/GPy?branch=devel) |
+| Deployment branch (`deploy`) | [](https://travis-ci.org/SheffieldML/GPy/branches) | [](https://ci.appveyor.com/project/mzwiessele/gpy/branch/deploy) | [](https://coveralls.io/github/SheffieldML/GPy?branch=deploy) | [](http://codecov.io/github/SheffieldML/GPy?branch=deploy) |

 ## What's new:
@@ -23,15 +29,16 @@ We welcome any contributions to GPy, after all it is an open source project. We

 For an in-depth description of pull requests, please visit https://help.github.com/articles/using-pull-requests/ .

-### Steps to a successfull contribution:
+### Steps to a successful contribution:

 1. Fork GPy: https://help.github.com/articles/fork-a-repo/
 2. Make your changes to the source in your fork.
 3. Make sure the [guidelines](#gl) are met.
-4. Set up tests to test your code. We are using unttests in the testing subfolder of GPy. There is a good chance that there is already a framework set up to test your new model in model_tests.py or kernel in kernel_tests.py. have a look at the source and you might be able to just add your model (or kernel or others) as an additional test in the appropriate file. There is more frameworks for testing the other bits and pieces, just head over to the testing folder and have a look.
+4. Set up tests to test your code. We are using unittests in the testing subfolder of GPy. There is a good chance that there is already a framework set up to test your new model in model_tests.py or kernel in kernel_tests.py. Have a look at the source and you might be able to just add your model (or kernel or others) as an additional test in the appropriate file. There are more frameworks for testing the other bits and pieces; just head over to the testing folder and have a look.
 5. Create a pull request to the devel branch in GPy, see above.
 6. The tests will be running on your pull request. In the comments section we will be able to discuss the changes and help you with any problems. Let us know if there are any in the comments, so we can help.
-7. The pull request gets accepted and your awsome new feature will be in the next GPy release :)
+7. The pull request gets accepted and your awesome new feature will be in the next GPy release :)

 For any further questions/suggestions head over to the issues section in GPy.
@@ -45,11 +52,7 @@ For any further questions/suggestions head over to the issues section in GPy.

## Support and questions to the community

-We have set up a mailing list for any questions you might have or problems you feel others have encountered:
-
-gpy-users@lists.shef.ac.uk
-
-Feel free to join the discussions on the issues section, too.
+Ask questions using the issues section.

## Updated Structure
@@ -76,7 +79,7 @@ If that is the case, it is best to clean the repo and reinstall.

[<img src="https://upload.wikimedia.org/wikipedia/commons/8/8e/OS_X-Logo.svg" height=40px>](http://www.apple.com/osx/)
[<img src="https://upload.wikimedia.org/wikipedia/commons/3/35/Tux.svg" height=40px>](https://en.wikipedia.org/wiki/List_of_Linux_distributions)

-Python 2.7, 3.5 and higher
+Python 3.5 and higher

## Citation
@@ -93,7 +96,7 @@ We like to pronounce it 'g-pie'.

## Getting started: installing with pip

-We are now requiring the newest version (0.16) of
+We are requiring a recent version (1.3.0 or later) of
[scipy](http://www.scipy.org/) and thus, we strongly recommend using
the [anaconda python distribution](http://continuum.io/downloads).
With anaconda you can install GPy by the following:
@@ -111,7 +114,7 @@ And finally,

    pip install gpy

-We've also had luck with [enthought](http://www.enthought.com). Install scipy 0.16 (or later)
+We've also had luck with [enthought](http://www.enthought.com). Install scipy 1.3.0 (or later)
and then pip install GPy:

    pip install gpy
@@ -222,6 +225,13 @@ The documentation can be compiled as follows:

    sphinx-apidoc -o source/ ../GPy/
    make html

+alternatively:
+
+```{shell}
+cd doc
+sphinx-build -b html -d build/doctrees -D graphviz_dot='<path to dot>' source build/html
+```
+
The HTML files are then stored in doc/build/html

### Commit new patch to devel
appveyor.yml

@@ -3,16 +3,18 @@ environment:
    secure: 8/ZjXFwtd1S7ixd7PJOpptupKKEDhm2da/q3unabJ00=
  COVERALLS_REPO_TOKEN:
    secure: d3Luic/ESkGaWnZrvWZTKrzO+xaVwJWaRCEP0F+K/9DQGPSRZsJ/Du5g3s4XF+tS
-  gpy_version: 1.9.8
+  gpy_version: 1.9.9
  matrix:
-  - PYTHON_VERSION: 2.7
-    MINICONDA: C:\Miniconda-x64
  - PYTHON_VERSION: 3.5
    MINICONDA: C:\Miniconda35-x64
  - PYTHON_VERSION: 3.6
    MINICONDA: C:\Miniconda36-x64
  - PYTHON_VERSION: 3.7
    MINICONDA: C:\Miniconda36-x64
+  - PYTHON_VERSION: 3.8
+    MINICONDA: C:\Miniconda36-x64
+  - PYTHON_VERSION: 3.9
+    MINICONDA: C:\Miniconda36-x64

#configuration:
#  - Debug

@@ -51,7 +53,7 @@ test_script:

after_test:
  # This step builds your wheels.
-  - "python setup.py bdist_wheel bdist_wininst"
+  - "python setup.py bdist_wheel"
  - codecov

artifacts:
@@ -64,7 +64,7 @@ if on_rtd:
    print(out)

    #Lets regenerate our rst files from the source, -P adds private modules (i.e kern._src)
-    proc = subprocess.Popen("sphinx-apidoc -P -f -o . ../../GPy", stdout=subprocess.PIPE, shell=True)
+    proc = subprocess.Popen("sphinx-apidoc -M -P -f -o . ../../GPy", stdout=subprocess.PIPE, shell=True)
    (out, err) = proc.communicate()
    print("$ Apidoc:")
    print(out)

@@ -83,8 +83,13 @@ extensions = [
    #'sphinx.ext.coverage',
    'sphinx.ext.mathjax',
    'sphinx.ext.viewcode',
+    'sphinx.ext.graphviz',
+    'sphinx.ext.inheritance_diagram',
]

+#---sphinx.ext.inheritance_diagram config
+inheritance_graph_attrs = dict(rankdir="LR", dpi=1200)
+
#----- Autodoc
#import sys
#try:

@@ -134,7 +139,7 @@ master_doc = 'index'
project = u'GPy'
#author = u'`Humans <https://github.com/SheffieldML/GPy/graphs/contributors>`_'
author = 'GPy Authors, see https://github.com/SheffieldML/GPy/graphs/contributors'
-copyright = u'2015, '+author
+copyright = u'2020, '+author

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the

@@ -245,6 +250,10 @@ html_theme = 'sphinx_rtd_theme'
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

+html_css_files = [
+    'wide.css',
+]
+
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
@@ -1,48 +1,90 @@

-.. GPy documentation master file, created by
-   sphinx-quickstart on Fri Sep 18 18:16:28 2015.
-   You can adapt this file completely to your liking, but it should at least
-   contain the root `toctree` directive.
+GPy - A Gaussian Process (GP) framework in Python
+=================================================

-Welcome to GPy's documentation!
-===============================
+Introduction
+------------

-`GPy <http://sheffieldml.github.io/GPy/>`_ is a Gaussian Process (GP) framework written in Python, from the Sheffield machine learning group.
+`GPy <http://sheffieldml.github.io/GPy/>`_ is a Gaussian Process (GP) framework written in Python, from the Sheffield machine learning group. It includes support for basic GP regression, multiple output GPs (using coregionalization), various noise models, sparse GPs, non-parametric regression and latent variables.

+The `GPy homepage <http://sheffieldml.github.io/GPy/>`_ contains tutorials for users and further information on the project, including installation instructions.
-This documentation is mostly aimed at developers interacting closely with the code-base.
+The documentation hosted here is mostly aimed at developers interacting closely with the code-base.

Source Code
-----------

The code can be found on our `Github project page <https://github.com/SheffieldML/GPy>`_. It is open source and provided under the BSD license.

-For developers:
+Installation
+------------

-- `Writing new models <tuto_creating_new_models.html>`_
-- `Writing new kernels <tuto_creating_new_kernels.html>`_
-- `Write a new plotting routine using gpy_plot <tuto_plotting.html>`_
-- `Parameterization handles <tuto_parameterized.html>`_
+Installation instructions can currently be found on our `Github project page <https://github.com/SheffieldML/GPy>`_.

-Contents:
+Tutorials
+---------
+
+Several tutorials have been developed in the form of `Jupyter Notebooks <https://nbviewer.jupyter.org/github/SheffieldML/notebook/blob/master/GPy/index.ipynb>`_.
+
+Architecture
+------------
+
+GPy is a big, powerful package, with many features. The concept of how to use GPy in general terms is roughly as follows. A model (:py:class:`GPy.models`) is created - this is at the heart of GPy from a user perspective. A kernel (:py:class:`GPy.kern`), data and, usually, a representation of noise are assigned to the model. Specific models require, or can make use of, additional information. The kernel and noise are controlled by hyperparameters - calling the optimize (:py:class:`GPy.core.gp.GP.optimize`) method against the model invokes an iterative process which seeks optimal hyperparameter values. The model object can be used to make plots and predictions (:py:class:`GPy.core.gp.GP.predict`).
.. graphviz::

   digraph GPy_Arch {

      rankdir=LR
      node[shape="rectangle" style="rounded,filled" fontname="Arial"]
      edge [color="#006699" len=2.5]

      Data->Model
      Hyperparameters->Kernel
      Hyperparameters->Noise
      Kernel->Model
      Noise->Model

      Model->Optimize
      Optimize->Hyperparameters

      Model->Predict
      Model->Plot

      Optimize [shape="ellipse"]
      Predict [shape="ellipse"]
      Plot [shape="ellipse"]

      subgraph cluster_0 {
         Data
         Kernel
         Noise
      }

   }
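The workflow described above (data, kernel and noise feeding a model, which is then optimized and used for prediction) can be made concrete with a small, framework-independent sketch. The block below is a minimal numpy-only illustration of that flow, not GPy's API; in GPy the equivalent steps would use ``GPy.models`` with a ``GPy.kern`` kernel plus ``optimize()`` and ``predict()``, and the toy data and function names here are invented for illustration.

```python
import numpy as np

# Framework-independent sketch of the flow above (illustrative only --
# in GPy this is roughly GPy.models.GPRegression + optimize() + predict()).

def rbf(A, B, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    return variance * np.exp(-0.5 * (A - B.T) ** 2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 20)[:, None]              # data
Y = np.sin(X) + 0.05 * rng.standard_normal(X.shape)

noise = 0.05 ** 2                                    # noise model
K = rbf(X, X) + noise * np.eye(len(X))               # kernel + noise -> model

Xstar = np.array([[0.0], [1.5]])                     # test inputs
Kstar = rbf(Xstar, X)

# posterior predictive mean and covariance (the "predict" step)
mean = Kstar @ np.linalg.solve(K, Y)
cov = rbf(Xstar, Xstar) - Kstar @ np.linalg.solve(K, Kstar.T)
```

Hyperparameter optimization (the "optimize" step) would sit on top of this, adjusting ``variance``, ``lengthscale`` and ``noise`` to maximize the marginal likelihood.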
.. toctree::
   :maxdepth: 1
   :caption: For developers

   tuto_creating_new_models
   tuto_creating_new_kernels
   tuto_plotting
   tuto_parameterized

.. toctree::
   :maxdepth: 1
   :caption: API Documentation

+   GPy.core
+   GPy.core.parameterization
   GPy.models
   GPy.kern
   GPy.likelihoods
   GPy.mappings
   GPy.examples
   GPy.util
   GPy.plotting.gpy_plot
   GPy.plotting.matplot_dep
-   GPy.core
-   GPy.core.parameterization
   GPy.plotting
   GPy.inference.optimization
   GPy.inference.latent_function_inference
   GPy.inference.mcmc

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -53,13 +53,15 @@ your code. The parameters have to be added by calling
:py:class:`~GPy.core.parameterization.param.Param` objects as
arguments::

    from .core.parameterization import Param

    def __init__(self,input_dim,variance=1.,lengthscale=1.,power=1.,active_dims=None):
        super(RationalQuadratic, self).__init__(input_dim, active_dims, 'rat_quad')
        assert input_dim == 1, "For this kernel we assume input_dim=1"
        self.variance = Param('variance', variance)
        self.lengthscale = Param('lengtscale', lengthscale)
        self.power = Param('power', power)
-        self.add_parameters(self.variance, self.lengthscale, self.power)
+        self.link_parameters(self.variance, self.lengthscale, self.power)

From now on you can use the parameters ``self.variance,
self.lengthscale, self.power`` as normal numpy ``array-like`` s in your
@@ -71,13 +73,13 @@ automatically.

The implementation of this function is optional.

-This functions deals as a callback for each optimization iteration. If
-one optimization step was successfull and the parameters (added by
+This function is called as a callback upon each successful change to the parameters. If
+one optimization step was successful and the parameters (linked by
:py:func:`~GPy.core.parameterization.parameterized.Parameterized.link_parameters`
-``(*parameters)``) this callback function will be called to be able to
-update any precomputations for the kernel. Do not implement the
-gradient updates here, as those are being done by the model enclosing
-the kernel::
+``(*parameters)``) are changed, this callback function will be called. This callback may be used to
+update precomputations for the kernel. Do not implement the
+gradient updates here, as gradient updates are performed by the model enclosing
+the kernel. In this example, we issue a no-op::

    def parameters_changed(self):
        # nothing todo here
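To make the role of this callback concrete, here is a hypothetical, stripped-down kernel (not GPy's actual class hierarchy) that uses a ``parameters_changed``-style hook to cache a quantity derived from its parameters, so that repeated covariance evaluations reuse it until a parameter changes:

```python
import numpy as np

# Hypothetical stripped-down kernel (not GPy's real base class) showing
# the intent of parameters_changed: recompute cached quantities once per
# parameter change instead of on every call to K().
class ToyRBF:
    def __init__(self, lengthscale=2.0):
        self.lengthscale = lengthscale
        self.parameters_changed()

    def parameters_changed(self):
        # everything here depends only on the current parameter values
        self._inv_l2 = 1.0 / self.lengthscale ** 2

    def K(self, X, X2):
        # reuses the cached precomputation
        return np.exp(-0.5 * np.square(X - X2.T) * self._inv_l2)

k = ToyRBF(lengthscale=2.0)
X = np.linspace(0.0, 1.0, 3)[:, None]
Kmat = k.K(X, X)
```

In GPy itself, the framework invokes ``parameters_changed`` for you whenever linked parameters are updated.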
@@ -90,8 +92,9 @@ the kernel::

The implementation of this function is mandatory.

This function is used to compute the covariance matrix associated with
-the inputs X, X2 (np.arrays with arbitrary number of line (say
-:math:`n_1`, :math:`n_2`) and ``self.input_dim`` columns). ::
+the inputs X, X2 (np.arrays with arbitrary number of lines,
+:math:`n_1`, :math:`n_2`, corresponding to the number of samples over which to calculate covariance)
+and ``self.input_dim`` columns. ::

    def K(self,X,X2):
        if X2 is None: X2 = X
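As a standalone sketch of what ``K`` returns, the rational quadratic covariance can be written as a plain numpy function. The covariance form ``variance * (1 + dist2/2.)**(-power)`` and the default hyperparameter values are assumptions made for illustration:

```python
import numpy as np

# Standalone numpy sketch of a 1-D rational quadratic K(X, X2).
# In the kernel class these hyperparameters live in self.variance,
# self.lengthscale and self.power; the defaults here are illustrative.
def rat_quad_K(X, X2=None, variance=1.0, lengthscale=1.0, power=1.0):
    if X2 is None:
        X2 = X
    dist2 = np.square((X - X2.T) / lengthscale)
    return variance * (1.0 + dist2 / 2.0) ** (-power)

X = np.linspace(0.0, 1.0, 4)[:, None]   # n_1 = 4 rows, input_dim = 1 column
K = rat_quad_K(X)                        # 4 x 4 covariance matrix
```

The result is an :math:`n_1 \times n_2` matrix, symmetric with the prior variance on the diagonal when ``X2`` is ``None``.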
@@ -171,16 +174,24 @@ is set to each ``param``. ::

This function is required for GPLVM, BGPLVM, sparse models and uncertain inputs.

Computes the derivative of the likelihood with respect to the inputs
-``X`` (a :math:`n \times q` np.array). The result is returned by the
-function which is a :math:`n \times q` np.array. ::
+``X`` (a :math:`n \times q` np.array), that is, it calculates the quantity:
+
+.. math::
+
+    \frac{\partial L}{\partial K} \frac{\partial K}{\partial X}
+
+The partial derivative matrix, in this case, comes out as an :math:`n \times q` np.array. ::

    def gradients_X(self,dL_dK,X,X2):
-        """derivative of the covariance matrix with respect to X."""
+        """derivative of the likelihood with respect to X, calculated using dL_dK*dK_dX"""
        if X2 is None: X2 = X
        dist2 = np.square((X-X2.T)/self.lengthscale)

-        dX = -self.variance*self.power * (X-X2.T)/self.lengthscale**2 * (1 + dist2/2./self.lengthscale)**(-self.power-1)
-        return np.sum(dL_dK*dX,1)[:,None]
+        dK_dX = -self.variance*self.power * (X-X2.T)/self.lengthscale**2 * (1 + dist2/2./self.lengthscale)**(-self.power-1)
+        return np.sum(dL_dK*dK_dX,1)[:,None]

Were the number of parameters to be larger than 1 or the number of dimensions likewise any larger
than 1, the calculated partial derivative would be a 3- or 4-tensor.
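A cheap way to gain confidence in a hand-derived ``dK_dX`` is a central finite-difference check. The sketch below is standalone and assumes the covariance form ``variance * (1 + dist2/2.)**(-power)`` with ``dist2 = ((X - X2.T)/lengthscale)**2`` for 1-D inputs; all names are illustrative:

```python
import numpy as np

# Finite-difference sanity check of a hand-derived kernel gradient,
# assuming the 1-D rational quadratic form K = variance*(1 + dist2/2.)**(-power)
# with dist2 = ((X - X2.T)/lengthscale)**2. All names are illustrative.
variance, lengthscale, power = 1.0, 1.0, 1.0

def K(X, X2):
    dist2 = np.square((X - X2.T) / lengthscale)
    return variance * (1.0 + dist2 / 2.0) ** (-power)

def dK_dX(X, X2):
    # analytic derivative of K[i, j] with respect to X[i]
    dist2 = np.square((X - X2.T) / lengthscale)
    return (-variance * power * (X - X2.T) / lengthscale ** 2
            * (1.0 + dist2 / 2.0) ** (-power - 1))

X = np.array([[0.3], [1.2], [2.0]])
X2 = np.array([[0.0], [0.7]])

eps = 1e-6
numeric = (K(X + eps, X2) - K(X - eps, X2)) / (2.0 * eps)
analytic = dK_dX(X, X2)   # should match numeric to high precision
```

The same check extends to the full ``gradients_X`` by contracting either derivative with a fixed ``dL_dK``.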
:py:func:`~GPy.kern.src.kern.Kern.gradients_X_diag` ``(self,dL_dKdiag,X)``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 1.9.8
+current_version = 1.9.9
tag = True
commit = True
setup.py

@@ -117,6 +117,11 @@ try:
except ModuleNotFoundError:
    ext_mods = []

+install_requirements = ['numpy>=1.7', 'six', 'paramz>=0.9.0', 'cython>=0.29']
+if sys.version_info < (3, 6):
+    install_requirements += ['scipy>=1.3.0,<1.5.0']
+else:
+    install_requirements += ['scipy>=1.3.0']
+
setup(name = 'GPy',
      version = __version__,

@@ -164,7 +169,7 @@ setup(name = 'GPy',
      py_modules = ['GPy.__init__'],
      test_suite = 'GPy.testing',
      setup_requires = ['numpy>=1.7'],
-      install_requires = ['numpy>=1.7', 'scipy>=0.16', 'six', 'paramz>=0.9.0'],
+      install_requires = install_requirements,
      extras_require = {'docs':['sphinx'],
                        'optional':['mpi4py',
                                    'ipython>=4.0.0',

@@ -182,9 +187,11 @@ setup(name = 'GPy',
      'Operating System :: MacOS :: MacOS X',
      'Operating System :: Microsoft :: Windows',
      'Operating System :: POSIX :: Linux',
-      'Programming Language :: Python :: 2.7',
      'Programming Language :: Python :: 3.5',
      'Programming Language :: Python :: 3.6',
      'Programming Language :: Python :: 3.7',
+      'Programming Language :: Python :: 3.8',
+      'Programming Language :: Python :: 3.9',
      'Framework :: IPython',
      'Intended Audience :: Science/Research',
      'Intended Audience :: Developers',
@@ -31,7 +31,7 @@

#!/usr/bin/env python
import matplotlib
-matplotlib.use('agg', warn=False)
+matplotlib.use('agg')

import nose, warnings
with warnings.catch_warnings():