Mirror of https://github.com/SheffieldML/GPy.git
v1.10.0 (#908)
* Update self.num_data in GP when X is updated
* Update appveyor.yml
* Update setup.cfg
* Stop using legacy bdist_wininst
* fix: reorder brackets to avoid an n^2 array
* Minor fix to multioutput regression example, to clarify code and fix a typo.
* added missing import
* corrected typo in function name
* fixed docstring and added more explanation
* changed ordering of explanation to get to the point fast and provide additional details after
* self.num_data and self.input_dim are set dynamically in class GP() from the shape of X. In MRD, the user-specified values are passed around until X is defined.
* fixed technical description of gradients_X()
* brushed up wording
* fix normalizer
* fix ImportError in likelihood.py
in function log_predictive_density_sampling
* Update setup.py
bump min require version of scipy to 1.3.0
* Add cython into installation requirement
* Coregionalized regression bugfix (#824)
* route default arg W_rank correctly (Addresses #823)
* Drop Python 2.7 support (fix #833)
* travis, appveyor: Add Python 3.8 build
* README: Fix scipy version number
* setup.py: Install scipy < 1.5.0 when using Python 3.5
* plotting_tests.py: Use os.makedirs instead of matplotlib.cbook.mkdirs (fix #844)
* Use super().__init__ consistently, instead of sometimes calling base class __init__ directly
* README.md: Source formatting, one badge per line
* README.md: Remove broken landscape badge (fix #831)
* README.md: Badges for devel and deploy (fix #830)
* ignore intermediary sphinx restructured text
* ignore vs code project settings file
* add yml config for readthedocs
* correct path
* drop epub and pdf builds (as per main GPy)
* typo
* headings and structure
* update copyright
* restructuring and smartening
* remove dead links
* reorder package docs
* rst "markup"
* change rst syntax
* makes sense for core to go first
* add placeholder
* initial core docs, class diagram
* lower level detail
* higher res diagrams
* layout changes for diagrams
resolve conflict
* better syntax
* redundant block
* introduction
* inheritance diagrams
* more on models
* kernel docs to kern.src
* moved doc back from kern.src to kern
* kern not kern.src in index
* better kernel description
* likelihoods
* placeholder
* add plotting to docs index
* summarise plotting
* clarification
* neater contents
* architecture diagram
* using pods
* build with dot
* more on examples
* introduction for utils package
* compromise formatting for sphinx
* correct likelihood definition
* parameterization of priors
* latent function inference intro and format
* maint: Remove tabs (and some trailing spaces)
* dpgplvm.py: Wrap long line + remove tabs
* dpgplvm.py: Fix typo in the header
* maint: Wrap very long lines (> 450 chars)
* maint: Wrap very long lines (> 400 chars)
* Add the link to the api doc on the readme page.
* remove deprecated parameter
* Update README.md
* new: Added to_dict() method to Ornstein-Uhlenbeck (OU) kernel
* fix: minor typos in README !minor
* added python 3.9 build following 4aa2ea9f5e to address https://github.com/SheffieldML/GPy/issues/881
* updated cython-generated c files for python 3.9 via `pyenv virtualenv 3.9.1 gpy391 && pyenv activate gpy391 && python setup.py build --force`
* updated osx to macOS 10.15.7, JDK to 14.0.2, and XCode to Xcode 12.2 (#904)
The CI was broken; this commit fixes it. The root cause is reported in more detail in issue #905.
In short, the default macOS version used by TravisCI (10.13, per the TravisCI docs) is no longer supported by brew, which caused the `brew install pandoc` step in the download_miniconda.sh pre-install script to hang and time out the build, even on inert PRs (e.g. adding a line to the README). With macOS updated from 10.13 to 10.15, brew is supported again, `brew install pandoc` succeeds, and the remainder of the CI build and test sequence runs through.
* incremented version
Co-authored-by: Masha Naslidnyk 🦉 <naslidny@amazon.co.uk>
Co-authored-by: Zhenwen Dai <zhenwendai@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <hugovk@users.noreply.github.com>
Co-authored-by: Mark McLeod <mark.mcleod@mindfoundry.ai>
Co-authored-by: Sigrid Passano Hellan <sighellan@gmail.com>
Co-authored-by: Antoine Blanchard <antoine@sand-lab-gpu.mit.edu>
Co-authored-by: kae_mihara <rukamihara@outlook.com>
Co-authored-by: lagph <49130858+lagph@users.noreply.github.com>
Co-authored-by: Julien Bect <julien.bect@centralesupelec.fr>
Co-authored-by: Neil Lawrence <ndl21@cam.ac.uk>
Co-authored-by: bobturneruk <bob.turner.uk@gmail.com>
Co-authored-by: bobturneruk <r.d.turner@sheffield.ac.uk>
Co-authored-by: gehbiszumeis <16896724+gehbiszumeis@users.noreply.github.com>
parent 92f2e87e7b
commit fa909768bd
72 changed files with 8568 additions and 14545 deletions
@@ -53,13 +53,15 @@ your code. The parameters have to be added by calling
 :py:class:`~GPy.core.parameterization.param.Param` objects as
 arguments::
 
+    from .core.parameterization import Param
+
     def __init__(self,input_dim,variance=1.,lengthscale=1.,power=1.,active_dims=None):
         super(RationalQuadratic, self).__init__(input_dim, active_dims, 'rat_quad')
         assert input_dim == 1, "For this kernel we assume input_dim=1"
         self.variance = Param('variance', variance)
         self.lengthscale = Param('lengthscale', lengthscale)
         self.power = Param('power', power)
-        self.add_parameters(self.variance, self.lengthscale, self.power)
+        self.link_parameters(self.variance, self.lengthscale, self.power)
 
 From now on you can use the parameters ``self.variance,
 self.lengthscale, self.power`` as normal numpy ``array-like`` s in your
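For readers following the tutorial outside the GPy source tree, here is a minimal sketch (ours, not part of this commit) of how the linked parameters behave once ``link_parameters`` has run. It assumes an installed GPy, imports ``Param`` by its absolute path rather than the relative one in the hunk, and omits ``K`` for brevity::

    from GPy.kern import Kern
    from GPy.core.parameterization import Param

    class RationalQuadratic(Kern):
        """The tutorial kernel from the hunk above (K omitted for brevity)."""
        def __init__(self, input_dim, variance=1., lengthscale=1., power=1., active_dims=None):
            super(RationalQuadratic, self).__init__(input_dim, active_dims, 'rat_quad')
            assert input_dim == 1, "For this kernel we assume input_dim=1"
            self.variance = Param('variance', variance)
            self.lengthscale = Param('lengthscale', lengthscale)
            self.power = Param('power', power)
            # link_parameters (not the removed add_parameters) registers the
            # Param objects with the parameterization machinery
            self.link_parameters(self.variance, self.lengthscale, self.power)

    k = RationalQuadratic(input_dim=1)
    print(k.variance * 2.0)   # Param objects support numpy-style arithmetic
    k.lengthscale[:] = 0.5    # and numpy-style in-place assignment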
@@ -71,13 +73,13 @@ automatically.
 
 The implementation of this function is optional.
 
-This functions deals as a callback for each optimization iteration. If
-one optimization step was successfull and the parameters (added by
+This function is called as a callback upon each successful change to the parameters. If
+an optimization step was successful and the parameters (linked by
 :py:func:`~GPy.core.parameterization.parameterized.Parameterized.link_parameters`
-``(*parameters)``) this callback function will be called to be able to
-update any precomputations for the kernel. Do not implement the
-gradient updates here, as those are being done by the model enclosing
-the kernel::
+``(*parameters)``) are changed, this callback function will be called. It may be used to
+update precomputations for the kernel. Do not implement the
+gradient updates here, as gradient updates are performed by the model enclosing
+the kernel. In this example, we issue a no-op::
 
     def parameters_changed(self):
         # nothing to do here
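Where a kernel does need per-update precomputation, this callback is the natural place for it. A hedged sketch (``_inv_l2`` is a hypothetical cache attribute of our own, not a GPy name)::

    def parameters_changed(self):
        # runs after every successful parameter change; refresh derived
        # values only -- gradient bookkeeping is handled by the model
        self._inv_l2 = 1.0 / float(self.lengthscale) ** 2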
@@ -90,8 +92,9 @@ the kernel::
 The implementation of this function is mandatory.
 
 This function is used to compute the covariance matrix associated with
-the inputs X, X2 (np.arrays with arbitrary number of line (say
-:math:`n_1`, :math:`n_2`) and ``self.input_dim`` columns). ::
+the inputs X, X2 (np.arrays with an arbitrary number of rows,
+:math:`n_1`, :math:`n_2`, corresponding to the number of samples over which to
+calculate covariance, and ``self.input_dim`` columns). ::
 
     def K(self,X,X2):
         if X2 is None: X2 = X
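The hunk above cuts off before the body of ``K`` completes. For reference, a plausible completion of the 1-D rational quadratic covariance used throughout this tutorial (our reconstruction, not part of the diff, consistent with the ``dist2`` expression reused by ``gradients_X`` below)::

    def K(self, X, X2):
        """Covariance between the rows of X (n1 x 1) and X2 (n2 x 1)."""
        if X2 is None:
            X2 = X
        # squared scaled distance between every pair of inputs
        dist2 = np.square((X - X2.T) / self.lengthscale)
        return self.variance * (1 + dist2 / 2.) ** (-self.power)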
@@ -171,16 +174,24 @@ is set to each ``param``. ::
 This function is required for GPLVM, BGPLVM, sparse models and uncertain inputs.
 
 Computes the derivative of the likelihood with respect to the inputs
-``X`` (a :math:`n \times q` np.array). The result is returned by the
-function which is a :math:`n \times q` np.array. ::
+``X`` (a :math:`n \times q` np.array), that is, it calculates the quantity:
+
+.. math::
+
+    \frac{\partial L}{\partial K} \frac{\partial K}{\partial X}
+
+The partial derivative matrix, in this case, comes out as an :math:`n \times q` np.array. ::
 
     def gradients_X(self,dL_dK,X,X2):
-        """derivative of the covariance matrix with respect to X."""
+        """derivative of the likelihood with respect to X, calculated using dL_dK*dK_dX"""
         if X2 is None: X2 = X
         dist2 = np.square((X-X2.T)/self.lengthscale)
 
-        dX = -self.variance*self.power * (X-X2.T)/self.lengthscale**2 * (1 + dist2/2./self.lengthscale)**(-self.power-1)
-        return np.sum(dL_dK*dX,1)[:,None]
+        dK_dX = -self.variance*self.power * (X-X2.T)/self.lengthscale**2 * (1 + dist2/2./self.lengthscale)**(-self.power-1)
+        return np.sum(dL_dK*dK_dX,1)[:,None]
 
+Were the number of parameters or the number of input dimensions larger than 1, the
+calculated partial derivative would be a 3- or 4-tensor.
+
 :py:func:`~GPy.kern.src.kern.Kern.gradients_X_diag` ``(self,dL_dKdiag,X)``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
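A quick way to validate a hand-derived ``gradients_X`` such as the one in the hunk above is a central finite-difference check. The sketch below is ours, not part of the commit; it assumes a kernel instance ``k`` exposing the ``K`` and ``gradients_X`` of this tutorial, and passes a distinct ``X2`` so that only the first argument varies::

    import numpy as np

    def check_gradients_X(k, n=5, m=4, eps=1e-6, seed=0):
        """Return the max abs difference between analytic and numeric dL/dX."""
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(n, 1))        # input_dim is 1 in this tutorial
        X2 = rng.normal(size=(m, 1))
        dL_dK = rng.normal(size=(n, m))    # stand-in for dL/dK from a model
        analytic = k.gradients_X(dL_dK, X, X2)
        numeric = np.zeros_like(X)
        for i in range(n):
            Xp, Xm = X.copy(), X.copy()
            Xp[i, 0] += eps
            Xm[i, 0] -= eps
            # d/dX_i of sum(dL_dK * K(X, X2)) via central differences
            numeric[i, 0] = np.sum(dL_dK * (k.K(Xp, X2) - k.K(Xm, X2))) / (2 * eps)
        return np.max(np.abs(analytic - numeric))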