Releases · GAMES-UChile/mogptk
v0.5.1
- Add sampling to all likelihoods
- Add exact variational expectation implementation for the exponential likelihood
- Add exact variational expectation implementation for the Poisson likelihood (see the sketch below)
- Add exact variational expectation implementation for the Gamma likelihood
- Add sampling of y values
- Bug fixes in likelihoods
Added mathematical notes to the repository: https://github.com/GAMES-UChile/mogptk/blob/master/notes/notes.pdf
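For reference, the exact variational expectation for the Poisson likelihood mentioned above has a well-known closed form under a Gaussian posterior with a log link. The sketch below is a standalone illustration of that quantity (not mogptk's implementation) and checks it against Monte Carlo:

```python
import torch

def poisson_variational_expectation(y, mu, var):
    # Closed-form E_q[log p(y | f)] for a Poisson likelihood with
    # rate exp(f) under q(f) = N(mu, var):
    #   E[y*f - exp(f) - log(y!)] = y*mu - exp(mu + var/2) - lgamma(y+1)
    # using E[exp(f)] = exp(mu + var/2) for Gaussian f.
    return y * mu - torch.exp(mu + 0.5 * var) - torch.lgamma(y + 1.0)

# Monte Carlo check of the closed form
y, mu, var = torch.tensor(3.0), torch.tensor(0.7), torch.tensor(0.2)
f = mu + var.sqrt() * torch.randn(100_000)
mc = (y * f - torch.exp(f) - torch.lgamma(y + 1.0)).mean()
print(poisson_variational_expectation(y, mu, var).item(), mc.item())
```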
v0.5.0
- Many, many bug fixes
- Revise all examples in the documentation
- Revert to the default `float64` dtype instead of `float32` to avoid precision errors (see the sketch after this list)
- Improve verbose training output
- Improve plots slightly
- Add `mogptk.gpr.MultiOutputMean`, which supports a different mean function for each output dimension
- Make all randomness come from PyTorch only, not also from NumPy
- Revert the memory optimization for the exact model to avoid Cholesky problems
- Add prediction confidence intervals and likelihood sampling
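The revert to `float64` addresses exactly the precision errors that break a Cholesky decomposition. A standalone PyTorch sketch (not mogptk code) of how an ill-conditioned Gram matrix typically fails in single precision but passes in double:

```python
import torch

torch.manual_seed(0)
n = 100
# PSD matrix with condition number ~1e10: comfortably inside
# float64's ~1e15 working range, but beyond float32's ~1e7, so the
# float32 Cholesky typically fails while float64 succeeds.
Q, _ = torch.linalg.qr(torch.randn(n, n, dtype=torch.float64))
K = Q @ torch.diag(torch.logspace(0, -10, n, dtype=torch.float64)) @ Q.T

for dtype in (torch.float32, torch.float64):
    try:
        torch.linalg.cholesky(K.to(dtype))
        print(dtype, "Cholesky ok")
    except RuntimeError:
        print(dtype, "Cholesky failed")
```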
v0.4.0
- Use `inference_mode` instead of `no_grad` for a slight increase in performance (see the sketch after this list)
- Big change: all classes now inherit from `torch.nn.Module` or `torch.nn.Parameter`; parameters have been refactored, and naming for kernels/means/likelihoods was removed
- Rename `copy_parameters` to `load_kernel_parameters` and `get_parameters` to `parameters`; add a `__str__` method to the model
- Optimization: set gradients to None instead of zeroing them
- Optimization: reduce memory usage of the Exact model by not preallocating the 'eye' tensor for the Gram matrix
- Support `torch.jit` by default for training, which improves performance by about 10% after the first two iterations (warm-up)
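Both optimizations are plain PyTorch features rather than mogptk-specific code; a minimal sketch of each:

```python
import torch

model = torch.nn.Linear(10, 1)

# inference_mode() is a stricter, faster no_grad(): tensors created
# inside skip autograd's view and version-counter bookkeeping entirely.
with torch.inference_mode():
    y = model(torch.randn(32, 10))

# Setting gradients to None instead of zeroing them avoids a full
# memory write per parameter on every iteration.
opt = torch.optim.Adam(model.parameters())
opt.zero_grad(set_to_none=True)
```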
v0.3.5
- Improve the accuracy of the time interval displayed in training progress
- Add an error when prediction data has the wrong shape
- Accept `torch.Tensor` for `Data` and `DataSet`
- Bug fixes and a 10% speed improvement from caching the parameter transformation
- Default to `float32` instead of `float64` for PyTorch; this reduces memory usage by default. Use `mogptk.gpr.use_double_precision()` to revert to the previous default (see the sketch below).
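A hedged usage sketch: only `mogptk.gpr.use_double_precision()` itself is confirmed by these notes; calling it before constructing any data or models is an assumed, not documented, ordering:

```python
import mogptk

# Restore the pre-v0.3.5 float64 default. Assumed usage: call this
# before creating any Data, DataSet, or Model objects.
mogptk.gpr.use_double_precision()
```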
v0.3.4
v0.3.2
v0.3.1
- Fix conversions to/from GPU
- Fix error in `plot_losses()`
- Rename `gpr.PhiKernel` to `gpr.FunctionKernel`
- Add kernel shortcuts such as `mogptk.Kernels.SpectralMixture`
- Include the end point when calling `Data.remove_range()`
- Fix input dimensions for `AddKernel` and `MulKernel`
- Add `sigma` and `figsize` arguments to `Model.plot_prediction()` (see the sketch below)
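A hedged sketch tying these items together. Only `mogptk.Kernels.SpectralMixture`, `Data.remove_range()`, and the `sigma`/`figsize` arguments of `Model.plot_prediction()` come from these notes; the constructor signatures, the `Q` argument, and the `mogptk.Model` call are assumptions:

```python
import numpy as np
import mogptk

x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)

data = mogptk.Data(x, y)
data.remove_range(4.0, 6.0)  # the end point 6.0 is now included

kernel = mogptk.Kernels.SpectralMixture(Q=2)  # shortcut from this release; Q assumed
model = mogptk.Model(data, kernel)            # hypothetical construction
model.plot_prediction(sigma=2, figsize=(10, 4))
```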
v0.3.0
Features
- Support for variational and sparse models
- Support for multi-output (heterogeneous) likelihoods, i.e. different likelihoods for each channel
- New models: `Snelson`, `OpperArchambeau`, `Titsias`, `Hensman`
- New kernels: `Constant`, `White`, `Exponential`, `LocallyPeriodic`, `Cosine`, `Sinc`
- New likelihoods: `StudentT`, `Exponential`, `Laplace`, `Bernoulli`, `Beta`, `Gamma`, `Poisson`, `Weibull`, `LogLogistic`, `LogGaussian`, `ChiSquared`
- New mean functions: `Constant` and `Linear`
- Allow kernels to be added and multiplied (i.e. `K1 + K2` or `K1 * K2`); see the sketch after this list
- `Data` and `DataSet` now accept more data types as input, such as pandas series
- `Data`, `DataSet`, and `Model` plot functionalities return the figure and axes to allow customization
- Support sampling (prior or posterior) from the model
- Add the MOHSM kernel: multi-output harmonic spectral mixture kernel (Altamirano 2021)
- Parameters can be pegged to other parameters, essentially removing them from training
- The Exact model supports training with known data point variances and draws their error bars in plots
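Since kernels now compose with `+` and `*` (building the `AddKernel` and `MulKernel` mentioned in v0.3.1), a sum-of-products kernel can be written directly. The kernel class names below all appear in this release; their constructor arguments are omitted here and may be required in practice:

```python
import mogptk

# Smooth trend times a periodic component, plus white noise;
# `+` and `*` build AddKernel and MulKernel under the hood.
k = mogptk.gpr.SquaredExponential() * mogptk.gpr.Periodic() + mogptk.gpr.White()
```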
Improvements
- Jitter added to the diagonal before calculating the Cholesky is now relative to the average value of the diagonal; this improves numeric stability for all kernels irrespective of the actual numerical magnitude of the values (see the sketch after this list)
- Kernels now implement `K_diag`, which returns the kernel diagonal, for better performance
- The BNSE initialization method has been reimplemented with improved performance and stability
- Parameter initialization for all models from the different initialization methods has been much improved
- Inducing point initialization now supports `random`, `grid`, or `density`
- Add `SpectralMixture` (in addition to `Spectral`) and `MultiOutputSpectralMixture` (in addition to `MultiOutputSpectral`) with higher performance
- Allow mixing of single-output and multi-output kernels using `active`
- All plotting functions have been restyled
- Model training allows a custom error function to be calculated at each iteration
- Support single and cross lengthscales for the `SquaredExponential`, `RationalQuadratic`, `Periodic`, and `LocallyPeriodic` kernels
- Add AIC and BIC methods to the model
- Add `model.plot_correlation()`
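The relative-jitter change in the first item above is easy to state in standalone PyTorch (a sketch of the idea, not mogptk's actual code):

```python
import torch

def safe_cholesky(K, jitter=1e-8):
    # Scale the jitter by the mean of the diagonal so the
    # stabilization is invariant to the overall magnitude of the
    # kernel values, as described above.
    eye = torch.eye(K.shape[0], dtype=K.dtype, device=K.device)
    return torch.linalg.cholesky(K + jitter * K.diagonal().mean() * eye)
```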
Changes
- Remove `rescale_x`
- `Parameter.trainable` => `Parameter.train`
- Kernels are by default initialized deterministically and not randomly; however, the models (MOSM, MOHSM, CONV, CSM, SM-LMC, and SM) are still initialized randomly by default
- Plotting predictions happens from the model, not the data: `model.plot_prediction()` instead of `model.predict(); data.plot()`
v0.2.5
v0.2.4
- Set the maximum frequency to the Nyquist frequency in MOSM, CSM, SM-LMC, and SM; fixes #21 (see the sketch after this list)
- Improve CholeskyException messaging
- Update the GONU example
- Fix `Sigmoid.backward`, fixes #25
- Add support for multiple input dimensions for `remove_range`, fixes #24
- Fix SM model initialization for IPS
- `Data` now permits different dtypes per input dimension for X
- `LoadFunction` now works for multiple input dimensions
- Fix upgrading the time delta for `datetime64`
- Change X from `(n,input_dims)` to `[(n,)] * input_dims`
- Add a `dim` argument to functions to specify the input dimension
- Fix example 06
- Fix old import path, fixes #27
- Reuse `torch.eye` in `log_marginal_likelihood`
- Make `rescale_x` optional for models, see #28; return losses and errors from `train()`
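On the Nyquist cap in the first item of this list: for samples spaced Δt apart, frequencies above 1/(2Δt) are indistinguishable from lower ones (aliasing), so it is the natural upper bound for initialized spectral frequencies. A standalone sketch; estimating Δt by the median spacing is an assumption, not necessarily what mogptk does:

```python
import numpy as np

x = np.sort(np.random.uniform(0.0, 10.0, 100))  # irregular sample locations
dt = np.median(np.diff(x))                      # typical spacing (assumed estimator)
f_nyquist = 1.0 / (2.0 * dt)                    # upper bound for spectral frequencies
print(f_nyquist)
```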