# 主函数

## gp_set

type - Type of Gaussian process (a construction sketch follows the list below)

• ‘FULL’ full GP (default)
• ‘FIC’ fully independent conditional sparse approximation (requires inducing inputs X_u)
• ‘PIC’ partially independent conditional sparse approximation
• ‘CS+FIC’ compact support + FIC model sparse approximation
• ‘DTC’ deterministic training conditional sparse approximation
• ‘SOR’ subset of regressors sparse approximation
• ‘VAR’ variational sparse approximation
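
A minimal construction sketch (assuming the lik_gaussian and gpcf_sexp structures described later in these notes; the numeric values are illustrative):

```matlab
% Full GP for regression; 'FULL' is the default type so naming it is optional.
lik  = lik_gaussian('sigma2', 0.1);                     % Gaussian noise model
gpcf = gpcf_sexp('lengthScale', 1, 'magnSigma2', 0.1);  % squared exponential
gp   = gp_set('type', 'FULL', 'lik', lik, 'cf', gpcf);
```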

infer_params - String defining which parameters are inferred. The default is ‘covariance+likelihood’ (see the sketch after this list).

• ‘covariance’ = infer parameters of the covariance functions
• ‘likelihood’ = infer parameters of the likelihood
• ‘inducing’ = infer inducing inputs (in sparse approximations): W = gp.X_u(:)
• ‘covariance+likelihood’ = infer both the covariance-function parameters and the likelihood parameters; choosing only ‘covariance’ or only ‘likelihood’ keeps the other set of parameters fixed at their current values
• ‘covariance+inducing’ = infer covariance function parameters and inducing inputs
• ‘covariance+likelihood+inducing’
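
A small sketch of switching infer_params on an existing structure (gp is assumed to have been created with gp_set as above):

```matlab
% Infer only the covariance hyperparameters; likelihood parameters stay fixed.
gp = gp_set(gp, 'infer_params', 'covariance');
```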

The additional fields when the likelihood is not Gaussian (lik is not lik_gaussian or lik_gaussiansmt) are:

latent_method - Method for marginalizing over the latent values (with a non-Gaussian likelihood the latent values cannot be integrated out analytically, so they are handled by an approximation or by sampling). Possible methods are ‘Laplace’ (default), ‘EP’ and ‘MCMC’; see the sketch after the option list below.
latent_opt - Additional option structure for the chosen latent method. See default values for options below.

• ‘MCMC’
  • method - Function handle to the function that samples the latent values: @esls (default), @scaled_mh or @scaled_hmc
  • f - 1xn vector of latent values. The default is [].
• ‘Laplace’
  • optim_method - Method to find the posterior mode: ‘newton’ (default except for lik_t), ‘stabilized-newton’, ‘fminunc_large’, or ‘lik_specific’ (applicable and default for lik_t)
  • tol - convergence tolerance of the mode search
• ‘EP’
  • ‘robust-EP’
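
A sketch of choosing the latent method for a non-Gaussian likelihood (here the Student-t likelihood lik_t described below; the default Laplace approximation is named explicitly):

```matlab
lik  = lik_t();        % non-Gaussian (Student-t) observation model
gpcf = gpcf_sexp();
gp   = gp_set('lik', lik, 'cf', gpcf, 'latent_method', 'Laplace');
% 'EP' or 'MCMC' could be passed instead of 'Laplace'.
```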

The additional fields needed in sparse approximations are:

X_u - Inducing inputs; no default, has to be set when FIC, PIC, PIC_BLOCK, VAR, DTC, or SOR is used (see the sketch below).

Xu_prior - Prior for inducing inputs. The default is prior_unif.
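
A sketch of a FIC sparse model. The choice of 20 inducing points taken as a random subset of the training inputs is an illustrative assumption; x is assumed to be the n-by-d training input matrix, and lik, gpcf are structures as in the earlier sketches:

```matlab
% Pick 20 training inputs at random as inducing inputs (X_u has no default).
X_u = x(randperm(size(x, 1), 20), :);
gp  = gp_set('type', 'FIC', 'lik', lik, 'cf', gpcf, 'X_u', X_u);
% To also optimize the inducing inputs:
% gp = gp_set(gp, 'infer_params', 'covariance+likelihood+inducing');
```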

# The likelihood function I plan to use

## lik_t Create a Student-t likelihood structure

Parameters for the Student-t likelihood [default]; a construction sketch follows the list.

• sigma2 - scale squared 
• nu - degrees of freedom (the degrees of freedom are usually kept fixed rather than inferred)
• sigma2_prior - prior for sigma2 [prior_logunif] (log-uniform is flat on the log scale, a common uninformative choice for a positive scale parameter)
• nu_prior - prior for nu [prior_fixed]
• Note! If the prior is ‘prior_fixed’ then the parameter in question is considered fixed and it is not handled in optimization, grid integration, MCMC etc.
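
A construction sketch with the degrees of freedom fixed (the numeric values are illustrative, not recommendations):

```matlab
% Student-t likelihood: nu is fixed at 4 via prior_fixed, so it is not
% optimized; sigma2 gets the default log-uniform prior, written explicitly.
lik = lik_t('nu', 4, 'nu_prior', prior_fixed(), ...
            'sigma2', 0.1, 'sigma2_prior', prior_logunif());
```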

## lik_gaussiansmt Create a Gaussian scale-mixture likelihood structure with priors producing an approximation of the Student-t

The parameters of this likelihood can be inferred only by Gibbs sampling by calling GP_MC.

# The covariance function I will use

## gpcf_sexp Create a squared exponential (exponentiated quadratic) covariance function

• magnSigma2 - magnitude (squared) [0.1]
• lengthScale - length scale for each input. This can be either a scalar, corresponding to an isotropic covariance, or a vector defining a separate length scale for each input direction (a per-input length scale gives automatic relevance determination; see the sketch after this list).
• magnSigma2_prior - prior for magnSigma2 [prior_logunif] (log-uniform: flat on the log scale, a common uninformative choice for a positive magnitude parameter)
• lengthScale_prior - prior for lengthScale [prior_t] (a heavy-tailed, weakly informative choice; positivity is not enforced by the prior itself, the parameter is handled on the log scale during optimization)
• metric - metric structure used by the covariance function [] (e.g. metric_euclidean; it defines how distances between inputs are computed inside the covariance function)
• selectedVariables - vector defining which inputs are used [all]. selectedVariables is shorthand for using metric_euclidean with the corresponding components.
• kalman_deg - Degree of approximation in type ‘KALMAN’ (only relevant when the state-space/Kalman representation of the GP is used; not needed here)
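
A construction sketch with a separate length scale per input (assuming three input dimensions; the priors simply restate the defaults listed above):

```matlab
% ARD squared exponential: one length scale per input direction.
gpcf = gpcf_sexp('lengthScale', [1 1 1], 'magnSigma2', 0.1, ...
                 'lengthScale_prior', prior_t(), ...
                 'magnSigma2_prior', prior_logunif());
```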

### Subfunctions

gpcf_sexp_lp: Evaluate the log prior of covariance function parameters, returns $\log p(\theta)$

# The priors I will use

## prior_t Student-t prior structure

Parameters for the Student-t prior [default]; a usage sketch follows the list.

• mu - location 
• s2 - scale 
• nu - degrees of freedom 
• mu_prior - prior for mu [prior_fixed]
• s2_prior - prior for s2 [prior_fixed]
• nu_prior - prior for nu [prior_fixed]
• These hyperprior parameters are fixed by default, which simply cuts off the prior hierarchy at this level; any of them can still be given its own prior structure if a deeper hierarchy is wanted.
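
A usage sketch attaching an explicitly parameterized Student-t prior to the length scale (mu=0, s2=4, nu=4 are illustrative values only):

```matlab
pl   = prior_t('mu', 0, 's2', 4, 'nu', 4);          % location, scale, dof
gpcf = gpcf_sexp('lengthScale', 1, 'lengthScale_prior', pl);
```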

# Other hidden functions

## gp_eg (calls gp_e and gp_g)

• GP_EG: Evaluate the energy function (un-normalized negative marginal log posterior) and its gradient
• GP_E: Evaluate the energy function (un-normalized negative log marginal posterior)
• GP_G: Evaluate the gradient of energy (GP_E) for Gaussian Process

The energy is the minus-log-posterior cost function:

$E = E_{\mathrm{DATA}} + E_{\mathrm{PRIOR}} = -\log p(\bm{Y} \mid \bm{X}, \theta) - \log p(\theta)$

where $\theta$ represents the parameters (lengthScale, magnSigma2, …), $\bm{X}$ are the inputs and $\bm{Y}$ are the observations (regression) or the latent values (non-Gaussian likelihood).
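
A sketch of how these functions are typically used (x and y are assumed to be the training inputs and targets; the optimizer tolerances are illustrative):

```matlab
w      = gp_pak(gp);             % pack the (log-transformed) parameters into a vector
[e, g] = gp_eg(w, gp, x, y);     % energy and its gradient at w
opt    = optimset('TolFun', 1e-3, 'TolX', 1e-3);
gp     = gp_optim(gp, x, y, 'opt', opt);   % minimize the energy (MAP estimate)
```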

# Some conclusions

• Do not use lik_gaussiansmt (its parameters can only be inferred by Gibbs sampling via GP_MC).