almosallam_heteroscedastic_2017
Heteroscedastic Gaussian processes for uncertain and incomplete data
Ibrahim Almosallam
PhD thesis, University of Oxford, 2017.
https://ora.ox.ac.uk/objects/uuid:6a3b600d-5759-456a-b785-5f89cf4ede6d
Some useful identities:
$$ p(x) = \int p(x,y)\,\mathrm{d}y = \int p(x|y)\,p(y)\,\mathrm{d}y \tag{A.6} $$
$$ \mathcal{N}(\bm{x}|\bm{a},\bm{A})\, \mathcal{N}(\bm{x}|\bm{b},\bm{B}) = \mathcal{N}(\bm{a}|\bm{b},\bm{A}+\bm{B})\, \mathcal{N}(\bm{x}|\bm{c},\bm{C}) \tag{A.14} $$
where $\bm{c}=\bm{C}(\bm{A}^{-1}\bm{a}+\bm{B}^{-1}\bm{b})$ and $\bm{C}=(\bm{A}^{-1}+\bm{B}^{-1})^{-1}$.
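As a quick numerical sanity check of identity (A.14), the following sketch (my own toy example, not from the thesis) evaluates both sides at a random point with random symmetric positive-definite matrices:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d = 3
a, b, x = rng.normal(size=(3, d))     # two means and an evaluation point

def random_spd(rng, d):
    """Random symmetric positive-definite matrix."""
    M = rng.normal(size=(d, d))
    return M @ M.T + d * np.eye(d)

A, B = random_spd(rng, d), random_spd(rng, d)

# c and C as defined under (A.14).
C = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
c = C @ (np.linalg.solve(A, a) + np.linalg.solve(B, b))

lhs = multivariate_normal.pdf(x, a, A) * multivariate_normal.pdf(x, b, B)
rhs = multivariate_normal.pdf(a, b, A + B) * multivariate_normal.pdf(x, c, C)
print(lhs, rhs)   # both sides agree to numerical precision
```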
Eq. (A.11), left incomplete in these notes ($p(\bm{x}\ldots$), is the identity used below to obtain the conditional distribution in Eq. (2.10).
2 Gaussian Processes for Regression
2.6 Summary
We have shown in this chapter that basis function models are sparse Gaussian processes of type SoR, but with a different prior on the latent variables.
The prior on the latent variables differs between the BFM and SoR; everything else is the same.
A main distinction between the way ANNs are traditionally optimised and how BFMs are trained is that the former performs a regularised maximum likelihood estimation, or maximum a posteriori (MAP) estimation, while the latter maximises the marginal likelihood. Maximising the marginal likelihood is less prone to overfitting, due to the additional cost term that minimises the variance on the weight vector $\bm{w}$, which we have shown is related to the sparsity constraint in sparse auto-encoders. In the next chapter, we show how we can use this view of sparse GPs to extend their modelling capabilities.
?? Are maximum likelihood estimation and MAP the same concept? (To verify: they are not. Plain maximum likelihood uses no prior; it is regularised maximum likelihood that corresponds to MAP, with the regulariser playing the role of the log-prior.)
So, fundamentally, the reason the BFM is less prone to overfitting is that it maximises the MARGINAL LIKELIHOOD.
Only through this does it bear any relation to ANNs and auto-encoders.
2.1 Full Gaussian Processes
The dataset is $\mathcal{D}=\{\bm{X},\bm{y}\}$, consisting of inputs $\bm{X}=\{\bm{x}_i\}_{i=1}^n \in \mathbb{R}^{d\times n}$ and target outputs $\bm{y}=\{y_i\}_{i=1}^n \in \mathbb{R}^n$.
$d$ is the dimensionality of the input $\bm{x}_i$;
$n$ is the number of data points.
ASSUMPTION: each observed target $y_i$ is generated by a function of $\bm{x}_i$ plus additive noise $\varepsilon_i$:
$$ y_i = f(\bm{x}_i) + \varepsilon_i \tag{2.1} $$
$$ \varepsilon_i \sim \mathcal{N}(0,\sigma^2), \quad \forall i \in \{1,\dots,n\} $$
LIKELIHOOD: the probability of observing the targets,
$$ p(\bm{y}|\bm{f}_\bm{x},\sigma^2) = \mathcal{N}(\bm{y}|\bm{f}_\bm{x}, \sigma^2 I_n) \tag{2.2} $$
where $\bm{f}_\bm{x} = [f(\bm{x}_1),\cdots,f(\bm{x}_n)]^T$.
POSTERIOR (?): distribution of $\bm{f}_\bm{x}$ given the observations,
$$ p(\bm{f}_\bm{x}|\bm{y},\bm{X},\sigma^2) = \frac{p(\bm{y}|\bm{f}_\bm{x},\sigma^2)\, p(\bm{f}_\bm{x}|\bm{X})}{p(\bm{y}|\bm{X},\sigma^2)} \tag{2.3} $$
PRIOR: over the space of functions,
$$ p(\bm{f}_\bm{x}|\bm{X}) = \mathcal{N}(0,\bm{K}) $$
$\bm{K} = \kappa(\bm{X},\bm{X})$, symmetric and positive semi-definite;
$\kappa(\bm{x}_i,\bm{x}_j)$ is the covariance/kernel function, for example a Mercer kernel.
MARGINAL LIKELIHOOD, computed from Eq.(A.6) and (A.14):
$$ p(\bm{y}|\bm{X},\bm{\theta}) = \int p(\bm{y}|\bm{f}_\bm{x},\bm{\theta})\, p(\bm{f}_\bm{x}|\bm{X},\bm{\theta})\, \mathrm{d}\bm{f}_\bm{x} = \mathcal{N}(\bm{y}|0,\bm{K}+\sigma^2 I_n) \tag{2.7} $$
$\bm{\theta} = \{\bm{\varphi},\sigma^2\}$, the hyperparameters of the GP model;
$\bm{\varphi}$, the parameters of the kernel;
$\sigma^2$, the noise variance.
ASSUMPTION: the joint distribution of the observed targets $\bm{y}$ and the test function values $\bm{f}_*$ is a multivariate Gaussian,
$$ p(\bm{y},\bm{f}_*|\bm{X},\bm{X}_*,\bm{\theta}) = \mathcal{N}\!\left( \left.\begin{bmatrix}\bm{y}\\ \bm{f}_*\end{bmatrix}\right| \bm{0}, \begin{bmatrix}\bm{K}_{\bm{xx}}+\sigma^2 I_n & \bm{K}_{\bm{x}*}\\ \bm{K}_{*\bm{x}} & \bm{K}_{**}\end{bmatrix} \right) \tag{2.9} $$
CONDITIONAL PROBABILITY of the function values $\bm{f}_*$, computed from Eq.(A.11),
$$ p(\bm{f}_*|\bm{X}_*,\mathcal{D},\bm{\theta}) = \mathcal{N}(\bm{f}_*|\mathbb{E}[\bm{f}_*],\mathbb{V}[\bm{f}_*]) \tag{2.10} $$
$$ \mathbb{E}[\bm{f}_*] = \bm{K}_{*\bm{x}}(\bm{K}_{\bm{xx}}+\sigma^2 I_n)^{-1}\bm{y} $$
$$ \mathbb{V}[\bm{f}_*] = \bm{K}_{**}-\bm{K}_{*\bm{x}}(\bm{K}_{\bm{xx}}+\sigma^2 I_n)^{-1}\bm{K}_{\bm{x}*} $$
PREDICTIVE DISTRIBUTION of future observations $\bm{y}_*$ given the dataset $\mathcal{D}$, computed from Eq.(A.4) and (A.16),
$$ p(\bm{y}_*|\bm{X}_*,\mathcal{D},\bm{\theta}) = \int p(\bm{y}_*|\bm{f}_*,\bm{\theta})\, p(\bm{f}_*|\bm{X}_*,\mathcal{D},\bm{\theta})\, \mathrm{d}\bm{f}_* = \mathcal{N}(\bm{y}_* | \mathbb{E}[\bm{y}_*],\mathbb{V}[\bm{y}_*]) \tag{2.13} $$
$$ \mathbb{E}[\bm{y}_*] = \mathbb{E}[\bm{f}_*] $$
$$ \mathbb{V}[\bm{y}_*] = \mathbb{V}[\bm{f}_*] + \sigma^2 I_{n_*} $$
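A minimal numpy sketch of Eqs. (2.10)–(2.13), assuming a squared-exponential kernel and toy 1-D data (both my own placeholder choices):

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0, sf2=1.0):
    """Squared-exponential kernel: sf2 * exp(-|a - b|^2 / (2 ell^2))."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return sf2 * np.exp(-0.5 * sq / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))               # training inputs (n x d)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)    # noisy targets
Xs = np.linspace(-4, 4, 200)[:, None]              # test inputs
sigma2 = 0.1**2

Kxx = rbf_kernel(X, X) + sigma2 * np.eye(len(y))   # K_xx + sigma^2 I_n
Ksx = rbf_kernel(Xs, X)
Kss = rbf_kernel(Xs, Xs)

# Eq. (2.10): posterior over f_* after conditioning the joint (2.9) on y.
mean_f = Ksx @ np.linalg.solve(Kxx, y)             # E[f_*]
var_f = Kss - Ksx @ np.linalg.solve(Kxx, Ksx.T)    # V[f_*]

# Eq. (2.13): the predictive distribution of y_* adds the noise variance.
mean_y, var_y = mean_f, var_f + sigma2 * np.eye(len(Xs))
print(mean_y[:3], np.diag(var_y)[:3])
```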
The hyperparameters of the GP model are optimised by maximising the log marginal likelihood, Eq.(2.7):
$$ \ln p(\bm{y}|\bm{X},\bm{\theta}) = -\frac{1}{2} \bm{y}^T(\bm{K}+\sigma^2 I_n)^{-1}\bm{y} - \frac{1}{2} \ln |\bm{K}+\sigma^2 I_n| - \frac{n}{2}\ln(2\pi) \tag{2.17} $$
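Eq. (2.17) is usually evaluated via a Cholesky factorisation for numerical stability; a small self-contained sketch (again with an assumed RBF kernel and toy data):

```python
import numpy as np

def log_marginal_likelihood(K, y, sigma2):
    """Eq. (2.17): ln N(y | 0, K + sigma^2 I_n), via a Cholesky factorisation."""
    n = len(y)
    L = np.linalg.cholesky(K + sigma2 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))            # = -0.5 * ln|K + sigma^2 I_n|
            - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 1))
y = np.sin(X[:, 0])
sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
K = np.exp(-0.5 * sq)                               # RBF kernel with unit scales
print(log_marginal_likelihood(K, y, sigma2=0.01))
```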
The description and notation closely follow GPML (Rasmussen and Nickisch, 2010).
2.2 Numerical Approximation
Low Rank Approximation
$$ \Sigma_p = \sigma^{-2}K_{px}K_{xp} + K_{pp} \tag{2.23} $$
Conjugate Gradient Method
2.3 Sparse Gaussian Processes
2.3.1 Subset of Regressors (SoR)
INDUCING / LATENT variables $\bm{f}_p$: $p(\bm{f}_p)=\mathcal{N}(\bm{f}_p|0,K_{pp})$.
JOINT distribution:
$$ p(\bm{f}_x,\bm{f}_*) = \int p(\bm{f}_x,\bm{f}_*|\bm{f}_p)\, p(\bm{f}_p)\, \mathrm{d}\bm{f}_p \tag{2.24} $$
ASSUMPTION: the conditional distributions of $\bm{f}_x$ and $\bm{f}_*$ are independent given $\bm{f}_p$.
$$ p(\bm{f}_x,\bm{f}_*) \approx q(\bm{f}_x,\bm{f}_*) = \int q(\bm{f}_x|\bm{f}_p)\, q(\bm{f}_*|\bm{f}_p)\, p(\bm{f}_p)\, \mathrm{d}\bm{f}_p \tag{2.25} $$
The exact expressions for the training and test conditionals can be derived from a noise-free GP.
Under the ASSUMPTION that both the training and the test conditionals are deterministic:
$$ q(\bm{f}_x|\bm{f}_p)=\mathcal{N}(K_{xp}K_{pp}^{-1}\bm{f}_p,\,0) \tag{2.28} $$
$$ q(\bm{f}_*|\bm{f}_p)=\mathcal{N}(K_{*p}K_{pp}^{-1}\bm{f}_p,\,0) \tag{2.29} $$
Effective prior:
$$ q(\bm{f}_x,\bm{f}_*) = \mathcal{N}\!\left(\left.\begin{bmatrix}\bm{f}_x\\ \bm{f}_*\end{bmatrix}\right| \bm{0}, \begin{bmatrix}Q_{xx} & Q_{x*}\\ Q_{*x} & Q_{**}\end{bmatrix}\right) \tag{2.30} $$
where $Q_{ab}=K_{ap}K_{pp}^{-1}K_{pb}$ is the equivalent covariance function.
(?? My guess: this step comes from completing the square when evaluating Eq. 2.25.)
$$ \mathbb{E}(\bm{f}_*) = \sigma^{-2}K_{*p}\Sigma_p^{-1}K_{px}\bm{y} \tag{2.34} $$
$$ \mathbb{V}(\bm{f}_*) = K_{*p}\Sigma_p^{-1}K_{p*} \tag{2.35} $$
This is referred to as the subset of regressors (SoR) sparse Gaussian process (Quiñonero-Candela and Rasmussen, 2005), which is identical to the low rank approximation method except that the set of inducing points is now optimised as part of the hyperparameter set. Thus, the low rank approximation method is a sparse GP where both the training and test conditionals are deterministic and the set of inducing points is held fixed. (p.20)
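A short numpy sketch of the SoR predictor, Eqs. (2.23), (2.34) and (2.35); the RBF kernel, inducing-point locations and toy data are my own placeholder assumptions. The dominant cost is forming the $m\times m$ matrix $\Sigma_p$, i.e. $O(nm^2)$ rather than $O(n^3)$:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / ell**2)

rng = np.random.default_rng(1)
n, m, sigma2 = 500, 20, 0.01
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + np.sqrt(sigma2) * rng.normal(size=n)
P = np.linspace(-3, 3, m)[:, None]      # inducing-point locations
Xs = np.linspace(-4, 4, 100)[:, None]   # test inputs

Kpx = rbf(P, X)
Kpp = rbf(P, P)
Ksp = rbf(Xs, P)

# Eq. (2.23): Sigma_p = sigma^-2 K_px K_xp + K_pp   (m x m, O(n m^2))
Sigma_p = Kpx @ Kpx.T / sigma2 + Kpp

# Eqs. (2.34)-(2.35): SoR predictive mean and covariance.
mean_f = Ksp @ np.linalg.solve(Sigma_p, Kpx @ y) / sigma2
var_f = Ksp @ np.linalg.solve(Sigma_p, Ksp.T)
print(mean_f[:3], np.diag(var_f)[:3])
```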
ASSUMPTION on the training and test conditionals in the FITC approximation (compare Eqs. 2.28 and 2.29):
$$ q(\bm{f}_x|\bm{f}_p) = \mathcal{N}(K_{xp}K_{pp}^{-1}\bm{f}_p,\,\mathrm{diag}[K_{xx}-K_{xp}K_{pp}^{-1}K_{px}]) \tag{2.36} $$
$$ q(\bm{f}_*|\bm{f}_p) = p(\bm{f}_*|\bm{f}_p) \tag{2.37} $$
$$ p(\bm{y}|\bm{f}_x) \approx q(\bm{y}|\bm{f}_x) = \mathcal{N}(\bm{f}_x,\Lambda) \tag{2.38} $$
where $\Lambda = \mathrm{diag}[K_{xx}-Q_{xx}]+\sigma^2I_n$.
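Only the diagonal of $Q_{xx}$ is needed for $\Lambda$ in Eq. (2.38); a minimal sketch under the same kind of assumed RBF kernel:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / ell**2)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 1))
P = rng.normal(size=(15, 1))           # inducing inputs
sigma2 = 0.01

Kxx_diag = np.ones(len(X))             # k(x, x) = 1 for this RBF kernel
Kxp = rbf(X, P)
Kpp = rbf(P, P)

# diag of Q_xx = K_xp K_pp^-1 K_px, computed without the full n x n matrix.
Qxx_diag = np.sum(Kxp * np.linalg.solve(Kpp, Kxp.T).T, axis=1)
Lambda = np.diag(Kxx_diag - Qxx_diag) + sigma2 * np.eye(len(X))
print(np.diag(Lambda)[:5])
```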
2.4 Basis Function Models
2020-10-02, re-organised line of reasoning:
Define the BFM model
–> obtain the likelihood of $\bm{y}$ under the BFM, $p(\bm{y}|X,\bm{\theta},\bm{w})$ (note that it is conditioned on $\bm{w}$)
–> define the prior on $\bm{w}$ in the BFM, $p(\bm{w}|\alpha)$ (the GP prior itself was already defined for the full GP)
–> compute the evidence (marginal likelihood) $p(\bm{y}|X,\bm{\theta})$: integrate $\bm{w}$ out of $p(\bm{f}|X,\bm{\theta},\bm{w})$, then integrate $\bm{f}_x$ out of $p(\bm{y}|\bm{f}_x,X,\bm{\theta})$
–> use Bayes' theorem to compute the posterior of $\bm{w}$, $p(\bm{w}|\bm{y},X,\bm{\theta},\alpha)= \frac{p(\bm{y}|\bm{w},X,\bm{\theta},\alpha)\, p(\bm{w}|\alpha)}{p(\bm{y}|X,\bm{\theta},\alpha)}$ (every term on the right is now available, so it can be evaluated)
–> Bayes' theorem can also be used to express the evidence $p(\bm{y}|X,\bm{\theta})$ via $\bm{w}$, i.e. by factorising $p(\bm{y},\bm{w}|X,\bm{\theta},\alpha)$
–> arrive at a simpler form of the marginal likelihood function
BFM: a linear combination of $m$ non-linear basis functions plus additive noise,
$$ y_i = \bm{\phi}(\bm{x}_i)^T\bm{w} + \varepsilon_i $$
$\bm{\phi}(\bm{x}_i) = [\phi_1(\bm{x}_i),\cdots,\phi_m(\bm{x}_i)]^T \in \mathbb{R}^m$, with $m\ll n$;
$\varepsilon_i \sim \mathcal{N}(0,\sigma^2)$.
LIKELIHOOD: (presumably obtained by completing the square)
$$ p(\bm{y}|\bm{X},\bm{\theta},\bm{w}) = \int p(\bm{y}|\bm{f}_\bm{x},\bm{\theta})\, p(\bm{f}_\bm{x}|\bm{X},\bm{\theta},\bm{w})\, \mathrm{d}\bm{f}_\bm{x} $$
$$ = \int \mathcal{N}(\bm{y}|\bm{f}_\bm{x},\sigma^2 I_n)\, \mathcal{N}(\bm{f}_\bm{x}|\bm{\Phi}_\bm{x}^T\bm{w},0)\, \mathrm{d}\bm{f}_\bm{x} $$
$$ = \mathcal{N}(\bm{y}|\bm{\Phi}_\bm{x}^T\bm{w},\sigma^2 I_n) \tag{2.47} $$
$$ \bm{\Phi}_\bm{x} = \begin{bmatrix} \phi_1(\bm{x}_1) & \cdots & \phi_1(\bm{x}_n) \\ \vdots & \ddots & \vdots \\ \phi_m(\bm{x}_1) & \cdots & \phi_m(\bm{x}_n) \end{bmatrix} $$
PRIOR on $\bm{w}$ (the resulting functions are assumed to be smooth):
$$ p(\bm{w}|\alpha) = \mathcal{N}(\bm{w}|0,\bm{A}^{-1}), \quad \bm{A}=\alpha I_m $$
$$ p(\bm{f}_x|X,\bm{\theta}) = \int p(\bm{f}_x|X,\bm{\theta},\bm{w})\, p(\bm{w}|\alpha)\, \mathrm{d}\bm{w} \tag{2.51, modified} $$
$$ = \mathcal{N}(\bm{f}_x|0,\Phi_x^T A^{-1}\Phi_x) \tag{2.54, modified} $$
MARGINAL LIKELIHOOD (per Wikipedia: in the context of Bayesian statistics, it may also be referred to as the evidence or model evidence):
$$ p(\bm{y}|X,\bm{\theta}) = \int p(\bm{y}|\bm{f}_x,\bm{\theta})\, p(\bm{f}_x|X,\bm{\theta})\, \mathrm{d}\bm{f}_x \tag{2.55} $$
$$ = \mathcal{N}(\bm{y}|0,\Phi_x^T A^{-1}\Phi_x+\sigma^2I_n) \tag{2.56} $$
[This marginal likelihood function should be maximised in order to obtain all the hyperparameters.]
[An alternative way to derive the marginal likelihood expression follows. I do not fully follow it; it feels suspiciously like a circular derivation.]
[Another angle for simplifying Eq. 2.56 directly, by substituting a particular $\bm{w}$.]
POSTERIOR distribution of $\bm{w}$ (using $\beta=\sigma^{-2}$):
$$ p(\bm{w}|\bm{y},X,\bm{\theta}) = \frac{p(\bm{y}|X,\bm{\theta},\bm{w})\,p(\bm{w}|\alpha)}{p(\bm{y}|X,\bm{\theta})} = \mathcal{N}(\bm{w}|\bm{\bar{w}},\Sigma_w^{-1}) \tag{2.57/58} $$
$$ \bm{\bar{w}} = \beta\Sigma_w^{-1}\Phi_x\bm{y} \tag{2.59} $$
$$ \Sigma_w = \beta\Phi_x\Phi_x^T+A \tag{2.60} $$
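A numpy sketch of the weight posterior, Eqs. (2.59)–(2.60); the RBF basis functions, their centres and the toy data are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, alpha, beta = 300, 25, 1.0, 100.0           # beta = 1 / sigma^2
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=beta**-0.5, size=n)
P = np.linspace(-3, 3, m)[:, None]                # basis-function centres

# Phi_x is m x n, as in the notes: Phi[j, i] = phi_j(x_i).
Phi = np.exp(-0.5 * (P - X.T)**2)

A = alpha * np.eye(m)
Sigma_w = beta * Phi @ Phi.T + A                  # Eq. (2.60), posterior precision
w_bar = beta * np.linalg.solve(Sigma_w, Phi @ y)  # Eq. (2.59), posterior mean
print(w_bar[:5])
```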
MARGINAL LIKELIHOOD:
$$ p(\bm{y}|X,\bm{\theta}) = \frac{p(\bm{y}|X,\bm{\theta},\bm{w})\,p(\bm{w}|\alpha)}{p(\bm{w}|\bm{y},X,\bm{\theta})} \tag{2.61} $$
$$ = \frac{\mathcal{N}(\bm{y}|\Phi_x^T\bm{w},\beta^{-1}I_n)\,\mathcal{N}(\bm{w}|0,A^{-1})}{\mathcal{N}(\bm{w}|\bm{\bar{w}},\Sigma_w^{-1})} \tag{2.62} $$
Substituting $\bm{w} = \bm{\bar{w}}$ [this amounts to taking the $\bm{w}$ with maximum posterior density; see the discussion around Eq. 3.49 in bishop-pattern-2006. Is this related to $\bm{w}_\text{MAP}$? To investigate further.] [Eq. 2.62 holds for any $\bm{w}$, so a particular value is substituted purely to simplify it], we get
$$ p(\bm{y}|X,\bm{\theta}) = \mathcal{N}(\bm{y}|\Phi_x^T\bar{\bm{w}},\beta^{-1}I_n)\, \mathcal{N}(\bm{\bar{w}}|0,A^{-1})\,(2\pi)^{m/2}|\Sigma_w|^{-1/2} \tag{2.63} $$
Eq. 2.62 can also be evaluated directly, without substituting $\bar{\bm{w}}$:
$$ \text{Exponential terms of Eq. 2.62} = \exp\!\left[ -\frac{1}{2} \left[ (\bm{y}-\Phi_x^T\bm{w})^T\beta I_n(\bm{y}-\Phi_x^T\bm{w}) + \bm{w}^TA\bm{w} - (\bm{w}-\bar{\bm w})^T\Sigma_w(\bm{w}-\bar{\bm w}) \right] \right] $$
$$ -2\cdot\text{(inside of exp)} = \beta \bm{y}^T\bm{y} - 2\beta\bm{w}^T\Phi_x\bm{y} + \beta\bm{w}^T\Phi_x\Phi_x^T\bm{w} + \bm{w}^TA\bm{w} - \bm{w}^T\Sigma_w\bm{w} + 2\bm{w}^T\Sigma_w\bar{\bm{w}} - \bar{\bm{w}}^T\Sigma_w\bar{\bm{w}} \tag{expand} $$
$$ = \beta \bm{y}^T\bm{y} - 2\beta\bm{w}^T\Phi_x\bm{y} + \bm{w}^T(\beta\Phi_x\Phi_x^T+A)\bm{w} - \bm{w}^T\Sigma_w\bm{w} + 2\bm{w}^T\Sigma_w\bar{\bm{w}} - \bar{\bm{w}}^T\Sigma_w\bar{\bm{w}} \tag{collect.cancel} $$
$$ = \beta \bm{y}^T\bm{y} - 2\beta\bm{w}^T\Phi_x\bm{y} + 2\bm{w}^T\Sigma_w(\beta\Sigma_w^{-1}\Phi_x\bm{y}) - \bar{\bm{w}}^T\Sigma_w\bar{\bm{w}} \tag{collect.cancel} $$
$$ = \beta \bm{y}^T\bm{y} - \bar{\bm{w}}^T(\beta\Phi_x\Phi_x^T+A)\bar{\bm{w}} $$
$$ = (\bm{y}-\Phi_x^T\bar{\bm{w}})^T\beta I_n(\bm{y}-\Phi_x^T\bar{\bm{w}}) + 2\beta\bar{\bm{w}}^T\Phi_x\bm{y} - 2\beta\bar{\bm{w}}^T\Phi_x\Phi_x^T\bar{\bm{w}} - \bar{\bm{w}}^TA\bar{\bm{w}} \tag{complete.square} $$
$$ = (\bm{y}-\Phi_x^T\bar{\bm{w}})^T\beta I_n(\bm{y}-\Phi_x^T\bar{\bm{w}}) + 2\beta\bar{\bm{w}}^T\Phi_x\bm{y} - 2\bar{\bm{w}}^T\Sigma_w\beta\Sigma_w^{-1}\Phi_x\bm{y} + \bar{\bm{w}}^TA\bar{\bm{w}} \tag{substitute.cancel} $$
$$ = (\bm{y}-\Phi_x^T\bar{\bm{w}})^T\beta I_n(\bm{y}-\Phi_x^T\bar{\bm{w}}) + \bar{\bm{w}}^TA\bar{\bm{w}} \tag{another.square} $$
[Question: substituting $\bar{\bm{w}}$ directly gives the same result as the direct calculation above; what is the mathematical justification for the substitution? (To look up.) From a discussion with 超哥: it is because Eq. 2.62 holds for any $\bm{w}$, so a particular value can be substituted to simplify the expression.]
LOG MARGINAL LIKELIHOOD:
$$ \ln p(\bm{y}|X,\bm{\theta}) = -\frac{\beta}{2}\|\Phi_x^T\bm{\bar{w}}-\bm{y}\|^2 + \frac{n}{2}\ln\beta -\frac{n}{2}\ln 2\pi - \frac{\alpha}{2}\|\bm{\bar{w}}\|^2 + \frac{m}{2}\ln\alpha - \frac{1}{2}\ln|\Sigma_w| \tag{2.64} $$
[My corrections in Eq. 2.64: $\Phi_x\bar{\bm{w}} \rightarrow \Phi_x^T\bar{\bm{w}}$, and the weight penalty is the squared norm, since $\bar{\bm{w}}^TA\bar{\bm{w}} = \alpha\|\bar{\bm{w}}\|^2$.]
Verify that Eq. 2.64 and $\ln$(Eq. 2.56) are equivalent.
r.h.s of Eq.2.64:
$$ -2\cdot\mathrm{term1} - 2\cdot\mathrm{term4} = \beta (\Phi_x^T\bar{\bm{w}}-\bm{y})^T(\Phi_x^T\bar{\bm{w}}-\bm{y}) + \alpha \bar{\bm{w}}^T\bar{\bm{w}} $$
$$ = \beta\bm{y}^T\bm{y} + \beta( \bar{\bm{w}}^T\Phi_x\Phi_x^T\bar{\bm{w}} - 2 \bar{\bm{w}}^T\Phi_x\bm{y} ) + \alpha\bar{\bm{w}}^T\bar{\bm{w}} \tag{expand} $$
$$ = \beta\bm{y}^T\bm{y} + \beta \bm{y}^T[\beta^2\Phi_x^T\Sigma_w^{-T}\Phi_x\Phi_x^T\Sigma_w^{-1}\Phi_x]\bm{y} - 2\beta \bm{y}^T[\beta\Phi_x^T\Sigma_w^{-T}\Phi_x]\bm{y} + \alpha \bm{y}^T[\beta^2\Phi_x^T\Sigma_w^{-T}\Sigma_w^{-1}\Phi_x]\bm{y} \tag{substitute} $$
$$ = \beta\bm{y}^T\bm{y} + \beta^2\bm{y}^T \left[\beta\Phi_x^T\Sigma_w^{-T}\Phi_x\Phi_x^T\Sigma_w^{-1}\Phi_x - 2\Phi_x^T\Sigma_w^{-T}\Phi_x + \alpha\Phi_x^T\Sigma_w^{-T}\Sigma_w^{-1}\Phi_x\right] \bm{y} \tag{collect} $$
$$ = \beta\bm{y}^T\bm{y} + \beta^2\bm{y}^T \left[ \Phi_x^T\Sigma_w^{-T}( \beta\Phi_x\Phi_x^T+\alpha I_m )\Sigma_w^{-1}\Phi_x - 2\Phi_x^T\Sigma_w^{-T}\Phi_x \right] \bm{y} \tag{collect.inside} $$
$$ = \beta\bm{y}^T\bm{y} + \beta^2\bm{y}^T \left[ \Phi_x^T\Sigma_w^{-T}\Sigma_w\Sigma_w^{-1}\Phi_x - 2\Phi_x^T\Sigma_w^{-T}\Phi_x \right] \bm{y} \tag{cancel.inverse} $$
$$ = \beta\bm{y}^T\bm{y} - \beta^2\bm{y}^T ( \Phi_x^T\Sigma_w^{-1}\Phi_x ) \bm{y} \tag{proof} $$
ln of Eq.2.56
$$ p(\bm{y}|X,\bm{\theta})=\frac{1}{(2\pi)^{n/2}}\, \frac{1}{|\Phi_x^TA^{-1}\Phi_x+\sigma^2I_n|^{1/2}}\, \exp\!\left[ -\frac{1}{2} \bm{y}^T (\Phi_x^TA^{-1}\Phi_x+\sigma^2I_n)^{-1}\bm{y} \right] \tag{2.56.expansion} $$
$$ \ln p(\bm{y}|X,\bm{\theta}) = -\frac{n}{2}\ln 2\pi - \frac{1}{2}\ln|\Phi_x^{T}A^{-1}\Phi_x+\sigma^2I_n| -\frac{1}{2} \bm{y}^T (\Phi_x^TA^{-1}\Phi_x+\sigma^2I_n)^{-1}\bm{y} \tag{2.56.log} $$
r.h.s of Eq.2.56.log
$$ -2\cdot\mathrm{term2} = \ln|\sigma^2I_n| + \ln|A^{-1}| +\ln |A+\Phi_x\sigma^{-2}I_n\Phi_x^T| $$
$$ = -n\ln\beta - m\ln\alpha + \ln|\Sigma_w| $$
$$ -2\cdot\mathrm{term3} = \bm{y}^T \left(\sigma^{-2}I_n - \sigma^{-2}I_n\Phi_x^T(A+\Phi_x\sigma^{-2}I_n\Phi_x^T)^{-1}\Phi_x\sigma^{-2}I_n\right) \bm{y} $$
$$ = \bm{y}^T \left(\beta I_n - \beta^2 \Phi_x^T(A+\beta\Phi_x\Phi_x^T)^{-1}\Phi_x\right) \bm{y} $$
$$ = \beta\bm{y}^T\bm{y} - \beta^2\bm{y}^T\Phi_x^T\Sigma_w^{-1}\Phi_x\bm{y} \tag{same.to.proof} $$
The remaining terms match directly, so the two expressions are equivalent.
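A quick numerical confirmation of the equivalence (Eq. 2.64 versus the log of Eq. 2.56), using random features as a stand-in for any basis functions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, alpha, beta = 40, 7, 0.5, 25.0
Phi = rng.normal(size=(m, n))     # Phi_x (m x n), a placeholder design matrix
y = rng.normal(size=n)

A = alpha * np.eye(m)
Sigma_w = beta * Phi @ Phi.T + A                   # Eq. (2.60)
w_bar = beta * np.linalg.solve(Sigma_w, Phi @ y)   # Eq. (2.59)

# Eq. (2.64)
lml_264 = (-0.5 * beta * np.sum((Phi.T @ w_bar - y)**2) + 0.5 * n * np.log(beta)
           - 0.5 * n * np.log(2 * np.pi) - 0.5 * alpha * np.sum(w_bar**2)
           + 0.5 * m * np.log(alpha) - 0.5 * np.linalg.slogdet(Sigma_w)[1])

# ln of Eq. (2.56): ln N(y | 0, Phi^T A^-1 Phi + sigma^2 I_n)
C = Phi.T @ Phi / alpha + np.eye(n) / beta
lml_256 = (-0.5 * y @ np.linalg.solve(C, y)
           - 0.5 * np.linalg.slogdet(C)[1] - 0.5 * n * np.log(2 * np.pi))

print(lml_264, lml_256)   # the two values agree to numerical precision
```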
PREDICTIVE DISTRIBUTION:
$$ p(\bm{y}_*|X_*,X,\bm{y},\bm{\theta}) = \mathcal{N}(\bm{y}_*|\mathbb{E}(\bm{y}_*),\mathbb{V}(\bm{y}_*)) \tag{2.70} $$
$$ \mathbb{E}(\bm{y}_*) = \Phi_*^T \bar{\bm{w}} \tag{2.71} $$
$$ \mathbb{V}(\bm{y}_*) = \Phi_*^T\Sigma_w^{-1}\Phi_* + \sigma^2 I_{n_*} \tag{2.72} $$
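The BFM predictions, Eqs. (2.71)–(2.72), then follow in a couple of lines; the sketch below repeats the toy RBF-basis setup used for the posterior above:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, alpha, beta = 300, 25, 1.0, 100.0
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=beta**-0.5, size=n)
P = np.linspace(-3, 3, m)[:, None]

Phi = np.exp(-0.5 * (P - X.T)**2)                    # m x n design matrix
Sigma_w = beta * Phi @ Phi.T + alpha * np.eye(m)     # Eq. (2.60)
w_bar = beta * np.linalg.solve(Sigma_w, Phi @ y)     # Eq. (2.59)

Xs = np.linspace(-4, 4, 100)[:, None]
Phi_s = np.exp(-0.5 * (P - Xs.T)**2)                 # m x n_*

mean_y = Phi_s.T @ w_bar                                                     # Eq. (2.71)
cov_y = Phi_s.T @ np.linalg.solve(Sigma_w, Phi_s) + np.eye(len(Xs)) / beta   # Eq. (2.72)
print(mean_y[:3], np.diag(cov_y)[:3])
```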
Equivalent to SoR with a different prior on the latent variables (I do not fully understand this part):
$$ q(\bm{f}_x|\bm{f}_p,X,\bm{\theta}) = \mathcal{N}(\bm{f}_x|\Phi_x A^{-1}\bm{f}_p,\,0) \tag{2.73} $$
$$ q(\bm{f}_*|\bm{f}_p,X_*,\bm{\theta}) = \mathcal{N}(\bm{f}_*|\Phi_* A^{-1}\bm{f}_p,\,0) \tag{2.74} $$
$$ q(\bm{f}_p|\bm{\theta}) = \mathcal{N}(\bm{f}_p|0,A) \tag{2.75} $$
Similarly, SoR is a BFM with the following prior on the weights (I do not fully understand this part):
$$ p(\bm{w}|\bm{\theta}) = \mathcal{N}(\bm{w}|0,K_{pp}^{-1}) $$
2.5 Relations to other Methods (skipped for now)
2.5.1 ANN
2.5.2 Relations to Stacked Auto-encoders
3 Extending Sparse Gaussian Processes
The remaining sections, unifying ARD with the other features as a single sparse GP framework, are new contributions by this thesis. (p.31)
3.1 Automatic Relevance Determination
PRECISION matrix: (a precision parameter per weight)
$$ A = \mathrm{diag}[\bm{\alpha}],\quad \bm{\alpha}=[\alpha_1,\dots,\alpha_m]^T $$
New LOG MARGINAL LIKELIHOOD function (the old one is Eq. 2.64; replace $\alpha I_m$ with $A$):
$$ \ln p(\bm{y}|X,\bm{\theta}) = -\frac{\beta}{2}\|\Phi_x^T\bm{\bar{w}}-\bm{y}\|^2 + \frac{n}{2}\ln\beta -\frac{n}{2}\ln 2\pi - \frac{1}{2}\bm{\bar{w}}^TA\bm{\bar{w}} + \frac{1}{2}\ln|A| - \frac{1}{2}\ln|\Sigma_w| \tag{3.1} $$
$$ \ln p(\bm{y}|X,\bm{\theta}) = -\frac{\beta}{2}\|\Phi_x^T\bm{\bar{w}}-\bm{y}\|^2 + \frac{n}{2}\ln\beta -\frac{n}{2}\ln 2\pi - \frac{\alpha}{2}\|\bm{\bar{w}}\|^2 + \frac{m}{2}\ln\alpha - \frac{1}{2}\ln|\Sigma_w| \tag{2.64.repeat.for.compare} $$
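A sketch of the ARD objective, Eq. (3.1); it differs from Eq. (2.64) only in replacing the scalar $\alpha$ with a vector of per-weight precisions (random features again used as placeholders):

```python
import numpy as np

def ard_log_marginal_likelihood(Phi, y, alphas, beta):
    """Eq. (3.1): log marginal likelihood with A = diag(alphas)."""
    m, n = Phi.shape
    A = np.diag(alphas)
    Sigma_w = beta * Phi @ Phi.T + A
    w_bar = beta * np.linalg.solve(Sigma_w, Phi @ y)
    delta = Phi.T @ w_bar - y
    return (-0.5 * beta * delta @ delta + 0.5 * n * np.log(beta)
            - 0.5 * n * np.log(2 * np.pi)
            - 0.5 * w_bar @ A @ w_bar + 0.5 * np.sum(np.log(alphas))   # ln|A| = sum(ln alpha_j)
            - 0.5 * np.linalg.slogdet(Sigma_w)[1])

rng = np.random.default_rng(5)
Phi = rng.normal(size=(10, 80))
y = rng.normal(size=80)
print(ard_log_marginal_likelihood(Phi, y, alphas=np.full(10, 0.5), beta=20.0))
```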
(Discussion of the RVM; for more background see Tipping's papers or Bishop-pattern-2006.) A similar approach was proposed by Tipping (2001), coined as the relevance vector machine (RVM), where the set of basis function locations P was set equal to the locations of the training samples X and held fixed. Only the precision parameters and the values are optimised to determine the relevant set of vectors from the training set. This approach is still computationally expensive and the work discussed in Tipping (2001) proposed an iterative workaround to add and remove vectors incrementally.
3.2 Heteroscedastic Noise
$$ \mathbb{V}(\bm{y}_*) = \Phi_*^T\Sigma_w^{-1}\Phi_* + \sigma^2 I_{n_*} \tag{2.72.repeat} $$
The predictive variance in Equation (2.72) has two components:
The first term, $\mathbb{V}(\bm{f}_*)=\Phi_*^T\Sigma_w^{-1}\Phi_*$, is the model variance.
The model variance depends on the density of the training samples around $\bm{x}_*$; in principle, this component goes to zero as the size of the data set increases.
This term hence models our underlying uncertainty about the mean function. The model becomes very confident about the posterior mean when presented with a large number of samples at $\bm{x}_*$, at which point the predictive variance reduces to the intrinsic noise variance.
The second term, $\sigma^2 I_{n_*}$, is the noise uncertainty.
At this point, it is assumed to be white Gaussian noise with a fixed precision $\beta = \sigma^{-2}$.
The noise precision is now modelled as a function of the input, i.e. a linear combination of basis functions (the exponential form is chosen to ensure positivity):
$$ \beta(\bm{x}) = \exp\!\left[ \bm{\phi}^T(\bm{x})\bm{v} + b \right] \tag{p.33} $$
New LIKELIHOOD function: (the old one is Eq.2.47)
$$ p(\bm{y}|X,\bm{\theta},\bm{w}) = \mathcal{N}(\bm{y}|\Phi_x^T\bm{w},B^{-1}) \tag{3.3} $$
$$ p(\bm{y}|X,\bm{\theta},\bm{w}) = \mathcal{N}(\bm{y}|\bm{\Phi}_\bm{x}^T\bm{w},\sigma^2 I_n),\quad \sigma^2I_n = \beta^{-1}I_n \tag{2.47.repeat.for.compare} $$
where $B=\mathrm{diag}[\beta_1, \dots, \beta_n]$ is the precision matrix, with $\beta_i = \beta(\bm{x}_i)$ from Eq. (p.33).
New POSTERIOR distribution of $\bm{w}$ (the old ones are Eqs. 2.58–2.60; replace $\beta I_n$ with $B$):
$$ p(\bm{w}|\bm{y},X,\bm{\theta}) = \mathcal{N}(\bm{w}|\bm{\bar{w}},\Sigma_w^{-1}) \tag{3.4} $$
$$ \bm{\bar{w}} = \Sigma_w^{-1}\Phi_x B\bm{y} \tag{3.5} $$
$$ \Sigma_w = \Phi_x B\Phi_x^T + A \tag{3.6} $$
New LOG MARGINAL LIKELIHOOD (replace $\beta I_n$ with $B$):
$$ \ln p(\bm{y}|X,\bm{\theta}) = -\frac{1}{2}\bm{\delta}^TB\bm{\delta} + \frac{1}{2}\ln|B| -\frac{n}{2}\ln 2\pi - \frac{1}{2}\bm{\bar{w}}^TA\bm{\bar{w}} + \frac{1}{2}\ln|A| - \frac{1}{2}\ln|\Sigma_w| \tag{3.7} $$
$$ \ln p(\bm{y}|X,\bm{\theta}) = -\frac{\beta}{2}\|\Phi_x^T\bm{\bar{w}}-\bm{y}\|^2 + \frac{n}{2}\ln\beta -\frac{n}{2}\ln 2\pi - \frac{1}{2}\bm{\bar{w}}^TA\bm{\bar{w}} + \frac{1}{2}\ln|A| - \frac{1}{2}\ln|\Sigma_w| \tag{3.1.repeat.for.compare} $$
$$ \ln p(\bm{y}|X,\bm{\theta}) = -\frac{\beta}{2}\|\Phi_x^T\bm{\bar{w}}-\bm{y}\|^2 + \frac{n}{2}\ln\beta -\frac{n}{2}\ln 2\pi - \frac{\alpha}{2}\|\bm{\bar{w}}\|^2 + \frac{m}{2}\ln\alpha - \frac{1}{2}\ln|\Sigma_w| \tag{2.64.repeat.for.compare} $$
where $\bm{\delta} = \bm{y} - \Phi_x^T\bm{\bar{w}}$.
A prior is placed on the weights $\bm{v}$ in Eq. (p.33) to favour the simplest precision function $\beta_i$, namely that $\bm{v}$ is normally distributed with a mean of 0 and a diagonal precision matrix $T = \mathrm{diag}[\tau_1, \dots, \tau_m]$:
$$ p(\bm{v}|\bm{\tau}) = \mathcal{N}(\bm{v}|0,T) \tag{p.35} $$
Since
$$ \ln p(\bm{y}|X,\bm{\theta}) = \ln \left( p(\bm{y}|X,\bm{\theta},\bm{v})\, p(\bm{v}|\bm{\tau}) \right) $$
the final new log marginal likelihood function is:
$$ \begin{aligned} \mathcal{L}(\mathcal{D},\bm{\theta}) = \ln p(\bm{y}|X,\bm{\theta}) = {}& -\frac{1}{2}\bm{\delta}^TB\bm{\delta} + \frac{1}{2}\ln|B| -\frac{n}{2}\ln 2\pi \\ & - \frac{1}{2}\bm{\bar{w}}^TA\bm{\bar{w}} + \frac{1}{2}\ln|A| - \frac{1}{2}\ln|\Sigma_w| \\ & -\frac{1}{2}\bm{v}^TT\bm{v} + \frac{1}{2}\ln|T| - \frac{m}{2}\ln 2\pi \end{aligned} \tag{3.9} $$
where τ \bm{\tau} τ acts as an automatic relevance determination cost for the noise process , allowing the objective to dynamically select different sets of relevant basis functions for both the posterior mean and variance estimation.
All the hyperparameters are
$$ \bm{\theta} = \left[ \bm{\varphi}, \bm{\alpha}, \bm{v}, b \right] $$
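A numpy sketch of the heteroscedastic objective, Eq. (3.9). The basis functions, their centres and the toy data are my own placeholder assumptions; the structure (per-point precision $\beta(\bm{x}_i)$ from Eq. (p.33), posterior from Eqs. (3.5)–(3.6)) follows the notes above:

```python
import numpy as np

def hetero_objective(Phi, y, alphas, taus, v, b):
    """Eq. (3.9): heteroscedastic log marginal likelihood plus the prior on v."""
    m, n = Phi.shape
    A, T = np.diag(alphas), np.diag(taus)
    beta_x = np.exp(Phi.T @ v + b)                  # Eq. (p.33), per-point precision
    B = np.diag(beta_x)
    Sigma_w = Phi @ B @ Phi.T + A                   # Eq. (3.6)
    w_bar = np.linalg.solve(Sigma_w, Phi @ B @ y)   # Eq. (3.5)
    delta = y - Phi.T @ w_bar
    return (-0.5 * delta @ B @ delta + 0.5 * np.sum(np.log(beta_x)) - 0.5 * n * np.log(2 * np.pi)
            - 0.5 * w_bar @ A @ w_bar + 0.5 * np.sum(np.log(alphas)) - 0.5 * np.linalg.slogdet(Sigma_w)[1]
            - 0.5 * v @ T @ v + 0.5 * np.sum(np.log(taus)) - 0.5 * m * np.log(2 * np.pi))

rng = np.random.default_rng(6)
m, n = 12, 200
X = rng.uniform(-3, 3, size=n)
P = np.linspace(-3, 3, m)
Phi = np.exp(-0.5 * (P[:, None] - X[None, :])**2)                      # m x n
y = np.sin(X) + rng.normal(scale=0.05 + 0.2 * np.abs(X) / 3, size=n)   # input-dependent noise
print(hetero_objective(Phi, y, alphas=np.ones(m), taus=np.ones(m), v=np.zeros(m), b=np.log(25.0)))
```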
New predictive distribution for a single test point $\bm{x}_*$,
$$ p(y_*|\bm{x}_*,\mathcal{D},\bm{\theta}) = \mathcal{N}(y_*|\mathbb{E}(y_*),\mathbb{V}(y_*)) \tag{3.10} $$
$$ \mathbb{E}(y_*) = \bm{\phi}(\bm{x}_*)^T \bm{\bar{w}}, \quad \bm{\bar{w}}\text{ is obtained through training} \tag{3.11} $$
$$ \mathbb{V}(y_*) = \bm{\phi}(\bm{x}_*)^T \Sigma_w^{-1} \bm{\phi}(\bm{x}_*) + \beta^{-1}(\bm{x}_*), \quad \beta(\cdot)\text{ is given by Eq. (p.33)} \tag{3.12} $$
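Prediction then only needs $\bar{\bm{w}}$, $\Sigma_w$, $\bm{v}$ and $b$ from training; a minimal sketch of Eqs. (3.11)–(3.12), with hypothetical trained values plugged in just to show the computation:

```python
import numpy as np

# Hypothetical quantities that would come out of training (only shapes matter here).
m = 12
P = np.linspace(-3, 3, m)            # basis-function centres
w_bar = np.zeros(m)                  # posterior mean of the weights, Eq. (3.5)
Sigma_w = np.eye(m)                  # posterior precision of the weights, Eq. (3.6)
v, b = np.zeros(m), np.log(25.0)     # noise-model weights and bias, Eq. (p.33)

def predict(x_star):
    phi = np.exp(-0.5 * (P - x_star)**2)                          # phi(x_*), assumed RBF basis
    mean = phi @ w_bar                                            # Eq. (3.11)
    beta_star = np.exp(phi @ v + b)                               # Eq. (p.33)
    var = phi @ np.linalg.solve(Sigma_w, phi) + 1.0 / beta_star   # Eq. (3.12)
    return mean, var

print(predict(0.5))
```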
3.3 Strictly Positive Outputs (not read yet)
3.4 Cost-sensitive Learning (not read yet)
3.5 Prior Mean Function
$$ \begin{aligned} \mathcal{L}(\mathcal{D},\hat{\bm{\theta}}) = {}& -\frac{1}{2} \hat{\bm{\delta}}^T\Omega B\hat{\bm{\delta}} + \frac{1}{2} \mathrm{tr}(\Omega\odot\ln B) - \frac{1}{2}\mathrm{tr}(\Omega\odot\ln 2\pi) \\ & -\frac{1}{2} (\hat{\bar{\bm{w}}}-\hat{\bm{a}})^T\hat{A}(\hat{\bar{\bm{w}}}-\hat{\bm{a}}) + \frac{1}{2}\ln|\hat{A}| - \frac{1}{2}\ln|\hat{\Sigma}_w| \\ & -\frac{1}{2}\bm{v}^TT\bm{v} + \frac{1}{2}\ln|T| - \frac{m}{2}\ln 2\pi \end{aligned} \tag{3.36} $$
4 Optimisation
This chapter mainly gives the gradient formulae needed for training. The difficulty lies in deriving the matrix derivatives and getting their form right.
4.5 Summary
Presents analytical methods for computing the gradients of the sigmoidal and RBF basis functions (not yet fully understood).
4.1 Gradient-based Optimisation
4.2 The General Case
Using the properties of matrix derivatives (Petersen and Pedersen, 2012)
(Study the matrix-derivative identities in this reference.)
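The training loop is then a standard gradient-based optimisation of the (negative) log marginal likelihood. The sketch below (my own, not GPz) optimises Eq. (2.64) over $(\ln\alpha, \ln\beta)$ with scipy, letting it fall back on finite-difference gradients; the analytic gradients derived in this chapter would replace those in a real implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
m, n = 10, 150
X = rng.uniform(-3, 3, size=n)
P = np.linspace(-3, 3, m)
Phi = np.exp(-0.5 * (P[:, None] - X[None, :])**2)   # m x n RBF design matrix (assumed)
y = np.sin(X) + 0.1 * rng.normal(size=n)

def neg_log_ml(params):
    """Negative of Eq. (2.64) as a function of (ln alpha, ln beta)."""
    alpha, beta = np.exp(params)
    Sigma_w = beta * Phi @ Phi.T + alpha * np.eye(m)
    w_bar = beta * np.linalg.solve(Sigma_w, Phi @ y)
    delta = Phi.T @ w_bar - y
    lml = (-0.5 * beta * delta @ delta + 0.5 * n * np.log(beta) - 0.5 * n * np.log(2 * np.pi)
           - 0.5 * alpha * w_bar @ w_bar + 0.5 * m * np.log(alpha)
           - 0.5 * np.linalg.slogdet(Sigma_w)[1])
    return -lml

# Without an analytic jac, scipy uses numerical gradients: slower, but useful
# for sanity-checking a hand-derived gradient implementation.
res = minimize(neg_log_ml, x0=np.zeros(2), method="L-BFGS-B")
print(np.exp(res.x), res.fun)
```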
4.3 The Sigmoidal Function
4.4 Radial Basis Functions
TIME COMPLEXITY:

| Model | Time complexity |
| --- | --- |
| ANN ($l$ layers) | $O(nmd + (l-1)nm^2)$ |
| FITC | $O(nmd + nm^2)$ |
| GPVL | $O(nmd + nm^2)$ |
| GPVD | $O(nmd + nm^2)$ |
| GPVC | $O(nmd^2 + nm^2)$ |
| Full GP | $O(n^3)$ |

where $d$ is the dimension of the input $\bm{x}$, $n$ is the number of training points, and $m$ is the number of basis functions.
5 Noisy or Missing Variables
This chapter is important: it explains how to train and predict when the inputs $x$ are noisy.
The derivations are long and intricate; since I have no immediate use for them, I am skipping the details for now.
Many derivations in this chapter omit the intermediate steps.
The $\Psi$ that appears in the code comes from this chapter, i.e. from the derivation that accounts for uncertainty in the input $\bm{x}$ itself.
(p.63) We will first address how to predict with known input noise in Section 5.1.1, since this can be addressed independently of the training data.
(p.64) In Section 5.2.1, we consider the case in which training data inputs are uncertain by approximating the expected value of the marginal likelihood.
Study prediction first, since it is simpler and does not depend on the training data. Questions:
Does the input distribution need to be known beforehand?
Why is prediction treated separately from training?
(p.70) It requires us to have a probabilistic model for each possible set of missing variables, which grows exponentially large as the dimensionality of the input increases.
6 Photometric Redshift Estimation
(p.126) The former, the model uncertainty, helps in identifying regions where more data is needed, whereas the latter, the intrinsic noise uncertainty, helps in identifying regions where better or more precise features are needed.
(p.133) Finally, we introduced the capability of handling missing photometry, not present in any other method apart from random forests. This is particularly useful when using a model trained on one survey to predict the photometric redshift on another survey that does not share the same photometry but overlaps.
[Finished with this thesis for now; pausing here and moving on to studying the GPz code.]