In deep learning, we usually estimate the discrepancy between the model distribution and the true distribution by sampling from the model and computing a loss against real samples. That loss can be defined very simply, for example as an L2 norm. But for the discrepancy between two fully specified distributions with known parameters, we have to compute it by derivation.
Below we derive the KL divergence between two multivariate Gaussian distributions with known means and covariance matrices. Admittedly, the Wasserstein distance may be a better way to measure the difference between two distributions, since it is better suited to matching one distribution to another, but it is harder to work with, so I will write it up another time.
First, define two $n$-dimensional Gaussian distributions as follows:
$$\begin{aligned} &p(x) = \frac{1}{(2\pi)^{0.5n}|\Sigma|^{0.5}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)\\ &q(x) = \frac{1}{(2\pi)^{0.5n}|L|^{0.5}}\exp\left(-\frac{1}{2}(x-m)^T L^{-1}(x-m)\right) \end{aligned}$$
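For concreteness, here is a minimal sketch (not part of the original derivation) that instantiates two such Gaussians with hypothetical example parameters via `scipy.stats.multivariate_normal`; the names `mu`, `Sigma`, `m`, `L` mirror the symbols $\mu$, $\Sigma$, $m$, $L$ above.

```python
# Illustrative setup only: hypothetical parameters for p = N(mu, Sigma), q = N(m, L).
import numpy as np
from scipy.stats import multivariate_normal

n = 2
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
m = np.array([1.0, -1.0])
L = np.array([[1.5, 0.0],
              [0.0, 0.5]])

p = multivariate_normal(mean=mu, cov=Sigma)  # p(x)
q = multivariate_normal(mean=m, cov=L)       # q(x)
print(p.pdf(mu), q.pdf(mu))                  # evaluate both densities at a point
```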
What we need to compute is:
$$\text{KL}(p||q) = \text{E}_p\left(\log\frac{p(x)}{q(x)}\right)$$
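Before deriving the closed form, note that this expectation can already be approximated by Monte Carlo: sample from $p$ and average $\log p(x) - \log q(x)$. The sketch below does exactly that, reusing the hypothetical parameters from above; the sample size and random seed are arbitrary choices.

```python
# Monte Carlo estimate of KL(p||q) = E_p[log p(x) - log q(x)] (hypothetical parameters).
import numpy as np
from scipy.stats import multivariate_normal

p = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 2.0]])
q = multivariate_normal(mean=[1.0, -1.0], cov=[[1.5, 0.0], [0.0, 0.5]])

x = p.rvs(size=200_000, random_state=0)       # draw samples from p
kl_mc = np.mean(p.logpdf(x) - q.logpdf(x))    # average log-ratio under p
print(f"Monte Carlo KL(p||q) ~ {kl_mc:.4f}")
```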
For clarity, the derivation proceeds step by step. First:
$$\begin{aligned} \frac{p(x)}{q(x)} &= \frac{\frac{1}{(2\pi)^{0.5n}|\Sigma|^{0.5}}\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)}{\frac{1}{(2\pi)^{0.5n}|L|^{0.5}}\exp\left(-\frac{1}{2}(x-m)^T L^{-1}(x-m)\right)}\\ &=\left(\frac{|L|}{|\Sigma|}\right)^{0.5}\exp\left(\frac{1}{2}(x-m)^T L^{-1}(x-m) -\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right) \end{aligned}$$
Then take the logarithm:
$$\begin{aligned} \log\frac{p(x)}{q(x)} &= \frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \frac{1}{2}(x-m)^T L^{-1}(x-m) - \frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu) \end{aligned}$$
Then take the expectation under $p$:
$$\begin{aligned} \text{E}_p\log\frac{p(x)}{q(x)} &=\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \text{E}_p\left[\frac{1}{2}(x-m)^T L^{-1}(x-m) - \frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right]\\ &=\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \text{E}_p\text{Tr}\left[\frac{1}{2}(x-m)^T L^{-1}(x-m) - \frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right] \end{aligned}$$
The second step holds because the quantity is a scalar, so it equals its own trace. Then, by the cyclic invariance of the trace:
$$\begin{align} &\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \text{E}_p\text{Tr}\left[\frac{1}{2}L^{-1}(x-m)(x-m)^T - \frac{1}{2}\Sigma^{-1}(x-\mu)(x-\mu)^T\right]\\ = &\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \frac{1}{2}\text{E}_p\text{Tr}\left(L^{-1}(x-m)(x-m)^T\right) - \frac{1}{2}\text{E}_p\text{Tr}\left(\Sigma^{-1}(x-\mu)(x-\mu)^T\right) \\ = &\frac{1}{2}\log\frac{|L|}{|\Sigma|}+ \frac{1}{2}\text{E}_p\text{Tr}\left(L^{-1}(x-m)(x-m)^T\right) - \frac{n}{2} \end{align}$$
The last term follows because the expectation and the trace can be exchanged (both are linear), and the expectation of $(x-\mu)(x-\mu)^T$ under $p$ is exactly the covariance matrix $\Sigma$, so $\Sigma^{-1}\Sigma$ is the $n$-dimensional identity matrix, whose trace is $n$.
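A quick numerical sanity check of this step (with hypothetical parameters): the average of the quadratic form $(x-\mu)^T\Sigma^{-1}(x-\mu)$ over samples from $p$ should be close to $n$.

```python
# Check E_p[(x-mu)^T Sigma^{-1} (x-mu)] = Tr(Sigma^{-1} Sigma) = n (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(0)
n = 2
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

x = rng.multivariate_normal(mu, Sigma, size=200_000)
d = x - mu
quad = np.einsum("ij,jk,ik->i", d, np.linalg.inv(Sigma), d)  # per-sample quadratic form
print(quad.mean(), "should be close to", n)
```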
Next, work out the middle term on its own:
$$\begin{align} &\frac{1}{2}\text{E}_p\text{Tr}\left(L^{-1}(x-m)(x-m)^T\right)\\ =&\frac{1}{2}\text{Tr}\left(L^{-1}\text{E}_p\left(xx^T-xm^T-mx^T+mm^T\right)\right) \\ =&\frac{1}{2}\text{Tr}\left(L^{-1}\left(\Sigma +\mu\mu^T-2\mu m^T+mm^T\right)\right) \end{align}$$
Here the two cross terms have expectations $-\mu m^T$ and $-m\mu^T$; inside the trace they contribute equally because $L^{-1}$ is symmetric, which gives the combined $-2\mu m^T$ term. The identity $\text{E}_p(xx^T) = \Sigma + \mu\mu^T$ is derived as follows:
$$\begin{aligned} \Sigma &= \text{E}_p\left[(x-\mu)(x-\mu)^T\right]\\ &= \text{E}_p\left(xx^T-x\mu^T-\mu x^T+\mu\mu^T\right)\\ &= \text{E}_p\left(xx^T\right)-2\text{E}_p\left(x\mu^T\right)+\mu\mu^T \\ &= \text{E}_p\left(xx^T\right)-\mu\mu^T \end{aligned}$$
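This identity is also easy to confirm empirically (hypothetical parameters, chosen with a nonzero mean so the $\mu\mu^T$ term matters): the empirical second moment of samples from $p$ should match $\Sigma + \mu\mu^T$.

```python
# Check E_p[x x^T] = Sigma + mu mu^T empirically (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

x = rng.multivariate_normal(mu, Sigma, size=500_000)
second_moment = x.T @ x / len(x)              # empirical E[x x^T]
print(np.round(second_moment, 2))
print(np.round(Sigma + np.outer(mu, mu), 2))  # Sigma + mu mu^T
```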
Now continue from equation $(6)$:
$$\begin{aligned} &\frac{1}{2}\text{Tr}\left(L^{-1}\left(\Sigma +\mu\mu^T-2\mu m^T+mm^T\right)\right) \\ = &\frac{1}{2}\text{Tr}\left(L^{-1}\Sigma +L^{-1}(\mu-m)(\mu-m)^T\right) \\ = &\frac{1}{2}\text{Tr}\left(L^{-1}\Sigma\right)+ \frac{1}{2}(\mu-m)^T L^{-1}(\mu-m) \end{aligned}$$
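The step from the first to the second line can be checked deterministically (hypothetical parameters; note that $-2\mu m^T$ matches $-\mu m^T - m\mu^T$ only inside the trace, precisely because $L^{-1}$ is symmetric):

```python
# Deterministic check (hypothetical parameters):
# Tr(L^{-1}(Sigma + mu mu^T - 2 mu m^T + m m^T)) = Tr(L^{-1} Sigma) + (mu-m)^T L^{-1} (mu-m).
import numpy as np

mu = np.array([1.0, -2.0])
m = np.array([0.5, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
L = np.array([[1.5, 0.2], [0.2, 0.5]])
Linv = np.linalg.inv(L)

lhs = np.trace(Linv @ (Sigma + np.outer(mu, mu) - 2 * np.outer(mu, m) + np.outer(m, m)))
rhs = np.trace(Linv @ Sigma) + (mu - m) @ Linv @ (mu - m)
print(lhs, rhs)  # equal up to floating-point error
```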
Finally, substituting back into equation $(3)$ gives the final result:
$$\begin{aligned} \text{KL}(p||q) = \text{E}_p\log\frac{p(x)}{q(x)} =&\frac{1}{2}\left\{\log\frac{|L|}{|\Sigma|}+ \text{Tr}\left(L^{-1}\Sigma\right)+ (\mu-m)^T L^{-1}(\mu-m) - n\right\} \end{aligned}$$
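Putting everything together, here is a minimal sketch of the closed-form expression (assuming NumPy arrays and positive-definite covariances; the function name `kl_mvn` and the test parameters are illustrative, not from the original post), checked against a Monte Carlo estimate of $\text{E}_p[\log p(x) - \log q(x)]$:

```python
# Closed-form KL between two multivariate Gaussians, checked against Monte Carlo.
import numpy as np
from scipy.stats import multivariate_normal

def kl_mvn(mu, Sigma, m, L):
    """KL( N(mu, Sigma) || N(m, L) ), assuming Sigma and L are positive definite."""
    n = mu.shape[0]
    Linv = np.linalg.inv(L)
    diff = mu - m
    return 0.5 * (np.log(np.linalg.det(L) / np.linalg.det(Sigma))
                  + np.trace(Linv @ Sigma)
                  + diff @ Linv @ diff
                  - n)

mu = np.array([0.0, 0.0]); Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
m = np.array([1.0, -1.0]); L = np.array([[1.5, 0.0], [0.0, 0.5]])

p = multivariate_normal(mean=mu, cov=Sigma)
q = multivariate_normal(mean=m, cov=L)
x = p.rvs(size=200_000, random_state=0)

print("closed form :", kl_mvn(mu, Sigma, m, L))
print("Monte Carlo :", np.mean(p.logpdf(x) - q.logpdf(x)))
```

The two printed values should agree to about two decimal places at this sample size.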