Essentially, Adam is an optimization algorithm that combines the momentum gradient descent method from day8.2 with the RMSprop algorithm from day8.3.
First initialize SdW = 0, Sdb = 0, VdW = 0, Vdb = 0
On iteration t:
compute dW, db using the current mini-batch
VdW = β1 · VdW + (1 − β1) · dW,   Vdb = β1 · Vdb + (1 − β1) · db   (the momentum step first)
SdW = β2 · SdW + (1 − β2) · dW²,   Sdb = β2 · Sdb + (1 − β2) · db²   (then the RMSprop step)
Bias correction: VdW_corrected = VdW / (1 − β1^t),   Vdb_corrected = Vdb / (1 − β1^t)
SdW_corrected = SdW / (1 − β2^t),   Sdb_corrected = Sdb / (1 − β2^t)
W = W − α · VdW_corrected / (√(SdW_corrected) + ε),   b = b − α · Vdb_corrected / (√(Sdb_corrected) + ε)
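Putting the steps above together, here is a minimal NumPy sketch of one Adam update (the function name adam_update and the dictionary-based state are illustrative assumptions, not part of the course notes):

```python
import numpy as np

def adam_update(W, b, dW, db, state, t,
                alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply one Adam step to parameters W, b given gradients dW, db.

    `state` holds the running averages VdW, Vdb, SdW, Sdb (all start at 0);
    `t` is the 1-based iteration number used for bias correction.
    """
    # Momentum step: exponentially weighted average of the gradients (first moment)
    state["VdW"] = beta1 * state["VdW"] + (1 - beta1) * dW
    state["Vdb"] = beta1 * state["Vdb"] + (1 - beta1) * db

    # RMSprop step: exponentially weighted average of the squared gradients (second moment)
    state["SdW"] = beta2 * state["SdW"] + (1 - beta2) * dW**2
    state["Sdb"] = beta2 * state["Sdb"] + (1 - beta2) * db**2

    # Bias correction compensates for the zero initialization of the averages
    VdW_c = state["VdW"] / (1 - beta1**t)
    Vdb_c = state["Vdb"] / (1 - beta1**t)
    SdW_c = state["SdW"] / (1 - beta2**t)
    Sdb_c = state["Sdb"] / (1 - beta2**t)

    # Combined update: momentum term in the numerator, RMSprop term in the denominator
    W = W - alpha * VdW_c / (np.sqrt(SdW_c) + eps)
    b = b - alpha * Vdb_c / (np.sqrt(Sdb_c) + eps)
    return W, b, state

# Initialization, as above: SdW = Sdb = VdW = Vdb = 0 (toy parameter shapes for illustration)
W, b = np.random.randn(3, 2), np.zeros((3, 1))
state = {"VdW": np.zeros_like(W), "Vdb": np.zeros_like(b),
         "SdW": np.zeros_like(W), "Sdb": np.zeros_like(b)}
```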
Hyperparameter settings:
α: learning rate; this one needs to be tuned over a range of values
β1: 0.9 (for the moving average of dW), the first moment
β2: 0.999 (for the moving average of dW²), the second moment
ε: 10⁻⁸
Note that apart from α, which does need tuning, the three parameters β1, β2, and ε of Adam generally do not need to be set by hand; according to Andrew Ng, practitioners rarely change these three values from the defaults given in the original Adam paper.
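As a concrete illustration of that point (a minimal sketch, assuming PyTorch is installed; the one-layer model is just a placeholder), the framework's built-in Adam already uses β1 = 0.9, β2 = 0.999, ε = 1e-8 as defaults, so in practice only the learning rate is passed explicitly:

```python
import torch

model = torch.nn.Linear(10, 1)  # placeholder model
# betas and eps below are PyTorch's defaults and match the values above;
# typically only lr (i.e. α) is tuned.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)
```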
Adam stands for "adaptive moment estimation".