The cost function for logistic regression is
\[J\left( \theta \right) = -\left[ \frac{1}{m}\sum\limits_{i=1}^{m} y^{(i)}\log h_\theta\left( x^{(i)} \right) + \left( 1 - y^{(i)} \right)\log\left( 1 - h_\theta\left( x^{(i)} \right) \right) \right]\]
After adding the regularization term:
\[J\left( \theta \right) = -\left[ \frac{1}{m}\sum\limits_{i=1}^{m} y^{(i)}\log h_\theta\left( x^{(i)} \right) + \left( 1 - y^{(i)} \right)\log\left( 1 - h_\theta\left( x^{(i)} \right) \right) \right] + \frac{\lambda}{2m}\sum\limits_{j=1}^{n} \theta_j^2\]
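The regularized cost above can be sketched in NumPy as follows. This is a minimal illustration, not code from the course; the function names `sigmoid` and `cost` are my own, and it assumes `X` already contains a leading column of ones so that `theta[0]` is the unpenalized intercept:

```python
import numpy as np

def sigmoid(z):
    # h_theta(x) = 1 / (1 + e^{-theta^T x}) applied elementwise
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y, lam):
    # Regularized logistic-regression cost J(theta);
    # the bias theta[0] is excluded from the penalty, matching the sum over j = 1..n.
    m = len(y)
    h = sigmoid(X @ theta)
    unreg = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
    reg = lam / (2 * m) * np.sum(theta[1:] ** 2)
    return unreg + reg
```

Note that the regularization sum starts at `theta[1]`, mirroring the \(j = 1, \dots, n\) index range in the formula.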
The gradient descent algorithm then becomes:

Repeat {
\[\theta_0 := \theta_0 - \alpha\left[ \frac{1}{m}\sum\limits_{i=1}^{m} \left( h_\theta\left( x^{(i)} \right) - y^{(i)} \right)x_0^{(i)} \right]\]
\[\theta_j := \theta_j - \alpha\left[ \frac{1}{m}\sum\limits_{i=1}^{m} \left( h_\theta\left( x^{(i)} \right) - y^{(i)} \right)x_j^{(i)} + \frac{\lambda}{m}\theta_j \right] \quad \left( j = 1, 2, \ldots, n \right)\]
}
(Note: although the update rule looks identical to that of linear regression, the hypothesis differs — in logistic regression \(h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}}\), not the linear \(\theta^T x\).)
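The "Repeat { … }" loop above can be sketched as a single simultaneous update in NumPy. This is an illustrative sketch under my own naming (`gradient_step`), assuming `X` has a leading column of ones so that column 0 corresponds to \(\theta_0\):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_step(theta, X, y, alpha, lam):
    # One simultaneous gradient-descent update for all theta_j.
    m = len(y)
    error = sigmoid(X @ theta) - y        # h_theta(x^(i)) - y^(i) for every example
    grad = (X.T @ error) / m              # (1/m) * sum of error * x_j^(i)
    grad[1:] += (lam / m) * theta[1:]     # add (lambda/m) * theta_j only for j >= 1
    return theta - alpha * grad
```

Computing `grad` for the whole vector first and then adding the penalty only to `grad[1:]` reproduces the two update rules above: \(\theta_0\) is updated without the \(\frac{\lambda}{m}\theta_0\) term, while every other \(\theta_j\) includes it.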