First, the code (sklearn's example code):
- from sklearn.neural_network import MLPClassifier
- X = [[0., 0.], [1., 1.]]
- y = [0, 1]
- clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
-                     hidden_layer_sizes=(5, 2), random_state=1)
- clf.fit(X, y)
- print('predict', clf.predict([[2., 2.], [-1., -2.]]))
- print('predict_proba', clf.predict_proba([[2., 2.], [1., 2.]]))
- # clf.coefs_ contains the weight matrices that constitute the model parameters:
- print('coef shapes:', [coef.shape for coef in clf.coefs_])
- print(clf)
- c = 0
- for i in clf.coefs_:
-     c += 1
-     print(c, len(i), i)
Notes:
MLPClassifier: MLP is short for Multi-layer Perceptron.
fit(X, y): the feature inputs and target outputs work the same as for any other sklearn estimator.
solver='lbfgs': the MLP's optimization method. L-BFGS performs well on small datasets; Adam is fairly robust; SGD can give the best results (in classification accuracy and number of iterations) when its parameters are well tuned.
SGD stands for stochastic gradient descent. Open question: how does SGD relate to backpropagation? (Backpropagation computes the gradient of the loss with respect to the weights; a solver such as SGD then uses that gradient to update them.)
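To get a feel for the solver trade-offs, here is a minimal comparison sketch; the synthetic dataset and the max_iter value are my own illustrative assumptions, not part of the sklearn example:
- # Compare the three solvers on a small synthetic dataset (illustrative only).
- from sklearn.datasets import make_classification
- from sklearn.neural_network import MLPClassifier
- X2, y2 = make_classification(n_samples=200, n_features=10, random_state=1)
- for solver in ('lbfgs', 'adam', 'sgd'):
-     clf2 = MLPClassifier(solver=solver, hidden_layer_sizes=(5, 2),
-                          max_iter=2000, random_state=1)
-     clf2.fit(X2, y2)
-     print(solver, clf2.score(X2, y2))  # training accuracy, for a rough comparison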
alpha: the L2 penalty parameter. MLP supports regularization (L2 by default); the exact value needs tuning.
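Since alpha needs tuning, a cross-validated grid search is the usual approach; the candidate values and dataset below are assumptions for illustration:
- # Tune the L2 penalty alpha by grid search (illustrative grid of candidates).
- from sklearn.datasets import make_classification
- from sklearn.model_selection import GridSearchCV
- from sklearn.neural_network import MLPClassifier
- X3, y3 = make_classification(n_samples=200, n_features=10, random_state=1)
- grid = GridSearchCV(
-     MLPClassifier(solver='lbfgs', hidden_layer_sizes=(5, 2),
-                   max_iter=2000, random_state=1),
-     param_grid={'alpha': [1e-5, 1e-3, 1e-1]}, cv=3)
- grid.fit(X3, y3)
- print(grid.best_params_)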
hidden_layer_sizes=(5, 2): two hidden layers, the first with 5 neurons and the second with 2.
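To confirm how this tuple maps onto the weight matrices, the coefs_ shapes can be inspected directly; with 2 input features and a binary target, I would expect [(2, 5), (5, 2), (2, 1)]:
- # hidden_layer_sizes=(5, 2): weights go input(2) -> 5 -> 2 -> output(1).
- from sklearn.neural_network import MLPClassifier
- clf4 = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(5, 2), random_state=1)
- clf4.fit([[0., 0.], [1., 1.]], [0, 1])
- print([coef.shape for coef in clf4.coefs_])  # expect [(2, 5), (5, 2), (2, 1)]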
Time complexity of training (very high...):
Suppose there are n training samples, m features, k hidden layers each containing h neurons (for simplicity), and o output neurons. The time complexity of backpropagation is O(n · m · h^k · o · i), where i is the number of iterations. Since backpropagation has a high time complexity, it is advisable to start with a smaller number of hidden neurons and few hidden layers for training.
Settings involved: the number of hidden layers k, the number of neurons per layer h, and the number of iterations i.
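Plugging in some illustrative numbers (my own example values, not from the docs) shows how fast the cost grows:
- # Rough operation count for O(n * m * h^k * o * i); all values are illustrative.
- n, m, h, k, o, i = 1000, 20, 10, 2, 1, 200
- ops = n * m * h**k * o * i
- print(ops)  # 400000000 -- one extra hidden layer would multiply this by h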