• MLP handwritten-digit recognition on MNIST with the Keras framework: a walkthrough?


    MLP handwritten-digit recognition on MNIST with the Keras framework

    Code:

      # coding: utf-8

      # In[1]:

      import numpy as np
      import pandas as pd

      from keras.utils import np_utils
      np.random.seed(10)


      # In[2]:

      from keras.datasets import mnist


      # In[3]:

      # Load MNIST: 60,000 training images and 10,000 test images, each 28x28 grayscale
      (x_train_image, y_train_label), (x_test_image, y_test_label) = mnist.load_data()


      # In[4]:

      import matplotlib.pyplot as plt

      # Display a single digit image
      def plot_image(image):
          fig = plt.gcf()
          fig.set_size_inches(1, 1)
          plt.imshow(image, cmap='binary')
          plt.show()


      # In[5]:

      plot_image(x_train_image[0])
      y_train_label[0]


      # In[6]:

      # Display up to 50 images starting at index idx, with their labels and (optional) predictions
      def plot_image_labels_prediction(images, labels, prediction, idx, num=10):
          fig = plt.gcf()
          fig.set_size_inches(12, 24)
          if num > 50:
              num = 50
          for i in range(0, num):
              ax = plt.subplot(10, 5, 1 + i)
              ax.imshow(images[idx], cmap='binary')
              title = "label=" + str(labels[idx])
              if len(prediction) > 0:
                  title += ",predict=" + str(prediction[idx])
              ax.set_title(title, fontsize=10)
              ax.set_xticks([])
              ax.set_yticks([])
              idx += 1
          plt.show()


      # In[7]:

      plot_image_labels_prediction(x_train_image, y_train_label, [], 0, 10)


      # In[8]:

      # Flatten each 28x28 image into a 784-dimensional vector
      x_train = x_train_image.reshape(60000, 784).astype('float32')
      x_test = x_test_image.reshape(10000, 784).astype('float32')


      # In[9]:

      # Scale pixel values from 0-255 down to 0-1
      x_train_normalize = x_train / 255
      x_test_normalize = x_test / 255


      # In[10]:

      # One-hot encode the labels, e.g. 5 -> [0,0,0,0,0,1,0,0,0,0]
      y_train_oneHot = np_utils.to_categorical(y_train_label)
      y_test_oneHot = np_utils.to_categorical(y_test_label)


      # In[11]:

      from keras.models import Sequential
      from keras.layers import Dense
      from keras.layers import Dropout


      # In[12]:

      model = Sequential()


      # In[13]:

      # Hidden layer: 784 inputs -> 1000 units, weights drawn from a normal distribution, ReLU activation
      model.add(Dense(units=1000,
                      input_dim=784,
                      kernel_initializer='normal',
                      activation='relu'))


      # In[14]:

      # Dropout to reduce overfitting
      model.add(Dropout(0.5))


      # In[15]:

      # Output layer: 10 units, one per digit; softmax turns the outputs into a probability
      # distribution, matching the categorical_crossentropy loss used below
      model.add(Dense(units=10,
                      kernel_initializer='normal',
                      activation='softmax'))


      # In[16]:

      print(model.summary())


      # In[17]:

      model.compile(loss='categorical_crossentropy',
                    optimizer='adam', metrics=['accuracy'])


      # In[18]:

      # Train: 20% of the training set is held out for validation, 10 epochs, 200 images per batch
      train_history = model.fit(x=x_train_normalize,
                                y=y_train_oneHot,
                                validation_split=0.2,
                                epochs=10,
                                batch_size=200,
                                verbose=2)


      # In[19]:

      # Plot a training metric and its validation counterpart against the epoch number
      def show_train_history(train_history, train, validation):
          plt.plot(train_history.history[train])
          plt.plot(train_history.history[validation])
          plt.title('Train_History')
          plt.ylabel(train)
          plt.xlabel('Epoch')
          plt.legend(['train', 'validation'], loc='upper left')
          plt.show()


      # In[20]:

      show_train_history(train_history, 'acc', 'val_acc')


      # In[21]:

      # Evaluate on the 10,000 test images
      score = model.evaluate(x_test_normalize, y_test_oneHot)
      print()
      print('accuracy=', score[1])


      # In[22]:

      prediction = model.predict_classes(x_test_normalize)


      # In[23]:

      prediction


      # In[24]:

      plot_image_labels_prediction(x_test_image,
                                   y_test_label,
                                   prediction,
                                   0,
                                   50)


      # In[25]:

      # Confusion matrix: rows are the true labels, columns are the predicted labels
      pd.crosstab(y_test_label,
                  prediction,
                  rownames=['label'],
                  colnames=['predict'])

    MLP stands for Multi-Layer Perceptron.

    Import the required modules and load the MNIST data. If the download is slow, fetch the file through some other channel and place it under the Keras data cache directory (C:\Users\<your user>\.keras\datasets on Windows, i.e. ~/.keras/datasets); the file is available here: https://pan.baidu.com/s/1LkOJhMzRM2uizXufnJUxlw. A sketch of loading from the local cache follows below.
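
    A minimal sketch of using a manually downloaded copy, assuming the file is named mnist.npz (the cache file name this Keras version looks for) and sits in the current directory:

      import os
      import shutil
      from keras.datasets import mnist

      # Assumption: mnist.npz was downloaded by hand (e.g. from the Baidu link above)
      cache_dir = os.path.join(os.path.expanduser('~'), '.keras', 'datasets')
      os.makedirs(cache_dir, exist_ok=True)  # create ~/.keras/datasets if it does not exist yet
      shutil.copy('mnist.npz', os.path.join(cache_dir, 'mnist.npz'))

      # load_data() now finds the cached file and skips the slow download
      (x_train_image, y_train_label), (x_test_image, y_test_label) = mnist.load_data()
      print(x_train_image.shape, x_test_image.shape)  # (60000, 28, 28) (10000, 28, 28)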

    Reshape each 28x28 image into a vector of 784 points, then scale the 784 pixel values from 0-255 to numbers between 0 and 1.
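
    A quick check, using the arrays defined in the code above, of what the reshape, scaling, and one-hot steps produce:

      # Each image is 28*28 = 784 pixels with integer values 0-255
      flat = x_train_image.reshape(60000, 784).astype('float32')
      scaled = flat / 255
      print(flat[0].max(), scaled[0].max())  # typically 255.0 and 1.0

      # One-hot encoding used for the labels: 5 becomes [0,0,0,0,0,1,0,0,0,0]
      from keras.utils import np_utils
      print(np_utils.to_categorical([5], num_classes=10))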

    Import Sequential, Dense, and Dropout and build the model: set the number of units in the input, hidden, and output layers, choose the activation functions for the hidden and output layers, and draw the initial weights from a normal distribution.
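
    The same network can also be written as a single Sequential constructor call; this sketch (using a separate name, model_alt) is equivalent to the layer-by-layer model.add calls in the code above:

      from keras.models import Sequential
      from keras.layers import Dense, Dropout

      model_alt = Sequential([
          Dense(units=1000, input_dim=784, kernel_initializer='normal', activation='relu'),
          Dropout(0.5),  # randomly drops 50% of hidden activations during training
          Dense(units=10, kernel_initializer='normal', activation='softmax'),
      ])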

    Inspect the network that was built with model.summary().
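
    The parameter counts printed by model.summary() can be checked by hand; each Dense layer has (inputs x units) weights plus one bias per unit:

      hidden_params = 784 * 1000 + 1000              # 785,000 parameters in the 1000-unit hidden layer
      output_params = 1000 * 10 + 10                 # 10,010 parameters in the 10-unit output layer
      total_params = hidden_params + output_params   # 795,010 in total (Dropout adds none)
      print(hidden_params, output_params, total_params)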

    Set the loss function, optimizer, and evaluation metric, then start training: pass the input data, the expected (one-hot) labels, the split ratio between training and validation data, the number of training epochs, the batch size, and the verbosity that controls how the training progress is displayed. Training then runs and prints its progress.
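
    With validation_split=0.2, epochs=10, and batch_size=200, the numbers work out as follows (a quick sanity check rather than extra functionality):

      train_samples = 60000
      validation_samples = int(train_samples * 0.2)       # 12,000 images held out for validation
      fit_samples = train_samples - validation_samples    # 48,000 images actually used for fitting
      batches_per_epoch = fit_samples // 200              # 240 weight updates per epoch
      print(validation_samples, fit_samples, batches_per_epoch)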

    After training finishes, evaluate the accuracy on the test set, plot the training curves, and display test images together with their predicted labels; the pd.crosstab confusion matrix then shows which digits are most often confused with one another.
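
    One caveat: model.predict_classes is only available on Sequential models in older Keras releases; if you run this on a newer tf.keras, an equivalent sketch is to take the argmax over the 10 predicted class probabilities:

      import numpy as np

      # Each row of probabilities holds 10 values, one per digit class
      probabilities = model.predict(x_test_normalize)
      prediction = np.argmax(probabilities, axis=1)  # index of the most probable class
      print(prediction[:10])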

  • Original post: https://www.cnblogs.com/bai2018/p/10355557.html