• Handwritten Digit Recognition - Small Dataset


    Hello teacher, I am very sorry that I did not submit the 11th assignment on time. I was helping my family look after their shop at the time and overlooked completing the assignment. My sincere apologies. Below is the link for my make-up submission of the 11th assignment; please check it.

    Link:

    Classification and supervised learning, the Naive Bayes classification algorithm

    1. Handwritten digits dataset

    • from sklearn.datasets import load_digits
    • digits = load_digits()

     Code:

    from sklearn.datasets import load_digits
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
    
    digits = load_digits()
    Xd = digits.data.astype(np.float32)
    Yd = digits.target.astype(np.float32).reshape(-1, 1)
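
    Not in the original code: a quick look at the dataset's shapes and one sample image confirms what load_digits() returns — 1,797 samples, each an 8×8 grayscale image with an integer label from 0 to 9. A minimal sketch, assuming matplotlib is available:

    import matplotlib.pyplot as plt

    print(digits.data.shape)    # (1797, 64): each image flattened into 64 features
    print(digits.images.shape)  # (1797, 8, 8): the same images as 8x8 arrays
    print(digits.target[:10])   # integer labels 0-9

    # display the first sample together with its label
    plt.imshow(digits.images[0], cmap='gray_r')
    plt.title('label: %d' % digits.target[0])
    plt.show()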

    2. Image data preprocessing

    • x: normalization with MinMaxScaler()
    • y: one-hot encoding with OneHotEncoder() or to_categorical (a to_categorical sketch follows the result below)
    • train/test split
    • tensor structure

    Code:

    scaler = MinMaxScaler()
    # scale every pixel feature into the range [0, 1]
    Xd = scaler.fit_transform(Xd)
    print("Normalized Xd:")
    print(Xd)
    # one-hot encode the labels into a dense (n_samples, 10) array
    Y = OneHotEncoder().fit_transform(Yd).toarray()
    print("One-hot encoded Y:")
    print(Y)
    # reshape the flat 64-feature rows into 8x8x1 image tensors
    X = Xd.reshape(-1, 8, 8, 1)
    # stratified 80/20 train/test split
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0, stratify=Y)
    print('X_train.shape, X_test.shape, y_train.shape, y_test.shape:', X_train.shape, X_test.shape, y_train.shape, y_test.shape)

    Result:
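
    As an aside (not in the original code): the bullet list above also mentions to_categorical as an alternative to OneHotEncoder(). A minimal sketch, assuming a TensorFlow 2.x installation; Y_alt is just an illustrative name:

    from tensorflow.keras.utils import to_categorical

    # equivalent one-hot encoding of the integer labels, shape (1797, 10)
    Y_alt = to_categorical(digits.target, num_classes=10)
    print(Y_alt[:3])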

     

    3. Designing the convolutional neural network structure

    • Draw the model structure diagram and explain the design rationale (a plot_model sketch follows the result below).

    Design rationale: the network stacks convolution + pooling blocks with increasing filter counts (16 → 32 → 64 → 128), inserts Dropout after each block to reduce overfitting, then flattens into a 128-unit fully connected layer and a 10-way softmax output for the ten digit classes.

    Structure diagram:

    Code:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

    # build the model
    model = Sequential()
    # kernel size used by all convolution layers
    ks = (3, 3)
    input_shape = X_train.shape[1:]
    # first convolution layer
    model.add(Conv2D(filters=16, kernel_size=ks, padding='same', input_shape=input_shape, activation='relu'))
    # pooling layer (1)
    model.add(MaxPool2D(pool_size=(2, 2)))
    # dropout to reduce overfitting by randomly dropping connections
    model.add(Dropout(0.25))
    # second convolution layer
    model.add(Conv2D(filters=32, kernel_size=ks, padding='same', activation='relu'))
    # pooling layer (2)
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    # third convolution layer
    model.add(Conv2D(filters=64, kernel_size=ks, padding='same', activation='relu'))
    # fourth convolution layer
    model.add(Conv2D(filters=128, kernel_size=ks, padding='same', activation='relu'))
    # pooling layer (3)
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    # flatten layer
    model.add(Flatten())
    # fully connected layer
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.25))
    # output layer: softmax over the 10 digit classes
    model.add(Dense(10, activation='softmax'))
    model.summary()

    Result:
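
    Not in the original code: the structure-diagram bullet above can also be satisfied programmatically with Keras' plot_model utility, which writes the layer graph to an image file. A minimal sketch, assuming TensorFlow 2.x plus the pydot and graphviz packages are installed:

    from tensorflow.keras.utils import plot_model

    # write the layer graph (with output shapes) to model.png
    plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)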

    4. Model training

    Code:

    import matplotlib.pyplot as plt
    # categorical cross-entropy loss with the Adam optimizer
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    train_history = model.fit(x=X_train, y=y_train, validation_split=0.2, batch_size=300, epochs=10, verbose=2)
    # plot the training-history curve for the given metric
    def show_train_history(train_history, train, validation):
        plt.plot(train_history.history[train])
        plt.plot(train_history.history[validation])
        plt.title('Train History')
        plt.ylabel(train)
        plt.xlabel('epoch')
        plt.legend(['train', 'validation'], loc='upper left')
        plt.show()
    # accuracy curves
    show_train_history(train_history, 'accuracy', 'val_accuracy')
    # loss curves
    show_train_history(train_history, 'loss', 'val_loss')

    Result:
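
    Not part of the original training code: since a validation split is already used, an EarlyStopping callback is a common way to stop training once the validation loss stops improving. A minimal sketch under that assumption; the patience and epoch values are illustrative, and in practice this would replace the fit call above:

    from tensorflow.keras.callbacks import EarlyStopping

    # stop once val_loss has not improved for 3 epochs and restore the best weights
    early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
    train_history = model.fit(x=X_train, y=y_train, validation_split=0.2,
                              batch_size=300, epochs=50, callbacks=[early_stop], verbose=2)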

    5. Model evaluation

    • model.evaluate()
    • cross table and confusion matrix
    • pandas.crosstab
    • seaborn.heatmap

    Code:

    import pandas as pd
    import seaborn as sns
    # loss and accuracy on the held-out test set
    score = model.evaluate(X_test, y_test)
    print('Test loss and accuracy:', score)
    # predicted class for each test sample
    y_pre = np.argmax(model.predict(X_test), axis=1)
    print('y_pred:', y_pre[:10])
    # recover the true classes from the one-hot test labels
    y_true = np.argmax(y_test, axis=1)
    # cross table comparing true and predicted labels
    a = pd.crosstab(y_true, y_pre, rownames=['Labels'], colnames=['Predict'])
    print(a)
    # confusion matrix drawn as a heatmap
    df = pd.DataFrame(a)
    sns.heatmap(df, annot=True, cmap="Reds", linewidths=0.2, linecolor='grey')
    plt.show()

    Result:
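
    Not in the original code: scikit-learn's classification_report and confusion_matrix give another view of the same evaluation, with per-class precision and recall. A minimal sketch reusing y_true and y_pre from the code above:

    from sklearn.metrics import classification_report, confusion_matrix

    # per-class precision, recall and F1 on the test set
    print(classification_report(y_true, y_pre))
    # raw confusion matrix (rows: true labels, columns: predictions)
    print(confusion_matrix(y_true, y_pre))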
