• Convolutional Neural Networks (CNN)


    Basic Concepts

    • Convolution operation
      • Definition: $f(i,j,k)=\sum_{m,n}g(i-m,j-n,k)\,h(m,n),\quad y(i,j)=\sum_k w_k f(i,j,k)$ (see the numpy sketch after this list)
        • Translation invariant, with linear mixing across depth. In particular, with a 1*1 kernel it reduces to a linear transformation over the depth dimension.
      • Sparse interactions: the kernel size (the range of m, n) is much smaller than the input size (the range of i, j).
      • Parameter sharing: a single kernel has only (M*N+K) parameters.
    • A pooling function replaces the network's output at a location with a summary statistic of the nearby outputs.
      • Downsampling; makes features insensitive to small shifts.
    • There are three basic strategies for obtaining convolution kernels without supervised training:
      • random initialization, hand design, and unsupervised learning.
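
    As a concrete illustration of the definition above, here is a minimal numpy sketch (array names and sizes are hypothetical): a single M*N spatial kernel h is shared across all positions and depths, and K weights w mix the depth channels, giving the (M*N+K) parameters noted above.

    import numpy as np

    def conv_depth_mix(g, h, w):
        # f(i,j,k) = sum_{m,n} g(i-m, j-n, k) * h(m,n)   (valid positions only)
        # y(i,j)   = sum_k w[k] * f(i,j,k)
        H, W, K = g.shape          # input height, width, depth
        M, N = h.shape             # spatial kernel size
        f = np.zeros((H - M + 1, W - N + 1, K))
        for i in range(H - M + 1):
            for j in range(W - N + 1):
                for k in range(K):
                    # flip the kernel so this is a convolution rather than a correlation
                    f[i, j, k] = np.sum(g[i:i + M, j:j + N, k] * h[::-1, ::-1])
        return f @ w               # mix the K depth channels linearly

    # with a 1*1 kernel (M = N = 1) the spatial sum disappears and y is simply a
    # linear transformation of the depth channels at each position
    y = conv_depth_mix(np.random.rand(28, 28, 3), np.ones((5, 5)) / 25.0, np.array([0.5, 0.3, 0.2]))
    print(y.shape)  # (24, 24)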

    Typical Architectures

    LeNet-5

    Uses convolution, pooling, and nonlinear activations (tanh or sigmoid).

    # imports shared by the Sequential models in this post (Dropout is used below)
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    model = Sequential()
    # channels-last MNIST-style input: 28x28 grayscale images
    model.add(Conv2D(filters=6, kernel_size=(5,5), padding='valid', input_shape=(28,28,1), activation='tanh'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Conv2D(filters=16, kernel_size=(5,5), padding='valid', activation='tanh'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Flatten())
    model.add(Dense(120, activation='tanh'))
    model.add(Dense(84, activation='tanh'))
    model.add(Dense(10, activation='softmax'))
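
    A minimal sketch of compiling and training the model above (the optimizer, loss, and batch settings are illustrative choices, not taken from the original paper):

    model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    # expects MNIST-style inputs of shape (28, 28, 1) and one-hot labels of length 10, e.g.:
    # model.fit(x_train, y_train, batch_size=128, epochs=10, validation_split=0.1)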

    AlexNet

    • Introduced the ReLU activation function.

    • Used dropout.

    • Strengthened the training regime: trained on GPUs and used data augmentation (see the sketch after the code below).

    model = Sequential()
    # conv1: 11x11 kernels, stride 4, on a 227x227x3 input
    model.add(Conv2D(96,(11,11),strides=(4,4),input_shape=(227,227,3),padding='valid',activation='relu',kernel_initializer='uniform'))
    model.add(MaxPooling2D(pool_size=(3,3),strides=(2,2)))
    # conv2: 5x5 kernels
    model.add(Conv2D(256,(5,5),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))
    model.add(MaxPooling2D(pool_size=(3,3),strides=(2,2)))
    # conv3-conv5: stacked 3x3 kernels
    model.add(Conv2D(384,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))
    model.add(Conv2D(384,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))
    model.add(Conv2D(256,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))
    model.add(MaxPooling2D(pool_size=(3,3),strides=(2,2)))
    # fully connected classifier with dropout
    model.add(Flatten())
    model.add(Dense(4096,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096,activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1000,activation='softmax'))
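
    A minimal sketch of the data augmentation mentioned above, using Keras' ImageDataGenerator (the transforms and their parameters are illustrative stand-ins, not AlexNet's exact crop/flip/PCA-colour scheme):

    from keras.preprocessing.image import ImageDataGenerator

    # random shifts, horizontal flips and a small colour jitter
    datagen = ImageDataGenerator(width_shift_range=0.1,
                                 height_shift_range=0.1,
                                 horizontal_flip=True,
                                 channel_shift_range=20.0)
    # stream augmented batches into training, e.g.:
    # model.fit_generator(datagen.flow(x_train, y_train, batch_size=128), epochs=90)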

    VGG13

    • Training: multi-scale rescaling of the input images and transfer learning (see the sketch after the code below).
    model = Sequential()  
    model.add(Conv2D(64,(3,3),strides=(1,1),input_shape=(224,224,3),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(Conv2D(64,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(MaxPooling2D(pool_size=(2,2)))  
    model.add(Conv2D(128,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))
    model.add(Conv2D(128,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(MaxPooling2D(pool_size=(2,2)))  
    model.add(Conv2D(256,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(Conv2D(256,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(MaxPooling2D(pool_size=(2,2)))  
    model.add(Conv2D(512,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(Conv2D(512,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(MaxPooling2D(pool_size=(2,2)))  
    model.add(Conv2D(512,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(Conv2D(512,(3,3),strides=(1,1),padding='same',activation='relu',kernel_initializer='uniform'))  
    model.add(MaxPooling2D(pool_size=(2,2)))  
    model.add(Flatten())  
    model.add(Dense(4096,activation='relu'))  
    model.add(Dropout(0.5))  
    model.add(Dense(4096,activation='relu'))  
    model.add(Dropout(0.5))  
    model.add(Dense(1000,activation='softmax')) 
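
    A minimal sketch of the transfer-learning idea mentioned above, reusing ImageNet-pretrained weights from keras.applications (VGG16 rather than VGG13, since that is what Keras ships; the new head and the 10-class target task are illustrative assumptions):

    from keras.applications import VGG16
    from keras.models import Model
    from keras.layers import Flatten, Dense

    base = VGG16(weights='imagenet', include_top=False, input_shape=(224,224,3))
    for layer in base.layers:
        layer.trainable = False            # freeze the pretrained convolutional features

    x = Flatten()(base.output)
    x = Dense(256, activation='relu')(x)
    out = Dense(10, activation='softmax')(x)   # new classifier for the target task
    model = Model(inputs=base.input, outputs=out)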

    Inception

    • Captures multiple scales within a single module (parallel branches concatenated along the depth axis).
    from keras.layers import Conv2D, MaxPooling2D, BatchNormalization, concatenate

    # Conv2D + BatchNormalization helper, reused by the ResNet block below
    def Conv2d_BN(x, nb_filter, kernel_size, padding='same', strides=(1,1), name=None):
        if name is not None:  
            bn_name = name + '_bn'  
            conv_name = name + '_conv'  
        else:  
            bn_name = None  
            conv_name = None    
        x = Conv2D(nb_filter,kernel_size,padding=padding,strides=strides,activation='relu',name=conv_name)(x)  
        x = BatchNormalization(axis=3,name=bn_name)(x)  
        return x  
      
    def Inception(x,nb_filter):  
        branch1x1 = Conv2d_BN(x,nb_filter,(1,1), padding='same',strides=(1,1),name=None)  
      
        branch3x3 = Conv2d_BN(x,nb_filter,(1,1), padding='same',strides=(1,1),name=None)  
        branch3x3 = Conv2d_BN(branch3x3,nb_filter,(3,3), padding='same',strides=(1,1),name=None)  
      
        branch5x5 = Conv2d_BN(x,nb_filter,(1,1), padding='same',strides=(1,1),name=None)  
        branch5x5 = Conv2d_BN(branch5x5,nb_filter,(5,5), padding='same',strides=(1,1),name=None)
      
        branchpool = MaxPooling2D(pool_size=(3,3),strides=(1,1),padding='same')(x)  
        branchpool = Conv2d_BN(branchpool,nb_filter,(1,1),padding='same',strides=(1,1),name=None)  
      
        x = concatenate([branch1x1,branch3x3,branch5x5,branchpool],axis=3)  
      
        return x 
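
    A minimal sketch of assembling a small network from the Inception module above (the stem, filter counts and depth are illustrative; the full GoogLeNet stacks nine such modules plus auxiliary classifiers):

    from keras.models import Model
    from keras.layers import Input, GlobalAveragePooling2D, Dense

    inpt = Input(shape=(224,224,3))
    x = Conv2d_BN(inpt, 64, (7,7), strides=(2,2))             # stem convolution
    x = MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same')(x)
    x = Inception(x, 64)
    x = Inception(x, 120)
    x = GlobalAveragePooling2D()(x)
    out = Dense(1000, activation='softmax')(x)
    model = Model(inputs=inpt, outputs=out)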

    ResNet

    • Adds shortcut connections around deeper layers, improving the effectiveness of gradient propagation.
    from keras.layers import add

    # residual block: two Conv-BN layers plus an identity (or convolutional) shortcut
    def Conv_Block(inpt, nb_filter, kernel_size, strides=(1,1), with_conv_shortcut=False):
        x = Conv2d_BN(inpt,nb_filter=nb_filter,kernel_size=kernel_size,strides=strides,padding='same')  
        x = Conv2d_BN(x, nb_filter=nb_filter, kernel_size=kernel_size,padding='same')  
        if with_conv_shortcut:  
            shortcut = Conv2d_BN(inpt,nb_filter=nb_filter,strides=strides,kernel_size=kernel_size)  
            x = add([x,shortcut])  
            return x  
        else:  
            x = add([x,inpt])  
            return x  
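
    A minimal sketch of stacking the residual blocks above (a short, illustrative stack rather than a full ResNet-34; a strided block with a convolutional shortcut is used wherever the feature map size or depth changes):

    from keras.models import Model
    from keras.layers import Input, GlobalAveragePooling2D, Dense

    inpt = Input(shape=(224,224,3))
    x = Conv2d_BN(inpt, 64, (7,7), strides=(2,2))             # stem convolution
    x = MaxPooling2D(pool_size=(3,3), strides=(2,2), padding='same')(x)
    x = Conv_Block(x, 64, (3,3))
    x = Conv_Block(x, 64, (3,3))
    # downsampling and widening: the identity shortcut no longer matches, so project with a conv
    x = Conv_Block(x, 128, (3,3), strides=(2,2), with_conv_shortcut=True)
    x = Conv_Block(x, 128, (3,3))
    x = GlobalAveragePooling2D()(x)
    out = Dense(1000, activation='softmax')(x)
    model = Model(inputs=inpt, outputs=out)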

    References

    • Deep learning, www.deeplearning.net
    • LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
    • Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (NIPS), 2012: 1097-1105.
    • Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
    • Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015: 1-9.
    • He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.
    • Keras implementations of common deep learning models (LeNet, AlexNet, ZFNet, VGGNet, GoogLeNet, ResNet), https://blog.csdn.net/wang1127248268/article/details/77258055