• A TensorFlow implementation of ResNet50


    I've been reading the residual network paper recently and going through a lot of the implementations posted online, and many of them are written without much logic; some of that code is simply wrong and only happens to produce the right result. It also baffled me that something as simple as pooling gets dressed up as "downsampling", which left me thoroughly confused while reading the paper. A single model that suits most datasets barely exists. Following the paper and some online posts, I implemented resnet50, but I haven't trained it: I don't have a good 224*224 dataset, my disk is too small, and large jobs won't run on my machine. I'm posting the code today for anyone who wants a reference. One more thing: the most important part of reproducing a model is getting the network structure straight. It's best to build the model with the architecture diagram in front of you, so you can see clearly how the tensor changes at every layer and won't get lost. I found a decent diagram of the network structure online and am sharing it here: https://blog.csdn.net/haoji007/article/details/90259359
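
    To make the shape bookkeeping concrete, here is how the feature map should change through the resnet50 below for a 224*224 input (batch dimension omitted, sizes as in Table 1 of the paper):

        input                224 x 224 x 3
        conv 7x7, stride 2   112 x 112 x 64
        max pool, stride 2    56 x  56 x 64
        block1 (3 units)      56 x  56 x 256
        block2 (4 units)      28 x  28 x 512
        block3 (6 units)      14 x  14 x 1024
        block4 (3 units)       7 x   7 x 2048
        avg pool 7x7           1 x   1 x 2048
        final conv 1x1         1 x   1 x num_output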

    The code follows. I'm confident that as long as you've read the residual network paper carefully you will understand it. I also plan to build an 18-layer residual network tonight and train it on cifar10 to see how it does; a sketch of its basic block is at the end of this post. Here is the code:

    import tensorflow as tf
    import tensorflow.contrib.slim as slim
    
    WEIGHT_DECAY = 0.01
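    # Note: slim.l2_regularizer only collects these L2 penalties into
    # tf.GraphKeys.REGULARIZATION_LOSSES; they are not applied unless added to
    # the training loss by hand (see the training sketch at the end of the post).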
    
    # I originally added this after seeing it in someone else's post, but while
    # building the model I found it isn't actually needed: resizing the feature
    # map can be done entirely with strided convolutions, so I commented it out
    # and everything still works. Also, since TensorFlow has been pushing updates
    # again lately, there may be some warnings, but nothing serious.
    
    # def sampling(input_tensor,
    #              ksize=1,
    #              stride=2):
    #     data = input_tensor
    #     if stride > 1:
    #         data = slim.max_pool2d(data, ksize, stride=stride)
    #         print('sampling', 2)
    #     return data
    
    
    def conv2d_same(input_tensor,
                    num_outputs,
                    kernel_size,
                    stride,
                    is_train=True,
                    activation_fn=tf.nn.relu,
                    use_batch_norm=True
                    ):
        data = input_tensor
        if stride == 1:
            data = slim.conv2d(inputs=data,
                               num_outputs=num_outputs,
                               kernel_size=kernel_size,
                               stride=stride,
                               weights_regularizer=slim.l2_regularizer(WEIGHT_DECAY),
                               activation_fn=None,
                               padding='SAME',
                               )
        else:
            pad_total = kernel_size - 1
            pad_begin = pad_total // 2
            pad_end = pad_total - pad_begin
            data = tf.pad(data, [[0, 0], [pad_begin, pad_end], [pad_begin, pad_end], [0, 0]])
            data = slim.conv2d(data,
                               num_outputs=num_outputs,
                               kernel_size=kernel_size,
                               stride=stride,
                               weights_regularizer=slim.l2_regularizer(WEIGHT_DECAY),
                               activation_fn=None,
                               padding='VALID',
                               )
        if use_batch_norm:
            data = tf.layers.batch_normalization(data, training=is_train)
        if activation_fn is not None:
            data = activation_fn(data)
        return data
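
    # A note on conv2d_same above: when stride > 1 it pads the input explicitly
    # and convolves with padding='VALID', so the padding no longer depends on the
    # input size (plain padding='SAME' with stride 2 pads asymmetrically based on
    # the input size). The official slim ResNet code uses the same trick in its
    # resnet_utils.conv2d_same helper.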
    
    
    def bottle_net(input_tensor, output_depth, is_train, stride=1):
        data = input_tensor
        depth = input_tensor.get_shape().as_list()[-1]
        # the identity shortcut only works when both depth and spatial size are
        # unchanged; otherwise project with a 1x1 conv (batch norm, no ReLU)
        if depth == output_depth and stride == 1:
            shortcut_tensor = input_tensor
        else:
            shortcut_tensor = conv2d_same(input_tensor, output_depth, 1, stride, is_train=is_train,
                                          activation_fn=None, use_batch_norm=True)
        # bottleneck: 1x1 reduce -> 3x3 (carries the stride) -> 1x1 expand
        data = conv2d_same(data, output_depth // 4, 1, 1, is_train=is_train)
        data = conv2d_same(data, output_depth // 4, 3, stride, is_train=is_train)
        # the last conv keeps batch norm but no ReLU; the activation is applied
        # after the addition, as in the paper
        data = conv2d_same(data, output_depth, 1, 1, is_train=is_train, activation_fn=None, use_batch_norm=True)
    
        # residual connection: add the shortcut, then apply ReLU
        data = data + shortcut_tensor
        data = tf.nn.relu(data)
        return data
    
    
    def create_block(input_tensor, output_depth, block_nums, init_stride=1, is_train=True, scope='block'):
        with tf.variable_scope(scope):
            data = bottle_net(input_tensor, output_depth, is_train=is_train, stride=init_stride)
            for i in range(1, block_nums):
                data = bottle_net(data, output_depth, is_train=is_train)
            return data
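
    # Only the first bottleneck in each group downsamples (via init_stride); the
    # remaining units keep stride 1, matching Table 1 of the ResNet paper.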
    
    
    def ResNet(input_tensor, num_output, is_train, scope='resnet50'):
        data = input_tensor
        with tf.variable_scope(scope):
            data = conv2d_same(data, 64, 7, 2, is_train=is_train, use_batch_norm=True)
            data = slim.max_pool2d(data, 3, 2, padding='SAME', scope='pool_1')
            # first group of residual blocks
            data = create_block(data, 256, 3, init_stride=1, is_train=is_train, scope='block1')
    
            # second group of residual blocks
            data = create_block(data, 512, 4, init_stride=2, is_train=is_train, scope='block2')
    
            # third group of residual blocks
            data = create_block(data, 1024, 6, init_stride=2, is_train=is_train, scope='block3')
    
            # fourth group of residual blocks
            data = create_block(data, 2048, 3, init_stride=2, is_train=is_train, scope='block4')
    
            # finally global average pooling and the classifier; a 1x1 conv on
            # the 1x1 feature map is equivalent to a fully connected layer
            data = slim.avg_pool2d(data, 7)
            data = slim.conv2d(data, num_output, 1, activation_fn=None, scope='final_conv')
    
            data_shape = data.get_shape().as_list()
            nodes = data_shape[1] * data_shape[2] * data_shape[3]
            data = tf.reshape(data, [-1, nodes])
    
            return data
    
    
    if __name__ == '__main__':
        x = tf.random_normal([32, 224, 224, 3])
        data = ResNet(x, 1000, True)
        print(data)
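
    One thing to watch if you do train this: tf.layers.batch_normalization with training=True registers its moving-average updates in tf.GraphKeys.UPDATE_OPS, and they will not run unless the train op depends on them; likewise, the L2 penalties collected by slim.l2_regularizer must be added to the loss by hand. Below is a minimal training sketch under those two caveats (the placeholders and optimizer settings are illustrative choices of mine, not part of the original post):

    # minimal training sketch; the placeholders and the optimizer here are my own
    # illustrative choices, not from the original post
    images = tf.placeholder(tf.float32, [None, 224, 224, 3])
    labels = tf.placeholder(tf.float32, [None, 1000])  # one-hot labels
    logits = ResNet(images, 1000, is_train=True)
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
    # add the collected L2 weight-decay penalties to the objective
    loss = cross_entropy + tf.losses.get_regularization_loss()
    # make each training step also run the batch-norm moving-average updates
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.MomentumOptimizer(0.1, 0.9).minimize(loss)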

    My own ability is quite limited, so if you find mistakes I hope you will point them out.
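
    As for the 18-layer network for cifar10 I mentioned above: the paper replaces the bottleneck with a basic block of two 3x3 convolutions, with two units per group in ResNet18. Below is a minimal sketch of that block reusing conv2d_same from above; it is only a sketch, since the 7x7 stride-2 stem and the block counts would still need adjusting for 32*32 cifar10 images:

    def basic_block(input_tensor, output_depth, is_train, stride=1):
        depth = input_tensor.get_shape().as_list()[-1]
        if depth == output_depth and stride == 1:
            shortcut_tensor = input_tensor
        else:
            # 1x1 projection shortcut: batch norm, no ReLU, same as in bottle_net
            shortcut_tensor = conv2d_same(input_tensor, output_depth, 1, stride,
                                          is_train=is_train, activation_fn=None)
        data = conv2d_same(input_tensor, output_depth, 3, stride, is_train=is_train)
        data = conv2d_same(data, output_depth, 3, 1, is_train=is_train,
                           activation_fn=None)
        # add the shortcut, then apply the activation, as in the paper
        return tf.nn.relu(data + shortcut_tensor)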

  • Original post: https://www.cnblogs.com/daremosiranaihana/p/11655343.html