See: http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/net_surgery.ipynb
Take the standard Caffe reference ImageNet model, "CaffeNet", and transform it into a fully convolutional network for efficient, dense inference on large inputs. Instead of a single classification, the converted model produces a classification map covering the whole input. For example, a 451×451 input yields an 8×8 map of class predictions, i.e. 64× as many outputs, in only about 3× the time of a single classification. The efficiency comes from amortizing the computation shared by overlapping receptive fields, which is natural to the convolutional structure of the network; a dense-inference sketch appears at the end of this section.
To do this, we cast Caffe's inner-product (fully connected) layers as convolutional layers. This is the only change needed: no other layer depends on the spatial size of its input. Convolution is translation-invariant, activations are elementwise operations, and so on. When the fully connected fc6 layer becomes the convolutional fc6-conv layer, it turns into a 6×6 filter, since its input (the pool5 output) is 6×6 spatially. Keep the output map / receptive field size relation in mind, output = (input - kernel_size) / stride + 1, and work out the indexing details for a clear understanding.
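To make that arithmetic concrete, the sketch below walks one spatial dimension through CaffeNet's conv/pool stack; out_size is a hypothetical helper, and the kernel/stride/pad values are taken from the standard CaffeNet definition. A 227×227 input collapses to a single output, while a 451×451 input yields the 8×8 map described above.

def out_size(n, kernel, stride=1, pad=0):
    # Output map size for one conv/pool layer: (input + 2*pad - kernel) / stride + 1.
    # Note: Caffe pools with ceil rounding; for these input sizes the divisions
    # are exact, so floor arithmetic gives the same result.
    return (n + 2 * pad - kernel) // stride + 1

def caffenet_map(n):
    # conv1, pool1, conv2, pool2, conv3, conv4, conv5, pool5, fc6-conv
    for kernel, stride, pad in [(11, 4, 0), (3, 2, 0), (5, 1, 2), (3, 2, 0),
                                (3, 1, 1), (3, 1, 1), (3, 1, 1), (3, 2, 0),
                                (6, 1, 0)]:
        n = out_size(n, kernel, stride, pad)
    return n

print(caffenet_map(227))  # 1: a single classification
print(caffenet_map(451))  # 8: an 8x8 classification map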
# Load the original network and extract the fully connected layers' parameters.
import caffe

net = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt',
                '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                caffe.TEST)
params = ['fc6', 'fc7', 'fc8']
# fc_params = {name: (weights, biases)}
fc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}

for fc in params:
    print('{} weights are {} dimensional and biases are {} dimensional'.format(
        fc, fc_params[fc][0].shape, fc_params[fc][1].shape))
# Load the fully convolutional network to transplant the parameters.
net_full_conv = caffe.Net('net_surgery/bvlc_caffenet_full_conv.prototxt',
                          '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                          caffe.TEST)
params_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']
# conv_params = {name: (weights, biases)}
conv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data)
               for pr in params_full_conv}

for conv in params_full_conv:
    print('{} weights are {} dimensional and biases are {} dimensional'.format(
        conv, conv_params[conv][0].shape, conv_params[conv][1].shape))
fc6-conv weights are (4096, 256, 6, 6) dimensional and biases are (4096,) dimensional
fc7-conv weights are (4096, 4096, 1, 1) dimensional and biases are (4096,) dimensional
fc8-conv weights are (1000, 4096, 1, 1) dimensional and biases are (1000,) dimensional
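These shapes match the original fully connected parameters element for element: fc6's weights are (4096, 9216) in the original network, and 9216 = 256 × 6 × 6, so the (4096, 256, 6, 6) fc6-conv filters hold exactly the same number of values. A quick check, continuing from the two snippets above:

# Each conv blob holds the same number of elements as its fc counterpart.
for pr, pr_conv in zip(params, params_full_conv):
    assert fc_params[pr][0].size == conv_params[pr_conv][0].size
    assert fc_params[pr][1].size == conv_params[pr_conv][1].size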
The same caffemodel thus plays a different role in each network: the fully connected layers read these weights as flat inner-product matrices, while the convolutional layers read them as banks of spatial filters.
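The transplant itself follows the linked notebook: since the inner-product and convolution weights are laid out identically in memory (row-major), the flat views can be assigned directly, and only the biases need a plain copy. The forward pass below is a minimal sketch of the dense inference promised above, using a random array as a stand-in for a properly preprocessed image.

import numpy as np

# Copy the fc weights and biases into the conv layers; .flat unrolls the arrays.
for pr, pr_conv in zip(params, params_full_conv):
    conv_params[pr_conv][0].flat = fc_params[pr][0].flat
    conv_params[pr_conv][1][...] = fc_params[pr][1]

# Save the converted weights (output path as in the linked notebook).
net_full_conv.save('net_surgery/bvlc_caffenet_full_conv.caffemodel')

# Dense inference: a 451x451 input yields an 8x8 map of 1000-way predictions.
net_full_conv.blobs['data'].reshape(1, 3, 451, 451)
net_full_conv.blobs['data'].data[...] = np.random.rand(1, 3, 451, 451)  # stand-in image
out = net_full_conv.forward()
print(out['prob'].shape)  # (1, 1000, 8, 8): one class distribution per output site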