• Study Notes TF054: TFLearn and Keras


    Metaframeworks: higher-level libraries built on top of TensorFlow.

    TFLearn. A modular deep learning framework that offers a higher-level API on top of TensorFlow for fast experimentation, while remaining fully transparent and compatible with it.

    Implementing AlexNet with TFLearn.
    https://github.com/tflearn/tflearn/blob/master/examples/images/alexnet.py
    Oxford 17 Category Flower Dataset: http://www.robots.ox.ac.uk/~vgg/data/flowers/17/ . It contains 17 flower categories with 80 images per category, with large variations in pose and lighting.

    # -*- coding: utf-8 -*-
    """ AlexNet.
    Applying 'Alexnet' to Oxford's 17 Category Flower Dataset classification task.
    References:
    - Alex Krizhevsky, Ilya Sutskever & Geoffrey E. Hinton. ImageNet
    Classification with Deep Convolutional Neural Networks. NIPS, 2012.
    - 17 Category Flower Dataset. Maria-Elena Nilsback and Andrew Zisserman.
    Links:
    - [AlexNet Paper](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
    - [Flower Dataset (17)](http://www.robots.ox.ac.uk/~vgg/data/flowers/17/)
    """
    from __future__ import division, print_function, absolute_import
    import tflearn
    from tflearn.layers.core import input_data, dropout, fully_connected
    from tflearn.layers.conv import conv_2d, max_pool_2d
    from tflearn.layers.normalization import local_response_normalization
    from tflearn.layers.estimator import regression
    import tflearn.datasets.oxflower17 as oxflower17
    # Load the data
    X, Y = oxflower17.load_data(one_hot=True, resize_pics=(227, 227))
    # Building 'AlexNet': construct the network
    network = input_data(shape=[None, 227, 227, 3])
    network = conv_2d(network, 96, 11, strides=4, activation='relu')
    network = max_pool_2d(network, 3, strides=2)
    network = local_response_normalization(network)
    network = conv_2d(network, 256, 5, activation='relu')
    network = max_pool_2d(network, 3, strides=2)
    network = local_response_normalization(network)
    network = conv_2d(network, 384, 3, activation='relu')
    network = conv_2d(network, 384, 3, activation='relu')
    network = conv_2d(network, 256, 3, activation='relu')
    network = max_pool_2d(network, 3, strides=2)
    network = local_response_normalization(network)
    network = fully_connected(network, 4096, activation='tanh')
    network = dropout(network, 0.5)
    network = fully_connected(network, 4096, activation='tanh')
    network = dropout(network, 0.5)
    network = fully_connected(network, 17, activation='softmax')
    network = regression(network, optimizer='momentum',
                         loss='categorical_crossentropy',
                         learning_rate=0.001)
    # Training: checkpoint_path sets the prefix used to save model checkpoints
    model = tflearn.DNN(network, checkpoint_path='model_alexnet',
                        max_checkpoints=1, tensorboard_verbose=2)
    model.fit(X, Y, n_epoch=1000, validation_set=0.1, shuffle=True,
              show_metric=True, batch_size=64, snapshot_step=200,
              snapshot_epoch=False, run_id='alexnet_oxflowers17')
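    After training, the tflearn.DNN wrapper can also persist the learned weights and run inference; a minimal sketch (the file name is an illustrative choice, not from the original example):

    # Save the trained weights, reload them, and predict on a few images.
    # 'alexnet_oxflowers17.tflearn' is an illustrative file name.
    model.save('alexnet_oxflowers17.tflearn')
    model.load('alexnet_oxflowers17.tflearn')
    probs = model.predict(X[:5])   # per-class probabilities for 5 images
    print(probs)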

    Keras. A high-level Python neural network framework (https://keras.io/) and the recommended high-level framework for TensorFlow. It is designed for fast prototyping and runs on top of either Theano or TensorFlow. Keras is highly encapsulated and beginner-friendly; its code evolves quickly and it ships with many examples, thorough documentation, and an active community. GPU parallelism is used automatically when available. It is modular: models are assembled from layers, loss functions, optimizers, initializers, activation functions, and normalization modules. It is minimalist, easy to extend with new modules, and written in Python.

    Keras models. The core data structure of Keras is the model, which organizes the network layers. The Sequential model stacks layers in order: single input, single output, with connections only between adjacent layers; it suits simple models. The Model (functional) class is used to build more complex models.
    The Sequential model. After loading the data into X_train, Y_train, X_test, Y_test, build the model:
    from keras.models import Sequential
    from keras.layers.core import Dense, Dropout, Activation

    model = Sequential()
    model.add(Dense(64, input_dim=100))
    model.add(Activation('relu'))
    model.add(Dense(10))
    model.add(Activation('softmax'))

    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    model.fit(X_train, Y_train, batch_size=32, epochs=5)
    loss_and_metrics = model.evaluate(X_test, Y_test, batch_size=32)
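    The Model class mentioned above can express the same two-layer network through the functional API; a minimal sketch using the Keras 2 interface:

    from keras.models import Model
    from keras.layers import Input, Dense

    # Same two-layer network as above, written with the functional API.
    inputs = Input(shape=(100,))
    x = Dense(64, activation='relu')(inputs)
    outputs = Dense(10, activation='softmax')(x)

    model = Model(inputs=inputs, outputs=outputs)
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])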

    Keras example code (https://github.com/fchollet/keras): CIFAR10 image classification (CNN with real-time data augmentation), IMDB movie review sentiment classification (LSTM), Reuters newswire topic classification (multilayer perceptron), MNIST handwritten digit recognition (multilayer perceptron and CNN), OCR and character-level text generation (LSTM).
    Installation: pip install keras . To choose the backend, edit ~/.keras/keras.json so that the last line reads "backend": "tensorflow" .
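    A quick way to confirm which backend Keras actually loaded (a small check, assuming Keras is already installed):

    # Verify the backend Keras picked up from ~/.keras/keras.json.
    from keras import backend as K
    print(K.backend())            # expected: 'tensorflow'
    print(K.image_data_format())  # 'channels_last' is the TensorFlow ordering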

    Implementing a convolutional neural network with Keras.
    https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py

    #!/usr/bin/python
    # -*- coding:utf8 -*-
    '''Trains a simple convnet on the MNIST dataset.
    Gets to 99.25% test accuracy after 12 epochs
    (there is still a lot of margin for parameter tuning).
    16 seconds per epoch on a GRID K520 GPU.
    '''
    from __future__ import print_function
    import keras
    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.layers import Conv2D, MaxPooling2D
    from keras import backend as K
    batch_size = 128
    num_classes = 10 # number of classes
    epochs = 12 # number of training epochs
    # input image dimensions
    img_rows, img_cols = 28, 28
    # Load the data, shuffled and split between train and test sets
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    if K.image_data_format() == 'channels_first':
        # Theano-style ordering: (batch, channels, rows, cols)
        x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
        x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
        input_shape = (1, img_rows, img_cols)
    else:
        # TensorFlow-style ordering: (batch, rows, cols, channels)
        x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
        x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
        input_shape = (img_rows, img_cols, 1)
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    x_train /= 255
    x_test /= 255
    print('x_train shape:', x_train.shape)
    print(x_train.shape[0], 'train samples')
    print(x_test.shape[0], 'test samples')
    # convert class vectors to binary class matrices (one-hot encoding)
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)
    # Build the model: two convolutional layers, one pooling layer, two fully connected layers
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3, 3),
                     activation='relu',
                     input_shape=input_shape))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile the model
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])
    # Train the model
    model.fit(x_train, y_train,
              batch_size=batch_size,
              epochs=epochs,
              verbose=1,
              validation_data=(x_test, y_test))
    # Evaluate the model
    score = model.evaluate(x_test, y_test, verbose=0)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])

    Saving and loading models. https://github.com/fchollet/keras/blob/master/tests/test_model_saving.py .
    Keras's save_model and load_model functions save the model and its weights to an HDF5 file, including the architecture, the weights, and the training configuration (loss function, optimizer). This makes it easy to resume training after an interruption.
    import os
    import tempfile
    import numpy as np
    from numpy.testing import assert_allclose
    from keras import losses, metrics, optimizers
    from keras.layers import Dense, RepeatVector, TimeDistributed
    from keras.models import Sequential, save_model, load_model

    def test_sequential_model_saving():
        model = Sequential()
        model.add(Dense(2, input_shape=(3,)))
        model.add(RepeatVector(3))
        model.add(TimeDistributed(Dense(3)))
        model.compile(loss=losses.MSE,
                      optimizer=optimizers.RMSprop(lr=0.0001),
                      metrics=[metrics.categorical_accuracy],
                      sample_weight_mode='temporal')
        x = np.random.random((1, 3))
        y = np.random.random((1, 3, 3))
        model.train_on_batch(x, y)
        out = model.predict(x)
        _, fname = tempfile.mkstemp('.h5')  # create a temporary HDF5 file
        save_model(model, fname)
        new_model = load_model(fname)
        os.remove(fname)
        out2 = new_model.predict(x)
        assert_allclose(out, out2, atol=1e-05)
        # test that new updates are the same with both models,
        # i.e. the reloaded model stays consistent with the original one
        x = np.random.random((1, 3))
        y = np.random.random((1, 3, 3))
        model.train_on_batch(x, y)
        new_model.train_on_batch(x, y)
        out = model.predict(x)
        out2 = new_model.predict(x)
        assert_allclose(out, out2, atol=1e-05)
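    Outside of this test, the same round trip is typically written with the model's own save method; a short sketch, where my_model.h5 is an illustrative file name:

    # Assumes `model` is any compiled Keras model.
    model.save('my_model.h5')             # architecture + weights + optimizer state
    restored = load_model('my_model.h5')  # ready to predict or continue training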
    To save only the model architecture:

    json_string = model.to_json()
    yaml_string = model.to_yaml()

    These strings can be edited by hand, and a model can be rebuilt from them:

    from keras.models import model_from_json, model_from_yaml
    model = model_from_json(json_string)
    model = model_from_yaml(yaml_string)

    To save only the model weights:

    model.save_weights('my_model_weights.h5')
    model.load_weights('my_model_weights.h5')
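    load_weights restores parameters only, so a model with the same architecture must be built first; Keras 2 also supports loading by layer name, which helps when the architectures only partially match (a small sketch):

    # Load weights into layers whose names match; unmatched layers keep their values.
    model.load_weights('my_model_weights.h5', by_name=True)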

    References:
    《TensorFlow技术解析与实战》

    Recommendations for machine learning job opportunities in Shanghai are welcome. My WeChat: qingxingfengzi

  • Original post: https://www.cnblogs.com/libinggen/p/7768686.html