TensorFlow Beginner Notes 5: Handwritten Digit Recognition with an RNN


    ————————————————————————————————————

    Preface: this post follows the 莫烦Python (Morvan Python) tutorial (highly recommended!)

    ————————————————————————————————————

    Recurrent Neural Networks (RNN)

    Key terms:
    - LSTM: Long Short-Term Memory
    - Vanishing gradients (also called gradient dispersion)
    - Exploding gradients
    - Input gate: controls whether the current input is written into the main (cell-state) line
    - Forget gate: controls whether to temporarily set aside the main line and attend to the current branch
    - Output gate: controls which parts of the state are exposed in the output (the three gates are written out formally below)
    - The data is ordered/sequential
    - Earlier elements influence later ones
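
    To make the three gates concrete, here is the standard textbook LSTM formulation (my own addition, not part of the original post). $x_t$ is the input at step $t$, $h_{t-1}$ the previous output, $c_t$ the cell state (the "main line" above), $\sigma$ the sigmoid, and $\odot$ elementwise multiplication:

    $$
    \begin{aligned}
    f_t &= \sigma(W_f\,[h_{t-1}, x_t] + b_f) && \text{forget gate} \\
    i_t &= \sigma(W_i\,[h_{t-1}, x_t] + b_i) && \text{input gate} \\
    o_t &= \sigma(W_o\,[h_{t-1}, x_t] + b_o) && \text{output gate} \\
    \tilde{c}_t &= \tanh(W_c\,[h_{t-1}, x_t] + b_c) \\
    c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
    h_t &= o_t \odot \tanh(c_t)
    \end{aligned}
    $$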

    Classification with RNN/LSTM

    Recognizing handwritten digits

    • Task: recognize handwritten digits
    • Dataset: MNIST
    • Strategy: read each image one row at a time, treating the 28 rows as a 28-step sequence (see the sketch below)
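
    To see what "one row at a time" means in code, here is a minimal sketch (my own illustration, not from the original post) of turning one flat MNIST image into a 28-step sequence:

    import numpy as np

    flat_image = np.zeros(784, dtype=np.float32)  # stand-in for one MNIST image (28*28 pixels)
    sequence = flat_image.reshape(28, 28)         # 28 time steps, each a 28-pixel row
    # At time step t the RNN reads sequence[t], i.e. row t of the image.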
    RNN usage errors and fixes
    • Error 1:

    Error message: ValueError: Variable tf.nn.dynsmic_rnn/rnn/basic_lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:
    Fix: check whether your training and test code are placed in the same file; if so, add the line below:

    # If the training and test code live in the same file, be sure to add this line!
    tf.reset_default_graph()
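
    Alternatively, the error message itself points to variable reuse: wrapping the cell construction in a scope with reuse=tf.AUTO_REUSE lets a second graph-building call pick up the existing kernel instead of failing. A minimal sketch under that suggestion (the scope name 'my_rnn' is illustrative; X_in and init_state are as in the full code below):

    with tf.variable_scope('my_rnn', reuse=tf.AUTO_REUSE):
        lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(128, forget_bias=1.0, state_is_tuple=True)
        outputs, states = tf.nn.dynamic_rnn(lstm_cell, X_in, initial_state=init_state, time_major=False)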

    If tf.reset_default_graph() in turn triggers Error 2, use the fix below:

    • Error 2
      Error message: ValueError: Tensor("tf.nn.dynsmic_rnn/rnn/Const:0", shape=(1,), dtype=int32) must be from the same graph as Tensor("ExpandDims:0", shape=(1,), dtype=int32).
      Fix:
    # Replace tf.reset_default_graph() with:
        tf.Graph()
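
    For context: tf.Graph() by itself only constructs a fresh graph object; it does not reset or replace the default graph. If you want to be certain every tensor lives in one graph, the usual TF 1.x pattern is an explicit graph context, sketched here as my own addition:

    g = tf.Graph()
    with g.as_default():
        # Build all placeholders, the RNN, and the optimizer here,
        # so every op is guaranteed to belong to the same graph g.
        x = tf.placeholder(tf.float32, [None, 28, 28])
        # ...

    with tf.Session(graph=g) as sess:
        pass  # run the training loop here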
    Complete code

    Below is the complete classification code, followed by the results:

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    
    # load data
    mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
    
    # hyperparameters
    lr = 0.001
    training_iters = 100000   # stop after this many training examples
    batch_size = 128
    
    n_inputs = 28         # each image is 28*28 and we feed one row per step, so 28 inputs per step
    n_steps = 28          # 28 rows, so 28 time steps
    n_hidden_units = 128  # size of the hidden layer (freely chosen)
    n_classes = 10        # digits 0-9, so 10 classes
    
    # placeholders
    x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
    y = tf.placeholder(tf.float32, [None, n_classes])
    
    # weights
    weights = {
        # input weights: (28, 128)
        'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
        # output weights: (128, 10)
        'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes]))
    }
    
    # biases
    biases = {
        'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units, ])),
        'out': tf.Variable(tf.constant(0.1, shape=[n_classes, ]))
    }
    
    # define the RNN
    
    def RNN(X, weights, biases):
        # hidden layer for the input
    
        # X: (128, 28, 28) ==> (128*28, 28)
        X = tf.reshape(X, [-1, n_inputs])
        X_in = tf.matmul(X, weights['in']) + biases['in']       # (128*28, 128)
        X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])  # (128, 28, 128)
    
        # cell
    
        # forget_bias is recommended to be initialized to 1.0
        lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
        init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
        # outputs collects the output at every step; X_in is batch-major
        # (batch, step, input), i.e. time is the second dimension, so
        # time_major=False; it would be True if time came first.
        with tf.variable_scope('tf.nn.dynsmic_rnn'):
            outputs, states = tf.nn.dynamic_rnn(lstm_cell, X_in, initial_state=init_state, time_major=False)
    
        # output: states is an LSTMStateTuple (c, h), so states[1] is the final hidden state
        results = tf.matmul(states[1], weights['out']) + biases['out']
        ## alternative: take the last entry of outputs instead
        # outputs = tf.unstack(tf.transpose(outputs, [1, 0, 2]))
        # results = tf.matmul(outputs[-1], weights['out']) + biases['out']
    
        return results
    
    # If the training and test code live in the same file, be sure to add this line!
    # (see Errors 1 and 2 above)
    tf.Graph()
    
    pred = RNN(x,weights,biases)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y,logits=pred))
    train_op = tf.train.AdamOptimizer(lr).minimize(cost)
    
    correct_pred = tf.equal(tf.argmax(pred,1),tf.argmax(y,1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32))
    
    init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
    with tf.Session() as sess:
        sess.run(init)
        step = 0
        while step*batch_size < training_iters:
            batch_xs,batch_ys = mnist.train.next_batch(batch_size)
            batch_xs = batch_xs.reshape([batch_size,n_steps,n_inputs])
            sess.run([train_op],feed_dict={x:batch_xs,y:batch_ys,})
            if step%50 == 0:
                print(sess.run(accuracy,feed_dict = {
                    x:batch_xs,y:batch_ys,
                }))
            step += 1
    Extracting MNIST_data/train-images-idx3-ubyte.gz
    Extracting MNIST_data/train-labels-idx1-ubyte.gz
    Extracting MNIST_data/t10k-images-idx3-ubyte.gz
    Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
    0.2109375
    0.78125
    0.84375
    0.9140625
    0.921875
    0.921875
    0.9375
    0.9453125
    0.96875
    0.9140625
    0.953125
    0.984375
    0.9609375
    0.9453125
    0.96875
    0.9921875
    

    Judging from the results above, the RNN performs quite well: accuracy on the training batches climbs from about 21% to over 99%.
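
    Note that the printed accuracies are measured on the current training batch. To gauge generalization, the same accuracy op can be run on held-out test images. A minimal sketch (my own addition; it must be placed inside the with tf.Session() block above, and the slice is batch_size images because init_state is built for exactly batch_size examples):

    test_xs = mnist.test.images[:batch_size].reshape([batch_size, n_steps, n_inputs])
    test_ys = mnist.test.labels[:batch_size]
    print('test accuracy:', sess.run(accuracy, feed_dict={x: test_xs, y: test_ys}))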




  • Original post: https://www.cnblogs.com/surecheun/p/9648964.html