• First contact with tensorflow


    Make sure you already understand how neural networks and convolutional neural networks work; if not, study reference 1 first. TensorFlow provides many APIs, which fall into two broad groups. One is the lower-level API (e.g. tf.train), called TensorFlow Core. The other is a relatively high-level API (tf.contrib.learn) built on top of TensorFlow Core, which is very convenient to program against.

    Environment

    python 3.5.3
    tensorflow 1.0.0

    Tensors

    The basic unit of data in TensorFlow is the tensor. A tensor can be intuitively understood as a NumPy array with an extra layer of wrapping around it. A tensor's rank is its number of dimensions. For example:

    [1., 2., 3.] # a rank-1 tensor; its shape is [3]
    [[1., 2., 3.], [4., 5., 6.]] # a rank-2 tensor; its shape is [2, 3]
    [[[1., 2., 3.]], [[7., 8., 9.]]] # a rank-3 tensor; its shape is [2, 1, 3]
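    
    As a quick sanity check, here is a minimal sketch (assuming the TensorFlow 1.0 environment above; the variable names are illustrative) that wraps these lists in tf.constant and inspects the resulting shapes:
    
    import tensorflow as tf
    t1 = tf.constant([1., 2., 3.])                      # rank 1
    t2 = tf.constant([[1., 2., 3.], [4., 5., 6.]])      # rank 2
    t3 = tf.constant([[[1., 2., 3.]], [[7., 8., 9.]]])  # rank 3
    print(t1.get_shape(), t2.get_shape(), t3.get_shape())  # (3,) (2, 3) (2, 1, 3)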
    

    Hello-world usage

    Programming with TensorFlow takes two steps: first build a computational graph, then run it. Each node of the computational graph takes zero or more tensors as input. One kind of node is a constant; it has no inputs and a fixed output (the constant itself). Here are two such nodes:

    node1 = tf.constant(3.0, tf.float32)
    node2 = tf.constant(4.0) # tf.float32 is the default type
    print(node1, node2)
    

    Running this code prints

    Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)
    

    Note that the output contains no concrete values 3.0 or 4.0 (a name such as "Const:0" just identifies the first output of the Const op). This is what "building the computational graph" means: the values only flow through the nodes when the graph is run in a Session. For example:

    node1 = tf.constant(3.0, tf.float32)
    node2 = tf.constant(4.0) # tf.float32 is the default type
    sess = tf.Session()
    print(sess.run([node1, node2]))  # prints [3.0, 4.0]
    

    A slightly more complex example:

    node1 = tf.constant(3.0, tf.float32)
    node2 = tf.constant(4.0) # tf.float32 is the default type
    node3 = tf.add(node1, node2)
    print("node3: ", node3)
    sess = tf.Session()
    print("sess.run(node3): ",sess.run(node3))
    

    placeholder

    What is a placeholder for? A placeholder lets you define an operation first and supply the concrete values later, at execution time. Compare a Python function definition:

    def add(a, b):
        return a + b
    

    Here a and b have no concrete values; they are only bound when the function is called. Loosely but intuitively, a and b can be thought of as placeholders. Now look at TensorFlow's placeholder:

    import tensorflow as tf
    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    adder_node = a + b
    sess = tf.Session()
    print(sess.run(adder_node, {a: 3, b: 4.5}))  # prints 7.5
    print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))  # prints [ 3.  7.]
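    
    A placeholder must be fed whenever the graph is run. As a minimal sketch of what happens otherwise (continuing the snippet above; the error type is the one TensorFlow raises for an unfed placeholder):
    
    try:
        sess.run(adder_node)  # no feed_dict, so a and b have no values
    except tf.errors.InvalidArgumentError:
        print("placeholders must be fed through feed_dict")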
    

    Here is another example:

    import tensorflow as tf
    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    adder_node = a + b
    add_and_triple = adder_node * 3
    sess = tf.Session()
    print(sess.run(add_and_triple, {a: 3, b: 4.5}))  # prints 22.5
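    
    The two operations compose: adder_node computes 3 + 4.5 = 7.5, and add_and_triple multiplies that by 3, giving 22.5.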
    
    

    Variable

    Simply put, the parameters being trained are Variables. Unlike constants, Variables are not initialized when tf.Variable is called; an explicit initialization op has to be run first, as the following example shows.

    import tensorflow as tf
    W = tf.Variable([.3], tf.float32)
    b = tf.Variable([-.3], tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
    
    

    The output is the value of W * x + b, namely [ 0.          0.30000001  0.60000002  0.90000004]. Next, let's see how to use a loss function.

    import tensorflow as tf
    
    W = tf.Variable([.3], tf.float32)
    b = tf.Variable([-.3], tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    y = tf.placeholder(tf.float32)
    squared_deltas = tf.square(linear_model - y)
    loss = tf.reduce_sum(squared_deltas)
    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # the loss is 23.66
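    
    To see where 23.66 comes from: with W = 0.3 and b = -0.3 the model predicts [0, 0.3, 0.6, 0.9]; the squared differences from y = [0, -1, -2, -3] are [0, 1.69, 6.76, 15.21], which sum to 23.66.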
    
    

    Let's change the values of W and b and look at the loss again.

    import tensorflow as tf
    W = tf.Variable([.3], tf.float32)
    b = tf.Variable([-.3], tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    
    y = tf.placeholder(tf.float32)
    squared_deltas = tf.square(linear_model - y)
    loss = tf.reduce_sum(squared_deltas)
    
    fixW = tf.assign(W, [-1.])
    fixb = tf.assign(b, [1.])
    sess.run([fixW, fixb])
    print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # the loss is 0
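    
    With W = -1 and b = 1 the model computes -x + 1, which reproduces y = [0, -1, -2, -3] exactly, so the loss is 0. Note that tf.assign only builds assignment ops; the new values take effect when sess.run([fixW, fixb]) executes them.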
    

    Training

    Now we want to solve the following problem:
    given the vectors x = (1, 2, 3, 4) and y = (0, -1, -2, -3), where W and b are scalars, find W and b such that y = Wx + b. (Since y = -x + 1 fits the data exactly, the expected answer is W = -1, b = 1.)

    import tensorflow as tf
    
    W = tf.Variable([.3], tf.float32)
    b = tf.Variable([-.3], tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    y = tf.placeholder(tf.float32)
    loss = tf.reduce_sum(tf.square(linear_model - y))
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)
    x_train = [1,2,3,4]
    y_train = [0,-1,-2,-3]
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    for i in range(1000):
        sess.run(train, {x: x_train, y: y_train})
        # print(sess.run(loss, {x: x_train, y: y_train}))  # uncomment to print the loss each step
    
    curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
    print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
    

    The code above finds W and b by gradient descent. Now let's check how far Wx + b is from y.

    import tensorflow as tf
    
    W = tf.constant([-0.9999969], tf.float32)
    b = tf.constant([0.99999082], tf.float32)
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
    

    The output is

    [ -6.07967377e-06  -1.00000298e+00  -1.99999988e+00  -2.99999666e+00]
    

    This is already very close to (0, -1, -2, -3).
    The training above uses the TensorFlow Core methods. The higher-level API (tf.contrib.learn) could be used instead, but because that approach is rather abstract it tends to distract a beginner, so it will be covered later.

    To do

    • Train with the higher-level API (tf.contrib.learn)

    References

    1. http://cs231n.github.io/
    2. The official TensorFlow tutorials
