• TensorFlow 1.0 linear regression


    import tensorflow as tf
    import numpy as np

    # Create 100 noise-free samples from the line y = 0.1 * x + 0.3
    x_data = np.random.rand(100).astype(np.float32)
    y_data = x_data * 0.1 + 0.3

    # Model parameters: a single trainable weight and bias
    Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    biases = tf.Variable(tf.zeros([1]))

    # Linear model and mean-squared-error loss
    y = Weights * x_data + biases
    loss = tf.reduce_mean(tf.square(y - y_data))

    # Minimize the loss by gradient descent with learning rate 0.5
    optimizer = tf.train.GradientDescentOptimizer(0.5)
    train = optimizer.minimize(loss)
    init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated

    sess = tf.Session()
    sess.run(init)

    # Run 201 training steps, printing the fitted parameters every 20 steps
    for step in range(201):
        sess.run(train)
        if step % 20 == 0:
            print(step, sess.run(Weights), sess.run(biases))
    
    
    0 [0.7417692] [-0.07732911]
    20 [0.30772722] [0.18689097]
    40 [0.16603212] [0.26404503]
    60 [0.12099022] [0.28857067]
    80 [0.10667235] [0.29636687]
    100 [0.10212099] [0.2988451]
    120 [0.10067423] [0.29963288]
    140 [0.10021434] [0.2998833]
    160 [0.10006816] [0.2999629]
    180 [0.10002167] [0.2999882]
    200 [0.10000689] [0.29999626]
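As the log shows, the weight converges to the true slope 0.1 and the bias to the true intercept 0.3. For readers without TensorFlow 1.x installed, the same fit can be sketched in plain NumPy. This is a hand-written batch gradient descent with the analytic gradients of the mean-squared-error loss, matching what `GradientDescentOptimizer(0.5)` computes for this model; it is an illustrative sketch, not TensorFlow's internal code.

```python
import numpy as np

np.random.seed(0)  # fixed seed so the run is reproducible

# Same synthetic data as the TensorFlow example
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Parameters initialized like the TF version: uniform weight, zero bias
w = np.random.uniform(-1.0, 1.0)
b = 0.0
lr = 0.5  # learning rate, matching GradientDescentOptimizer(0.5)

for step in range(201):
    pred = w * x_data + b
    err = pred - y_data
    # Gradients of mean((w*x + b - y)^2) with respect to w and b
    grad_w = 2.0 * np.mean(err * x_data)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # both should approach 0.1 and 0.3
```

After 201 steps the parameters land very close to the true values, mirroring the TensorFlow run above.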
    

      

    Thinking things through is also a form of effort: make sound analyses and choices, because our time and energy are limited, so spend them where they are most valuable.
  • Original article: https://www.cnblogs.com/LiuXinyu12378/p/12495344.html