• Machine Learning 01: The Gradient Descent Algorithm


    Key points of the gradient descent algorithm

    • The mean-squared-error loss function for gradient descent:
      $\text{loss} = \frac{1}{N}\sum_{i=1}^{N}\left(w x_i + b - y_i\right)^2$
    • The iterative parameter-update rule (with learning rate $\eta$):
      $w' = w - \eta \cdot \frac{\partial\,\text{loss}}{\partial w}, \qquad b' = b - \eta \cdot \frac{\partial\,\text{loss}}{\partial b}$
    • The partial derivatives with respect to the two parameters:
      $\frac{\partial\,\text{loss}}{\partial w} = \frac{2}{N}\sum_{i=1}^{N} x_i \left(w x_i + b - y_i\right)$
      $\frac{\partial\,\text{loss}}{\partial b} = \frac{2}{N}\sum_{i=1}^{N} \left(w x_i + b - y_i\right)$
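    These per-sample gradients follow from a one-step chain-rule calculation. As a quick check (my addition, not part of the original post):
      $\frac{\partial}{\partial b}\left(w x_i + b - y_i\right)^2 = 2\left(w x_i + b - y_i\right), \qquad \frac{\partial}{\partial w}\left(w x_i + b - y_i\right)^2 = 2 x_i \left(w x_i + b - y_i\right)$
    Averaging these over the $N$ points gives the two gradients used in the code below.
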
    import numpy as np
    
    # Optional: generate 100 synthetic (x, y) samples around the line
    # y = 1.477x + 0.089 with Gaussian noise (mean=0, std=0.1)
    # data = []
    # for i in range(100):
    #     x = np.random.uniform(3., 12.)
    #     eps = np.random.normal(0., 0.1)
    #     y = 1.477 * x + 0.089 + eps
    #     data.append([x, y])
    # data = np.array(data)
    # print(data.shape, data)
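    # Note (my assumption, not in the original post): run() below expects a
    # data.csv of comma-separated (x, y) rows; the synthetic data above could
    # be written out with np.savetxt("data.csv", data, delimiter=",").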
    
    # Compute the MSE loss of the line y = wx + b over the given points
    def compute_error_for_line_given_points(b, w, points):
        totalError = 0
        for i in range(0, len(points)):
            x = points[i, 0]
            y = points[i, 1]
            # accumulate the squared error of this point
            totalError += (y - (w * x + b)) ** 2
        # average the loss over all points
        return totalError / float(len(points))
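
    # A vectorized equivalent (my sketch, not part of the original post):
    # NumPy broadcasting computes the same mean-squared error without the loop.
    def compute_error_vectorized(b, w, points):
        x, y = points[:, 0], points[:, 1]
        return float(np.mean((y - (w * x + b)) ** 2))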
    
    
    # One step of gradient descent: compute the averaged gradients and update b, w
    def step_gradient(b_current, w_current, points, learningRate):
        b_gradient = 0
        w_gradient = 0
        N = float(len(points))
        for i in range(0, len(points)):
            x = points[i, 0]
            y = points[i, 1]
            # these gradients implement the formulas derived in the text above
            # grad_b = 2(wx+b-y)
            b_gradient += (2/N) * ((w_current * x + b_current) - y)
            # grad_w = 2(wx+b-y)*x
            w_gradient += (2/N) * x * ((w_current * x + b_current) - y)
        # step b and w against their gradients, scaled by the learning rate
        new_b = b_current - (learningRate * b_gradient)
        new_w = w_current - (learningRate * w_gradient)
        return [new_b, new_w]
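
    # A vectorized version of the same update step (my sketch, assuming points
    # is an (N, 2) NumPy array; not part of the original post).
    def step_gradient_vectorized(b_current, w_current, points, learningRate):
        x, y = points[:, 0], points[:, 1]
        error = (w_current * x + b_current) - y
        b_gradient = 2 * np.mean(error)
        w_gradient = 2 * np.mean(x * error)
        return [b_current - learningRate * b_gradient,
                w_current - learningRate * w_gradient]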
    
    def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
        b = starting_b
        w = starting_w
        # run num_iterations update steps
        for i in range(num_iterations):
            b, w = step_gradient(b, w, np.array(points), learning_rate)
        return [b, w]
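
    # Variant runner that also records the loss after every step (my addition,
    # handy for verifying that the loss actually decreases as training runs).
    def gradient_descent_with_history(points, b, w, learning_rate, num_iterations):
        history = []
        for i in range(num_iterations):
            b, w = step_gradient(b, w, np.array(points), learning_rate)
            history.append(compute_error_for_line_given_points(b, w, points))
        return b, w, history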
    
    
    def run():
        # load the comma-separated (x, y) pairs
        points = np.genfromtxt("data.csv", delimiter=",")
        learning_rate = 0.0001
        initial_b = 0 # initial y-intercept guess
        initial_w = 0 # initial slope guess
        num_iterations = 1000
        print("Starting gradient descent at b = {0}, w = {1}, error = {2}"
              .format(initial_b, initial_w,
                      compute_error_for_line_given_points(initial_b, initial_w, points))
              )
        print("Running...")
        [b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
        print("After {0} iterations b = {1}, w = {2}, error = {3}".
              format(num_iterations, b, w,
                     compute_error_for_line_given_points(b, w, points))
              )
    
    if __name__ == '__main__':
        run()
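
    To sanity-check the learned parameters, they can be compared against the closed-form least-squares fit. This is a sketch I am adding (assuming data.csv holds the same comma-separated (x, y) rows the script reads):

    import numpy as np

    # Closed-form least-squares line fit; np.polyfit returns the coefficients
    # from highest degree to lowest, i.e. [w, b] for a degree-1 polynomial.
    points = np.genfromtxt("data.csv", delimiter=",")
    w_ls, b_ls = np.polyfit(points[:, 0], points[:, 1], deg=1)
    print("least squares: w = {0}, b = {1}".format(w_ls, b_ls))

    With the synthetic generator above, the fit should recover values close to the true w = 1.477 and b = 0.089.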
    
  • Original post: https://www.cnblogs.com/SiriusZHT/p/14310775.html