• Improving Deep Neural Networks


    From the first-week assignment of Andrew Ng's deep learning course "Improving Deep Neural Networks". If reading the code directly is difficult, see:
    https://blog.csdn.net/u013733326/article/details/79847918
    Three initialization methods are implemented here: all zeros, large random weights, and He initialization, with plots comparing their effects.
    See https://github.com/Hongze-Wang/Deep-Learning-Andrew-Ng/tree/master/homework for the full version.

    import numpy as np
    import matplotlib.pyplot as plt
    import sklearn
    import sklearn.datasets
    from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
    from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
    
    %matplotlib inline
    plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
    plt.rcParams['image.interpolation'] = 'nearest'
    plt.rcParams['image.cmap'] = 'gray'
    
    # load image dataset: blue/red dots in circles
    train_X, train_Y, test_X, test_Y = load_dataset()
    

    [Figure: the training set, blue and red dots in circles]

    def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
        """
        Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
        
        Arguments:
        X -- input data, of shape (2, number of examples)
        Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
        learning_rate -- learning rate for gradient descent 
        num_iterations -- number of iterations to run gradient descent
        print_cost -- if True, print the cost every 1000 iterations
        initialization -- flag to choose which initialization to use ("zeros","random" or "he")
        
        Returns:
        parameters -- parameters learnt by the model
        """
            
        grads = {}
        costs = [] # to keep track of the loss
        m = X.shape[1] # number of examples
        layers_dims = [X.shape[0], 10, 5, 1]
        
        # Initialize parameters dictionary.
        if initialization == "zeros":
            parameters = initialize_parameters_zeros(layers_dims)
        elif initialization == "random":
            parameters = initialize_parameters_random(layers_dims)
        elif initialization == "he":
            parameters = initialize_parameters_he(layers_dims)
    
        # Loop (gradient descent)
    
        for i in range(0, num_iterations):
    
            # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
            a3, cache = forward_propagation(X, parameters)
            
            # Loss
            cost = compute_loss(a3, Y)
    
            # Backward propagation.
            grads = backward_propagation(X, Y, cache)
            
            # Update parameters.
            parameters = update_parameters(parameters, grads, learning_rate)
            
            # Print the loss every 1000 iterations
            if print_cost and i % 1000 == 0:
                print("Cost after iteration {}: {}".format(i, cost))
                costs.append(cost)
                
        # plot the loss
        plt.plot(costs)
        plt.ylabel('cost')
        plt.xlabel('iterations (per thousands)')  # costs are recorded every 1000 iterations
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()
        
        return parameters
    
    # GRADED FUNCTION: initialize_parameters_zeros 
    
    def initialize_parameters_zeros(layers_dims):
        """
        Arguments:
        layers_dims -- python array (list) containing the size of each layer.
        
        Returns:
        parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                        W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                        b1 -- bias vector of shape (layers_dims[1], 1)
                        ...
                        WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                        bL -- bias vector of shape (layers_dims[L], 1)
        """
        
        parameters = {}
        L = len(layers_dims)            # number of layers in the network
        
        for l in range(1, L):
            ### START CODE HERE ### (≈ 2 lines of code)
            parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
            parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
            ### END CODE HERE ###
        return parameters
    
    parameters = initialize_parameters_zeros([3,2,1])
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
    
    W1 = [[0. 0. 0.]
     [0. 0. 0.]]
    b1 = [[0.]
     [0.]]
    W2 = [[0. 0.]]
    b2 = [[0.]]
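    Before even training, a minimal sketch (independent of the init_utils helpers) shows why all-zero parameters are hopeless: every pre-activation is zero, every ReLU output is zero, and the final sigmoid outputs 0.5 for every example, so there is no asymmetry for gradient descent to amplify.

    import numpy as np

    # toy forward pass with the same layer sizes as the model above ([2, 10, 5, 1])
    X = np.random.randn(2, 5)                      # 5 arbitrary examples
    W1, b1 = np.zeros((10, 2)), np.zeros((10, 1))
    W2, b2 = np.zeros((5, 10)), np.zeros((5, 1))
    W3, b3 = np.zeros((1, 5)), np.zeros((1, 1))

    A1 = np.maximum(0, W1 @ X + b1)                # all zeros
    A2 = np.maximum(0, W2 @ A1 + b2)               # all zeros
    A3 = 1 / (1 + np.exp(-(W3 @ A2 + b3)))         # 0.5 for every example
    print(A3)                                      # [[0.5 0.5 0.5 0.5 0.5]]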
    
    parameters = model(train_X, train_Y, initialization = "zeros")
    print ("On the train set:")
    predictions_train = predict(train_X, train_Y, parameters)
    print ("On the test set:")
    predictions_test = predict(test_X, test_Y, parameters)
    
    Cost after iteration 0: 0.6931471805599453
    Cost after iteration 1000: 0.6931471805599453
    Cost after iteration 2000: 0.6931471805599453
    Cost after iteration 3000: 0.6931471805599453
    Cost after iteration 4000: 0.6931471805599453
    Cost after iteration 5000: 0.6931471805599453
    Cost after iteration 6000: 0.6931471805599453
    Cost after iteration 7000: 0.6931471805599453
    Cost after iteration 8000: 0.6931471805599453
    Cost after iteration 9000: 0.6931471805599453
    Cost after iteration 10000: 0.6931471805599455
    Cost after iteration 11000: 0.6931471805599453
    Cost after iteration 12000: 0.6931471805599453
    Cost after iteration 13000: 0.6931471805599453
    Cost after iteration 14000: 0.6931471805599453
    

    [Figure: cost vs. iterations for zeros initialization, learning rate 0.01]

    On the train set:
    Accuracy: 0.5
    On the test set:
    Accuracy: 0.5
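    The stuck cost is exactly what the sketch above predicts: a3 = sigmoid(0) = 0.5 for every example, and the cross-entropy of a 0.5 prediction is ln 2 no matter what the label is.

    import numpy as np
    print(-np.log(0.5))   # 0.6931471805599453, the value printed at every iteration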
    
    print ("predictions_train = " + str(predictions_train))
    print ("predictions_test = " + str(predictions_test))
    
    predictions_train = [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0]]
    predictions_test = [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
    
    plt.title("Model with Zeros initialization")
    axes = plt.gca()
    axes.set_xlim([-1.5,1.5])
    axes.set_ylim([-1.5,1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, np.squeeze(train_Y))
    

    [Figure: decision boundary of the model with zeros initialization]

    # GRADED FUNCTION: initialize_parameters_random
    
    def initialize_parameters_random(layers_dims):
        """
        Arguments:
        layers_dims -- python array (list) containing the size of each layer.
        
        Returns:
        parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                        W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                        b1 -- bias vector of shape (layers_dims[1], 1)
                        ...
                        WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                        bL -- bias vector of shape (layers_dims[L], 1)
        """
        
        np.random.seed(3)               # This seed makes sure your "random" numbers will be the same as ours
        parameters = {}
        L = len(layers_dims)            # integer representing the number of layers
        
        for l in range(1, L):
            ### START CODE HERE ### (≈ 2 lines of code)
            parameters['W' + str(l)] = np.random.randn(layers_dims[l],layers_dims[l-1]) * 10
            parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
            ### END CODE HERE ###
    
        return parameters
    
    parameters = initialize_parameters_random([3, 2, 1])
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
    
    W1 = [[ 17.88628473   4.36509851   0.96497468]
     [-18.63492703  -2.77388203  -3.54758979]]
    b1 = [[0.]
     [0.]]
    W2 = [[-0.82741481 -6.27000677]]
    b2 = [[0.]]
    
    parameters = model(train_X, train_Y, initialization = "random")
    print ("On the train set:")
    predictions_train = predict(train_X, train_Y, parameters)
    print ("On the test set:")
    predictions_test = predict(test_X, test_Y, parameters)
    
    C:\Users\wangh\init_utils.py:145: RuntimeWarning: divide by zero encountered in log
      logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
    C:\Users\wangh\init_utils.py:145: RuntimeWarning: invalid value encountered in multiply
      logprobs = np.multiply(-np.log(a3),Y) + np.multiply(-np.log(1 - a3), 1 - Y)
    Cost after iteration 0: inf
    Cost after iteration 1000: 0.6250982793959966
    Cost after iteration 2000: 0.5981216596703697
    Cost after iteration 3000: 0.5638417572298645
    Cost after iteration 4000: 0.5501703049199763
    Cost after iteration 5000: 0.5444632909664456
    Cost after iteration 6000: 0.5374513807000807
    Cost after iteration 7000: 0.4764042074074983
    Cost after iteration 8000: 0.39781492295092263
    Cost after iteration 9000: 0.3934764028765484
    Cost after iteration 10000: 0.3920295461882659
    Cost after iteration 11000: 0.38924598135108
    Cost after iteration 12000: 0.3861547485712325
    Cost after iteration 13000: 0.384984728909703
    Cost after iteration 14000: 0.3827828308349524
    

    [Figure: cost vs. iterations for large random initialization, learning rate 0.01]

    On the train set:
    Accuracy: 0.83
    On the test set:
    Accuracy: 0.86
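    The inf at iteration 0 and the RuntimeWarnings come from saturation: with the 10x scale, the pre-activation feeding the sigmoid can be so large that a3 rounds to exactly 1.0 (or 0.0) in float64, making -log(1 - a3) infinite. A quick check, separate from the assignment code:

    import numpy as np

    # exp(-40) ~ 4e-18 is below float64 machine epsilon, so sigmoid(40) rounds to 1.0
    a3 = 1 / (1 + np.exp(-40.0))
    print(a3 == 1.0)                # True
    with np.errstate(divide='ignore'):
        print(-np.log(1 - a3))      # inf, the source of the warnings above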
    
    print (predictions_train)
    print (predictions_test)
    
    [[1 0 1 1 0 0 1 1 1 1 1 0 1 0 0 1 0 1 1 0 0 0 1 0 1 1 1 1 1 1 0 1 1 0 0 1
      1 1 1 1 1 1 1 0 1 1 1 1 0 1 0 1 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 1 1 1 1 0
      0 0 0 0 1 0 1 0 1 1 1 0 0 1 1 1 1 1 1 0 0 1 1 1 0 1 1 0 1 0 1 1 0 1 1 0
      1 0 1 1 0 0 1 0 0 1 1 0 1 1 1 0 1 0 0 1 0 1 1 1 1 1 1 1 0 1 1 0 0 1 1 0
      0 0 1 0 1 0 1 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 0 1 1 0 1 0 1 1 1 1 0 1 1 1
      1 0 1 0 1 0 1 1 1 1 0 1 1 0 1 1 0 1 1 0 1 0 1 1 1 0 1 1 1 0 1 0 1 0 0 1
      0 1 1 0 1 1 0 1 1 0 1 1 1 0 1 1 1 1 0 1 0 0 1 1 0 1 1 1 0 0 0 1 1 0 1 1
      1 1 0 1 1 0 1 1 1 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 1 1 1
      1 1 1 1 0 0 0 1 1 1 1 0]]
    [[1 1 1 1 0 1 0 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 1 0 1 0 1 1 1 1 1 0 0 0 0 1
      0 1 1 0 0 1 1 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 1 0 1 1 1 1 1 1 1 1 1 0 1 0
      1 1 1 1 1 0 1 0 0 1 0 0 0 1 1 0 1 1 0 0 0 1 1 0 1 1 0 0]]
    
    plt.title("Model with large random initialization")
    axes = plt.gca()
    axes.set_xlim([-1.5,1.5])
    axes.set_ylim([-1.5,1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, np.squeeze(train_Y))
    

    [Figure: decision boundary of the model with large random initialization]

    # GRADED FUNCTION: initialize_parameters_he
    
    def initialize_parameters_he(layers_dims):
        """
        Arguments:
        layers_dims -- python array (list) containing the size of each layer.
        
        Returns:
        parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                        W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                        b1 -- bias vector of shape (layers_dims[1], 1)
                        ...
                        WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                        bL -- bias vector of shape (layers_dims[L], 1)
        """
        
        np.random.seed(3)
        parameters = {}
        L = len(layers_dims) - 1 # integer representing the number of layers
         
        for l in range(1, L + 1):
            ### START CODE HERE ### (≈ 2 lines of code)
            parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1])
            parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
            ### END CODE HERE ###
            
        return parameters
    
    parameters = initialize_parameters_he([2, 4, 1])
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
    
    W1 = [[ 1.78862847  0.43650985]
     [ 0.09649747 -1.8634927 ]
     [-0.2773882  -0.35475898]
     [-0.08274148 -0.62700068]]
    b1 = [[0.]
     [0.]
     [0.]
     [0.]]
    W2 = [[-0.03098412 -0.33744411 -0.92904268  0.62552248]]
    b2 = [[0.]]
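    A rough way to see what the sqrt(2 / layers_dims[l-1]) factor buys: with W ~ N(0, 2/n_in) and zero-mean inputs, the mean squared activation after a ReLU layer stays close to that of its input, so signals neither explode nor vanish with depth. A quick empirical sketch (the dimensions are arbitrary, chosen only for illustration):

    import numpy as np

    # He scaling keeps E[a^2] after relu(W @ x) close to E[x^2]
    rng = np.random.default_rng(0)
    n_in, n_out, m = 500, 500, 2000
    X = rng.standard_normal((n_in, m))
    W = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
    A = np.maximum(0, W @ X)
    print(np.mean(X**2), np.mean(A**2))   # both close to 1.0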
    
    parameters = model(train_X, train_Y, initialization = "he")
    print ("On the train set:")
    predictions_train = predict(train_X, train_Y, parameters)
    print ("On the test set:")
    predictions_test = predict(test_X, test_Y, parameters)
    
    Cost after iteration 0: 0.8830537463419761
    Cost after iteration 1000: 0.6879825919728063
    Cost after iteration 2000: 0.6751286264523371
    Cost after iteration 3000: 0.6526117768893807
    Cost after iteration 4000: 0.6082958970572937
    Cost after iteration 5000: 0.5304944491717495
    Cost after iteration 6000: 0.4138645817071793
    Cost after iteration 7000: 0.3117803464844441
    Cost after iteration 8000: 0.23696215330322556
    Cost after iteration 9000: 0.18597287209206828
    Cost after iteration 10000: 0.15015556280371808
    Cost after iteration 11000: 0.12325079292273548
    Cost after iteration 12000: 0.09917746546525937
    Cost after iteration 13000: 0.08457055954024274
    Cost after iteration 14000: 0.07357895962677366
    

    [Figure: cost vs. iterations for He initialization, learning rate 0.01]

    On the train set:
    Accuracy: 0.9933333333333333
    On the test set:
    Accuracy: 0.96
    
    plt.title("Model with He initialization")
    axes = plt.gca()
    axes.set_xlim([-1.5,1.5])
    axes.set_ylim([-1.5,1.5])
    plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, np.squeeze(train_Y))
    

    [Figure: decision boundary of the model with He initialization]

    Conclusion

    • 3-layer NN with zeros initialization
      fails to break symmetry (the predictions are all zero; "symmetry" here means the neurons in each layer all compute the same linear function)
      Accuracy: 50%

    • 3-layer NN with large random initialization
      too large weights (weights that are too large lead to inaccurate predictions)
      Accuracy: 83%

    • 3-layer NN with He initialization
      recommended method
      Accuracy: 99%

  • Original post: https://www.cnblogs.com/wanghongze95/p/13842538.html