• Sparse Autoencoder (Part 1)


    Neural Networks

    We will use the following diagram to denote a single neuron:

    [Figure: SingleNeuron.png — a single neuron]

    This "neuron" is a computational unit that takes as input x1,x2,x3 (and a +1 intercept term), and outputs 	extstyle h_{W,b}(x) = f(W^Tx) = f(sum_{i=1}^3 W_{i}x_i +b), where f : Re mapsto Re is called the activation function. In these notes, we will choose f(cdot) to be the sigmoid function:

    
f(z) = \frac{1}{1 + \exp(-z)}.

    Thus, our single neuron corresponds exactly to the input-output mapping defined by logistic regression.

    Although these notes will use the sigmoid function, it is worth noting that another common choice for f is the hyperbolic tangent, or tanh, function:

    
f(z) = \tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}.

    Here are plots of the sigmoid and tanh functions:

    [Plots: the sigmoid activation function and the tanh activation function.]

    Finally, one identity that'll be useful later: if f(z) = 1/(1 + \exp(-z)) is the sigmoid function, then its derivative is given by f'(z) = f(z)(1 - f(z)).

    Either the sigmoid function or the tanh function can be used to provide the nonlinear mapping.
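    As a concrete illustration, here is a minimal NumPy sketch of both activation functions and of the derivative identity f'(z) = f(z)(1 - f(z)); the function names sigmoid and sigmoid_prime are my own, not from the notes:

import numpy as np

def sigmoid(z):
    # f(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative via the identity f'(z) = f(z) * (1 - f(z)).
    s = sigmoid(z)
    return s * (1.0 - s)

z = np.linspace(-5.0, 5.0, 11)
print(sigmoid(z))    # values in (0, 1)
print(np.tanh(z))    # values in (-1, 1)

# Check the identity against a centered numerical derivative.
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(np.allclose(numeric, sigmoid_prime(z), atol=1e-8))   # True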

    Neural Network model

    A neural network is put together by hooking together many of our simple "neurons," so that the output of a neuron can be the input of another. For example, here is a small neural network:

    [Figure: Network331.png — a small neural network with 3 input units, 3 hidden units, and 1 output unit]

    In this figure, we have used circles to also denote the inputs to the network. The circles labeled "+1" are called bias units, and correspond to the intercept term. The leftmost layer of the network is called the input layer, and the rightmost layer the output layer (which, in this example, has only one node). The middle layer of nodes is called the hidden layer, because its values are not observed in the training set. We also say that our example neural network has 3 input units (not counting the bias unit), 3 hidden units, and 1 output unit.

    Our neural network has parameters (W,b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}), where we write W^{(l)}_{ij} to denote the parameter (or weight) associated with the connection between unit j in layer l, and unit i in layer l+1. (Note the order of the indices.) Also, b^{(l)}_i is the bias associated with unit i in layer l+1.

    We will write a^{(l)}_i to denote the activation (meaning output value) of unit i in layer l. For l = 1, we also use a^{(1)}_i = x_i to denote the i-th input. Given a fixed setting of the parameters W,b, our neural network defines a hypothesis h_{W,b}(x) that outputs a real number. Specifically, the computation that this neural network represents is given by:

    
\begin{align}
a_1^{(2)} &= f(W_{11}^{(1)} x_1 + W_{12}^{(1)} x_2 + W_{13}^{(1)} x_3 + b_1^{(1)}) \\
a_2^{(2)} &= f(W_{21}^{(1)} x_1 + W_{22}^{(1)} x_2 + W_{23}^{(1)} x_3 + b_2^{(1)}) \\
a_3^{(2)} &= f(W_{31}^{(1)} x_1 + W_{32}^{(1)} x_2 + W_{33}^{(1)} x_3 + b_3^{(1)}) \\
h_{W,b}(x) &= a_1^{(3)} = f(W_{11}^{(2)} a_1^{(2)} + W_{12}^{(2)} a_2^{(2)} + W_{13}^{(2)} a_3^{(2)} + b_1^{(2)})
\end{align}

    Each layer computes a linear combination of its inputs followed by a nonlinear mapping.

    In the sequel, we also let z^{(l)}_i denote the total weighted sum of inputs to unit i in layer l, including the bias term (e.g., z_i^{(2)} = \sum_{j=1}^n W^{(1)}_{ij} x_j + b^{(1)}_i), so that a^{(l)}_i = f(z^{(l)}_i).

    Note that this easily lends itself to a more compact notation. Specifically, if we extend the activation function f(\cdot) to apply to vectors in an element-wise fashion (i.e., f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]), then we can write the equations above more compactly as:

\begin{align}
z^{(2)} &= W^{(1)} x + b^{(1)} \\
a^{(2)} &= f(z^{(2)}) \\
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
h_{W,b}(x) &= a^{(3)} = f(z^{(3)})
\end{align}

    We call this step forward propagation.
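    For concreteness, here is a minimal NumPy sketch of this forward pass for the 3-3-1 network above; the weight values below are arbitrary small random placeholders chosen only for illustration:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes: 3 input units, 3 hidden units, 1 output unit.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.01, size=(3, 3))   # W^{(1)}: layer 1 -> layer 2
b1 = np.zeros(3)                          # b^{(1)}
W2 = rng.normal(0.0, 0.01, size=(1, 3))   # W^{(2)}: layer 2 -> layer 3
b2 = np.zeros(1)                          # b^{(2)}

x = np.array([0.5, -1.0, 2.0])            # one input example

z2 = W1 @ x + b1      # z^{(2)} = W^{(1)} x + b^{(1)}
a2 = sigmoid(z2)      # a^{(2)} = f(z^{(2)})
z3 = W2 @ a2 + b2     # z^{(3)} = W^{(2)} a^{(2)} + b^{(2)}
h = sigmoid(z3)       # h_{W,b}(x) = a^{(3)} = f(z^{(3)})
print(h)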

    Backpropagation Algorithm

    For a single training example (x,y), we define the cost function with respect to that single example to be:

    
\begin{align}
J(W,b; x,y) = \frac{1}{2} \left\| h_{W,b}(x) - y \right\|^2.
\end{align}

    This is a (one-half) squared-error cost function. Given a training set of m examples, we then define the overall cost function to be:

    
\begin{align}
J(W,b)
&= \left[ \frac{1}{m} \sum_{i=1}^m J(W,b; x^{(i)}, y^{(i)}) \right]
   + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2 \\
&= \left[ \frac{1}{m} \sum_{i=1}^m \left( \frac{1}{2} \left\| h_{W,b}(x^{(i)}) - y^{(i)} \right\|^2 \right) \right]
   + \frac{\lambda}{2} \sum_{l=1}^{n_l-1} \; \sum_{i=1}^{s_l} \; \sum_{j=1}^{s_{l+1}} \left( W^{(l)}_{ji} \right)^2
\end{align}

    J(W,b;x,y) is the squared error cost with respect to a single example; J(W,b) is the overall cost function, which includes the weight decay term.
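    The overall cost can be coded directly from this definition. Below is a sketch under the same 3-3-1 sigmoid network as before; the helper names forward and cost are mine, not from the notes:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2, x):
    a2 = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ a2 + b2)

def cost(W1, b1, W2, b2, X, Y, lam):
    # X: (m, 3) inputs, Y: (m, 1) targets, lam: weight decay parameter.
    m = X.shape[0]
    # Average of the per-example one-half squared errors.
    err = sum(0.5 * np.sum((forward(W1, b1, W2, b2, X[i]) - Y[i]) ** 2)
              for i in range(m)) / m
    # Weight decay over all W terms (the bias terms b are not decayed).
    decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return err + decay

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.01, (3, 3)), np.zeros(3)
W2, b2 = rng.normal(0.0, 0.01, (1, 3)), np.zeros(1)
X, Y = rng.normal(size=(5, 3)), rng.uniform(size=(5, 1))
print(cost(W1, b1, W2, b2, X, Y, lam=1e-4))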

    Our goal is to minimize J(W,b) as a function of W and b. To train our neural network, we will initialize each parameter W^{(l)}_{ij} and each b^{(l)}_i to a small random value near zero (say, according to a Normal(0, \epsilon^2) distribution for some small \epsilon, say 0.01), and then apply an optimization algorithm such as batch gradient descent. Finally, note that it is important to initialize the parameters randomly, rather than to all 0's. If all the parameters start off at identical values, then all the hidden layer units will end up learning the same function of the input (more formally, W^{(1)}_{ij} will be the same for all values of i, so that a^{(2)}_1 = a^{(2)}_2 = a^{(2)}_3 = \ldots for any input x). The random initialization serves the purpose of symmetry breaking.
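    A minimal sketch of this initialization, using \epsilon = 0.01 as suggested above and the parameter shapes of the 3-3-1 example network:

import numpy as np

epsilon = 0.01
rng = np.random.default_rng()

# Small random values break the symmetry between hidden units; an
# all-zero initialization would make every hidden unit compute the
# same function of the input.
W1 = rng.normal(loc=0.0, scale=epsilon, size=(3, 3))   # Normal(0, epsilon^2)
b1 = rng.normal(loc=0.0, scale=epsilon, size=3)
W2 = rng.normal(loc=0.0, scale=epsilon, size=(1, 3))
b2 = rng.normal(loc=0.0, scale=epsilon, size=1)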

    One iteration of gradient descent updates the parameters W,b as follows (here \alpha is the learning rate):

    
\begin{align}
W_{ij}^{(l)} &= W_{ij}^{(l)} - \alpha \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) \\
b_{i}^{(l)} &= b_{i}^{(l)} - \alpha \frac{\partial}{\partial b_{i}^{(l)}} J(W,b)
\end{align}
    
    The gradient of the overall cost function is obtained by averaging the per-example gradients over the training set and adding the weight decay term:

\begin{align}
\frac{\partial}{\partial W_{ij}^{(l)}} J(W,b) &=
\left[ \frac{1}{m} \sum_{i=1}^m \frac{\partial}{\partial W_{ij}^{(l)}} J(W,b; x^{(i)}, y^{(i)}) \right] + \lambda W_{ij}^{(l)} \\
\frac{\partial}{\partial b_{i}^{(l)}} J(W,b) &=
\frac{1}{m} \sum_{i=1}^m \frac{\partial}{\partial b_{i}^{(l)}} J(W,b; x^{(i)}, y^{(i)})
\end{align}

    The two lines above differ slightly because weight decay is applied to W but not b.
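    As a small illustration of these two formulas: given per-example gradients (below just random placeholder arrays standing in for the output of backpropagation on each (x^{(i)}, y^{(i)})), the overall gradient is their average, with the decay term \lambda W added for W only:

import numpy as np

rng = np.random.default_rng(0)
m, lam = 5, 1e-4
W1 = rng.normal(0.0, 0.01, size=(3, 3))

# Placeholder per-example gradients, shape (m, ...); in practice these
# come from backpropagation on each training example.
gradW1_per_example = rng.normal(size=(m, 3, 3))
gradb1_per_example = rng.normal(size=(m, 3))

gradW1 = gradW1_per_example.mean(axis=0) + lam * W1   # weight decay on W
gradb1 = gradb1_per_example.mean(axis=0)              # no decay on b
print(gradW1.shape, gradb1.shape)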

    The intuition behind the backpropagation algorithm is as follows. Given a training example (x,y), we will first run a "forward pass" to compute all the activations throughout the network, including the output value of the hypothesis h_{W,b}(x). Then, for each node i in layer l, we would like to compute an "error term" \delta^{(l)}_i that measures how much that node was "responsible" for any errors in our output.

    For an output node, we can directly measure the difference between the network's activation and the true target value, and use that to define \delta^{(n_l)}_i (where layer n_l is the output layer). For hidden units, we will compute \delta^{(l)}_i based on a weighted average of the error terms of the nodes that use a^{(l)}_i as an input. In detail, here is the backpropagation algorithm:

    1. Perform a feedforward pass, computing the activations for layers L_2, L_3, and so on up to the output layer L_{n_l}.

    2. For each output unit i in layer n_l (the output layer), set

    
\begin{align}
\delta^{(n_l)}_i
= \frac{\partial}{\partial z^{(n_l)}_i} \;\;
        \frac{1}{2} \left\| y - h_{W,b}(x) \right\|^2
= - (y_i - a^{(n_l)}_i) \cdot f'(z^{(n_l)}_i)
\end{align}

    3. For l = n_l-1, n_l-2, n_l-3, \ldots, 2

    For each node i in layer l, set

\delta^{(l)}_i = \left( \sum_{j=1}^{s_{l+1}} W^{(l)}_{ji} \delta^{(l+1)}_j \right) f'(z^{(l)}_i)

    4. Compute the desired partial derivatives, which are given as:

\begin{align}
\frac{\partial}{\partial W_{ij}^{(l)}} J(W,b; x, y) &= a^{(l)}_j \delta_i^{(l+1)} \\
\frac{\partial}{\partial b_{i}^{(l)}} J(W,b; x, y) &= \delta_i^{(l+1)}.
\end{align}

    We will use "\bullet" to denote the element-wise product operator (denoted ".*" in Matlab or Octave, and also called the Hadamard product), so that if a = b \bullet c, then a_i = b_i c_i. Similar to how we extended the definition of f(\cdot) to apply element-wise to vectors, we also do the same for f'(\cdot) (so that f'([z_1, z_2, z_3]) = [f'(z_1), f'(z_2), f'(z_3)]).

    The algorithm can then be written:

    1. Perform a feedforward pass, computing the activations for layers L_2, L_3, up to the output layer L_{n_l}, using the equations defining the forward propagation steps.

    2. For the output layer (layer n_l), set

\begin{align}
\delta^{(n_l)} = - (y - a^{(n_l)}) \bullet f'(z^{(n_l)})
\end{align}

    3. For l = n_l-1, n_l-2, n_l-3, \ldots, 2, set

\begin{align}
\delta^{(l)} = \left( (W^{(l)})^T \delta^{(l+1)} \right) \bullet f'(z^{(l)})
\end{align}

    4. Compute the desired partial derivatives:

\begin{align}
\nabla_{W^{(l)}} J(W,b;x,y) &= \delta^{(l+1)} (a^{(l)})^T, \\
\nabla_{b^{(l)}} J(W,b;x,y) &= \delta^{(l+1)}.
\end{align}

    Implementation note: In steps 2 and 3 above, we need to compute f'(z^{(l)}_i) for each value of i. Assuming f(z) is the sigmoid activation function, we would already have a^{(l)}_i stored away from the forward pass through the network. Thus, using the expression that we worked out earlier for f'(z), we can compute this as f'(z^{(l)}_i) = a^{(l)}_i (1 - a^{(l)}_i).
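    Putting steps 1-4 together for a single training example, here is a sketch of backpropagation for the 3-3-1 sigmoid network, using f'(z) = a(1 - a) exactly as in the implementation note above; the function name backprop_single is mine, not from the notes:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_single(W1, b1, W2, b2, x, y):
    # Step 1: feedforward pass.
    a1 = x
    a2 = sigmoid(W1 @ a1 + b1)
    a3 = sigmoid(W2 @ a2 + b2)
    # Step 2: output-layer error term, with f'(z) = a * (1 - a).
    delta3 = -(y - a3) * a3 * (1.0 - a3)
    # Step 3: propagate the error term back to the hidden layer.
    delta2 = (W2.T @ delta3) * a2 * (1.0 - a2)
    # Step 4: per-example partial derivatives.
    gradW2 = np.outer(delta3, a2)   # delta^{(3)} (a^{(2)})^T
    gradb2 = delta3
    gradW1 = np.outer(delta2, a1)   # delta^{(2)} (a^{(1)})^T
    gradb1 = delta2
    return gradW1, gradb1, gradW2, gradb2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.01, (3, 3)), np.zeros(3)
W2, b2 = rng.normal(0.0, 0.01, (1, 3)), np.zeros(1)
x, y = np.array([0.5, -1.0, 2.0]), np.array([1.0])
print([g.shape for g in backprop_single(W1, b1, W2, b2, x, y)])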

    Finally, we are ready to describe the full gradient descent algorithm. In the pseudo-code below, \Delta W^{(l)} is a matrix (of the same dimension as W^{(l)}), and \Delta b^{(l)} is a vector (of the same dimension as b^{(l)}). Note that in this notation, "\Delta W^{(l)}" is a matrix, and in particular it isn't "\Delta \times W^{(l)}". We implement one iteration of batch gradient descent as follows (a NumPy sketch of the full iteration appears after the pseudo-code):

    1. Set \Delta W^{(l)} := 0, \Delta b^{(l)} := 0 (matrix/vector of zeros) for all l.
    2. For i = 1 to m,
      1. Use backpropagation to compute \nabla_{W^{(l)}} J(W,b; x^{(i)}, y^{(i)}) and \nabla_{b^{(l)}} J(W,b; x^{(i)}, y^{(i)}).
      2. Set \Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J(W,b; x^{(i)}, y^{(i)}).
      3. Set \Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J(W,b; x^{(i)}, y^{(i)}).
    3. Update the parameters:

\begin{align}
W^{(l)} &= W^{(l)} - \alpha \left[ \left( \frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)} \right] \\
b^{(l)} &= b^{(l)} - \alpha \left[ \frac{1}{m} \Delta b^{(l)} \right]
\end{align}
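    Here is a self-contained NumPy sketch of one such iteration for the 3-3-1 sigmoid network, with the per-example gradients computed by backpropagation inside the loop; the learning rate, decay parameter, and data below are placeholders chosen only for illustration:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def batch_gd_step(W1, b1, W2, b2, X, Y, alpha, lam):
    m = X.shape[0]
    # 1. Set the accumulators Delta W^{(l)} and Delta b^{(l)} to zero.
    dW1, db1 = np.zeros_like(W1), np.zeros_like(b1)
    dW2, db2 = np.zeros_like(W2), np.zeros_like(b2)
    # 2. For each example, accumulate the gradients from backpropagation.
    for i in range(m):
        x, y = X[i], Y[i]
        a2 = sigmoid(W1 @ x + b1)
        a3 = sigmoid(W2 @ a2 + b2)
        delta3 = -(y - a3) * a3 * (1.0 - a3)
        delta2 = (W2.T @ delta3) * a2 * (1.0 - a2)
        dW2 += np.outer(delta3, a2); db2 += delta3
        dW1 += np.outer(delta2, x);  db1 += delta2
    # 3. Update the parameters (weight decay applied to W only).
    W1 -= alpha * (dW1 / m + lam * W1); b1 -= alpha * (db1 / m)
    W2 -= alpha * (dW2 / m + lam * W2); b2 -= alpha * (db2 / m)
    return W1, b1, W2, b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.01, (3, 3)), np.zeros(3)
W2, b2 = rng.normal(0.0, 0.01, (1, 3)), np.zeros(1)
X, Y = rng.normal(size=(20, 3)), rng.uniform(size=(20, 1))
for _ in range(100):
    W1, b1, W2, b2 = batch_gd_step(W1, b1, W2, b2, X, Y, alpha=0.5, lam=1e-4)
print(sigmoid(W2 @ sigmoid(W1 @ X[0] + b1) + b2))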
  • Original post: https://www.cnblogs.com/sprint1989/p/3978712.html