• Machine Learning Stanford (week 3)


    Classification

    To attempt classification, one method is to use linear regression and map all predictions greater than 0.5 as a 1 and all less than 0.5 as a 0. However, this method doesn’t work well because classification is not actually a linear function.

    The classification problem is just like the regression problem, except that the values we now want to predict take on only a small number of discrete values. For now, we will focus on the binary classification problem in which y can take on only two values, 0 and 1. (Most of what we say here will also generalize to the multiple-class case.) For instance, if we are trying to build a spam classifier for email, then x(i) may be some features of a piece of email, and y may be 1 if it is a piece of spam mail, and 0 otherwise. Hence, y∈{0,1}. 0 is also called the negative class, and 1 the positive class, and they are sometimes also denoted by the symbols “-” and “+.” Given x(i), the corresponding y(i) is also called the label for the training example.

    Hypothesis Representation

    We could approach the classification problem ignoring the fact that y is discrete-valued, and use our old linear regression algorithm to try to predict y given x. However, it is easy to construct examples where this method performs very poorly. Intuitively, it also doesn’t make sense for hθ(x) to take values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}. To fix this, let’s change the form for our hypotheses hθ(x) to satisfy 0≤hθ(x)≤1. This is accomplished by plugging θTx into the Logistic Function.

    Our new form uses the “Sigmoid Function,” also called the “Logistic Function”:
    hθ(x) = g(θᵀx)
    z = θᵀx
    g(z) = 1 / (1 + e^(-z))
    The following image shows us what the sigmoid function looks like:
    [Figure: plot of the sigmoid (logistic) function]

    The function g(z), shown here, maps any real number to the (0, 1) interval, making it useful for transforming an arbitrary-valued function into a function better suited for classification.

    hθ(x) will give us the probability that our output is 1. For example, hθ(x)=0.7 gives us a probability of 70% that our output is 1. Our probability that our prediction is 0 is just the complement of our probability that it is 1 (e.g. if probability that it is 1 is 70%, then the probability that it is 0 is 30%).

    hθ(x) = P(y = 1 | x; θ) = 1 - P(y = 0 | x; θ)
    P(y = 0 | x; θ) + P(y = 1 | x; θ) = 1
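
    As a quick sketch (not from the course notes; the numbers below are made up for illustration), this probability reading can be checked directly in Octave:

    g = @(z) 1 ./ (1 + exp(-z));   % the sigmoid / logistic function
    theta = [-1; 0.5; 0.5];        % hypothetical parameters
    x = [1; 2; 1];                 % hypothetical example, with x0 = 1 as the bias entry
    h = g(theta' * x);             % h = P(y = 1 | x; theta)
    printf('P(y=1) = %.3f, P(y=0) = %.3f\n', h, 1 - h);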

    Decision Boundary

    In order to get our discrete 0 or 1 classification, we can translate the output of the hypothesis function as follows:

    hθ(x) ≥ 0.5 → y = 1
    hθ(x) < 0.5 → y = 0
    The way our logistic function g behaves is that when its input is greater than or equal to zero, its output is greater than or equal to 0.5:

    g(z) ≥ 0.5 when z ≥ 0

    Remember:
    z = 0:    e^0 = 1      ⇒ g(z) = 1/2
    z → ∞:    e^(-z) → 0   ⇒ g(z) → 1
    z → -∞:   e^(-z) → ∞   ⇒ g(z) → 0

    So if our input to g is θᵀx, then that means:

    hθ(x) = g(θᵀx) ≥ 0.5 when θᵀx ≥ 0

    From these statements we can now say:
    θᵀx ≥ 0 ⇒ y = 1
    θᵀx < 0 ⇒ y = 0

    The decision boundary is the line that separates the area where y = 0 and where y = 1. It is created by our hypothesis function.

    Example:

    θ = [5; -1; 0]
    y = 1 if 5 + (-1)·x₁ + 0·x₂ ≥ 0
    5 - x₁ ≥ 0
    -x₁ ≥ -5
    x₁ ≤ 5
    In this case, our decision boundary is a straight vertical line placed on the graph where x1=5, and everything to the left of that denotes y = 1, while everything to the right denotes y = 0.

    Again, the input to the sigmoid function g(z) (e.g. θᵀx) doesn’t need to be linear, and could be a function that describes a circle (e.g. z = θ₀ + θ₁x₁² + θ₂x₂²) or any shape to fit our data.
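
    Here is a small Octave sketch (the example points are made up, not from the notes) of the decision rule for the vertical-line boundary above, where theta = [5; -1; 0]:

    theta = [5; -1; 0];
    X = [1 3 7;                      % each row is one example: [x0, x1, x2]
         1 6 2;
         1 5 0];
    predictions = (X * theta >= 0);  % predict y = 1 exactly when theta' * x >= 0, i.e. x1 <= 5
    disp(predictions');              % expected output: 1 0 1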

    Cost Function

    We cannot use the same cost function that we use for linear regression because the Logistic Function will cause the output to be wavy, causing many local optima. In other words, it will not be a convex function.

    Instead, our cost function for logistic regression looks like:

    J(θ) = (1/m) Σᵢ₌₁ᵐ Cost(hθ(x^(i)), y^(i))
    Cost(hθ(x), y) = -log(hθ(x))        if y = 1
    Cost(hθ(x), y) = -log(1 - hθ(x))    if y = 0

    When y = 1, we get the following plot for J(θ) vs hθ(x):
    [Figure: plot of the cost vs hθ(x) when y = 1]

    Similarly, when y = 0, we get the following plot for J(θ) vs hθ(x):
    [Figure: plot of the cost vs hθ(x) when y = 0]

    Cost(hθ(x), y) = 0  if hθ(x) = y
    Cost(hθ(x), y) → ∞  if y = 0 and hθ(x) → 1
    Cost(hθ(x), y) → ∞  if y = 1 and hθ(x) → 0

    If our correct answer ‘y’ is 0, then the cost function will be 0 if our hypothesis function also outputs 0. If our hypothesis approaches 1, then the cost function will approach infinity.

    If our correct answer ‘y’ is 1, then the cost function will be 0 if our hypothesis function outputs 1. If our hypothesis approaches 0, then the cost function will approach infinity.

    Note that writing the cost function in this way guarantees that J(θ) is convex for logistic regression.
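
    A tiny numerical sketch (the hypothesis values are arbitrary, chosen only for illustration) shows this behavior:

    h = [0.01 0.5 0.99];       % a few possible hypothesis outputs
    cost_y1 = -log(h);         % cost when y = 1: large near h = 0, near zero as h -> 1
    cost_y0 = -log(1 - h);     % cost when y = 0: near zero at h = 0, large as h -> 1
    disp([h; cost_y1; cost_y0]);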

    Simplified Cost Function and Gradient Descent

    Note: [6:53 - the gradient descent equation should have a 1/m factor]

    We can compress our cost function’s two conditional cases into one case:

    Cost(hθ(x), y) = -y·log(hθ(x)) - (1 - y)·log(1 - hθ(x))

    Notice that when y is equal to 1, then the second term, (1 - y)·log(1 - hθ(x)), will be zero and will not affect the result. If y is equal to 0, then the first term, -y·log(hθ(x)), will be zero and will not affect the result.

    We can fully write out our entire cost function as follows:

    J(θ) = -(1/m) Σᵢ₌₁ᵐ [ y^(i)·log(hθ(x^(i))) + (1 - y^(i))·log(1 - hθ(x^(i))) ]

    A vectorized implementation is:

    h = g(Xθ)
    J(θ) = (1/m) · ( -yᵀ·log(h) - (1 - y)ᵀ·log(1 - h) )
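
    For example, the vectorized cost can be computed in Octave as below (a sketch with made-up data; with theta = 0 every prediction is 0.5, so J = log(2), about 0.693):

    m = 5;  n = 2;
    X = [ones(m, 1) randn(m, n)];     % design matrix with a bias column
    y = [0; 1; 1; 0; 1];              % hypothetical labels
    theta = zeros(n + 1, 1);
    h = 1 ./ (1 + exp(-X * theta));   % h = g(X*theta)
    J = (1 / m) * (-y' * log(h) - (1 - y)' * log(1 - h));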

    Gradient Descent

    Remember that the general form of gradient descent is:
    Repeat {
        θⱼ := θⱼ - α · (∂/∂θⱼ) J(θ)
    }

    We can work out the derivative part using calculus to get:
    Repeat {
        θⱼ := θⱼ - (α/m) Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))·xⱼ^(i)
    }

    Notice that this algorithm is identical to the one we used in linear regression. We still have to simultaneously update all values in theta.

    A vectorized implementation is:
    θ := θ - (α/m) · Xᵀ (g(Xθ) - y)
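
    A minimal Octave sketch of this vectorized update follows (the data, alpha, and the iteration count are arbitrary choices for illustration):

    g = @(z) 1 ./ (1 + exp(-z));
    X = [ones(4, 1) [1; 2; 3; 4]];   % 4 examples: bias column plus one feature
    y = [0; 0; 1; 1];
    [m, n] = size(X);
    theta = zeros(n, 1);
    alpha = 0.1;
    for iter = 1:400
      grad  = (1 / m) * X' * (g(X * theta) - y);  % vectorized gradient
      theta = theta - alpha * grad;               % simultaneous update of all theta values
    end
    disp(theta);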

    A related derivation can be found at: http://lib.csdn.net/article/machinelearning/42803

    Advanced Optimization

    Note: [7:35 - ‘100’ should be 100 instead. The value provided should be an integer and not a character string.]

    “Conjugate gradient”, “BFGS”, and “L-BFGS” are more sophisticated, faster ways to optimize θ that can be used instead of gradient descent. We suggest that you should not write these more sophisticated algorithms yourself (unless you are an expert in numerical computing) but use the libraries instead, as they’re already tested and highly optimized. Octave provides them.

    We first need to provide a function that evaluates the following two functions for a given input value θ:

    J(θ)
    (∂/∂θⱼ) J(θ)
    We can write a single function that returns both of these:

    function [jVal, gradient] = costFunction(theta)
      jVal = [...code to compute J(theta)...];
      gradient = [...code to compute derivative of J(theta)...];
    end

    Then we can use octave’s “fminunc()” optimization algorithm along with the “optimset()” function that creates an object containing the options we want to send to “fminunc()”. (Note: the value for MaxIter should be an integer, not a character string - errata in the video at 7:30)

    options = optimset('GradObj', 'on', 'MaxIter', 100);
    initialTheta = zeros(2,1);
    [optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options);

    We give to the function “fminunc()” our cost function, our initial vector of theta values, and the “options” object that we created beforehand.
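
    For concreteness, here is one way the costFunction skeleton could be filled in for logistic regression, using the formulas derived earlier (a sketch only; it uses the (theta, X, y) signature from the homework below rather than the single-argument form shown above):

    function [jVal, gradient] = costFunction(theta, X, y)
      m = length(y);
      h = 1 ./ (1 + exp(-X * theta));                           % sigmoid(X*theta)
      jVal = (1 / m) * (-y' * log(h) - (1 - y)' * log(1 - h));  % J(theta)
      gradient = (1 / m) * X' * (h - y);                        % vector of partial derivatives
    end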

    Multiclass Classification: One-vs-all

    Now we will approach the classification of data when we have more than two categories. Instead of y = {0,1} we will expand our definition so that y = {0,1…n}.

    Since y = {0,1…n}, we divide our problem into n+1 (+1 because the index starts at 0) binary classification problems; in each one, we predict the probability that ‘y’ is a member of one of our classes.
    y ∈ {0, 1, ..., n}
    hθ^(0)(x) = P(y = 0 | x; θ)
    hθ^(1)(x) = P(y = 1 | x; θ)
    ⋯
    hθ^(n)(x) = P(y = n | x; θ)
    prediction = maxᵢ ( hθ^(i)(x) )
    We are basically choosing one class and then lumping all the others into a single second class. We do this repeatedly, applying binary logistic regression to each case, and then use the hypothesis that returned the highest value as our prediction.

    The following image shows how one could classify 3 classes:
    [Figure: one-vs-all classification with 3 classes]

    To summarize:

    Train a logistic regression classifier hθ^(i)(x) for each class i to predict the probability that y = i.

    To make a prediction on a new x, pick the class i that maximizes hθ^(i)(x).

    For the multiclass problem, we simply treat the class we are currently separating out as one class and lump all the remaining classes into a second class; we repeat this for every class, so after running the m binary classifiers (one per class) we obtain the final classification.
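
    A rough Octave sketch of the one-vs-all procedure (it assumes the labels are 1..K, and reuses X, y, and the logistic-regression costFunction from the earlier sketches):

    K = 3;                                  % hypothetical number of classes
    all_theta = zeros(K, size(X, 2));       % one row of parameters per class
    options = optimset('GradObj', 'on', 'MaxIter', 100);
    for k = 1:K
      yk = (y == k);                        % current class vs. all the others
      initial_theta = zeros(size(X, 2), 1);
      all_theta(k, :) = fminunc(@(t) costFunction(t, X, yk), initial_theta, options)';
    end
    [~, pred] = max(1 ./ (1 + exp(-X * all_theta')), [], 2);   % pick the most probable class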

    The Problem of Overfitting

    Consider the problem of predicting y from x ∈ ℝ. The leftmost figure below shows the result of fitting y = θ₀ + θ₁x to a dataset. We see that the data doesn’t really lie on a straight line, and so the fit is not very good.
    [Figure: linear fit (left), quadratic fit (middle), and 5th-order polynomial fit (right)]

    Instead, if we had added an extra feature x², and fit y = θ₀ + θ₁x + θ₂x², then we obtain a slightly better fit to the data (see middle figure). Naively, it might seem that the more features we add, the better. However, there is also a danger in adding too many features: The rightmost figure is the result of fitting a 5th-order polynomial y = Σⱼ₌₀⁵ θⱼ·x^j. We see that even though the fitted curve passes through the data perfectly, we would not expect this to be a very good predictor of, say, housing prices (y) for different living areas (x). Without formally defining what these terms mean, we’ll say the figure on the left shows an instance of underfitting—in which the data clearly shows structure not captured by the model—and the figure on the right is an example of overfitting.

    Underfitting, or high bias, is when the form of our hypothesis function h maps poorly to the trend of the data. It is usually caused by a function that is too simple or uses too few features. At the other extreme, overfitting, or high variance, is caused by a hypothesis function that fits the available data but does not generalize well to predict new data. It is usually caused by a complicated function that creates a lot of unnecessary curves and angles unrelated to the data.

    This terminology is applied to both linear and logistic regression. There are two main options to address the issue of overfitting:

    1) Reduce the number of features:

    Manually select which features to keep.
    Use a model selection algorithm (studied later in the course).
    2) Regularization

    Keep all the features, but reduce the magnitude of parameters θj.
    Regularization works well when we have a lot of slightly useful features.

    Cost Function

    Note: [5:18 - There is a typo. It should be Σⱼ₌₁ⁿ θⱼ² instead of Σᵢ₌₁ⁿ θⱼ²]

    If we have overfitting from our hypothesis function, we can reduce the weight that some of the terms in our function carry by increasing their cost.

    Say we wanted to make the following function more quadratic:
    θ₀ + θ₁x + θ₂x² + θ₃x³ + θ₄x⁴

    We’ll want to eliminate the influence of θ₃x³ and θ₄x⁴. Without actually getting rid of these features or changing the form of our hypothesis, we can instead modify our cost function:
    min_θ (1/2m) Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))² + 1000·θ₃² + 1000·θ₄²

    We’ve added two extra terms at the end to inflate the cost of θ₃ and θ₄. Now, in order for the cost function to get close to zero, we will have to reduce the values of θ₃ and θ₄ to near zero. This will in turn greatly reduce the values of θ₃x³ and θ₄x⁴ in our hypothesis function. As a result, we see that the new hypothesis (depicted by the pink curve) looks like a quadratic function but fits the data better due to the extra small terms θ₃x³ and θ₄x⁴.

    We could also regularize all of our theta parameters in a single summation as:

    min_θ (1/2m) [ Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))² + λ·Σⱼ₌₁ⁿ θⱼ² ]
    The λ, or lambda, is the regularization parameter. It determines how much the costs of our theta parameters are inflated.

    Using the above cost function with the extra summation, we can smooth the output of our hypothesis function to reduce overfitting. If lambda is chosen to be too large, it may smooth out the function too much and cause underfitting. What would happen if λ = 0 or is chosen too small? The penalty would then have almost no effect, and the overfitting would remain.
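
    The following Octave sketch (made-up data that a straight line fits exactly) shows how the penalty term inflates the cost as lambda grows, even though the squared error stays at zero:

    X = [ones(5, 1) (1:5)'];
    y = [2; 4; 6; 8; 10];
    theta = [0; 2];                          % fits this data exactly
    m = length(y);
    for lambda = [0 1 1000]
      J = (1 / (2 * m)) * (sum((X * theta - y) .^ 2) + lambda * sum(theta(2:end) .^ 2));
      printf('lambda = %g  ->  J = %.2f\n', lambda, J);
    end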

    Note that regularization is applied to θ₁ ... θₙ, but not to θ₀; keep that in mind when looking at the following question.
    [Figure: quiz question on the regularized cost function]

    Regularized Linear Regression

    Note: [8:43 - It is said that XᵀX is non-invertible if m ≤ n. The correct statement should be that XᵀX is non-invertible if m < n, and may be non-invertible if m = n.]

    We can apply regularization to both linear regression and logistic regression. We will approach linear regression first.

    Gradient Descent

    We will modify our gradient descent function to separate out θ0 from the rest of the parameters because we do not want to penalize θ0.
    Repeat {
        θ₀ := θ₀ - α·(1/m) Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))·x₀^(i)
        θⱼ := θⱼ - α·[ (1/m) Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))·xⱼ^(i) + (λ/m)·θⱼ ]        j ∈ {1, 2, ..., n}
    }

    The term (λ/m)·θⱼ performs our regularization. With some manipulation, our update rule can also be represented as:

    θⱼ := θⱼ·(1 - α·λ/m) - (α/m) Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))·xⱼ^(i)

    The factor 1 - α·λ/m in the first term will always be less than 1. Intuitively you can see it as reducing the value of θⱼ by some amount on every update. Notice that the second term is now exactly the same as it was before.
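
    A small Octave sketch of this regularized update (all values are made up; note that theta(1), i.e. θ₀, is excluded from the penalty):

    X = [ones(5, 1) (1:5)'];
    y = [1; 3; 4; 6; 9];
    m = length(y);
    theta = zeros(2, 1);
    alpha = 0.05;  lambda = 1;
    for iter = 1:1000
      grad = (1 / m) * X' * (X * theta - y);    % ordinary linear-regression gradient
      reg  = (lambda / m) * [0; theta(2:end)];  % regularization term, skipping theta_0
      theta = theta - alpha * (grad + reg);
    end
    disp(theta);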

    Normal Equation

    Now let’s approach regularization using the alternate method of the non-iterative normal equation.

    To add in regularization, the equation is the same as our original, except that we add another term inside the parentheses:
    θ = (XᵀX + λ·L)⁻¹ Xᵀy,   where L = diag(0, 1, 1, ..., 1)
    L is a matrix with 0 at the top left and 1’s down the rest of the diagonal, with 0’s everywhere else. It should have dimension (n+1)×(n+1). Intuitively, this is the identity matrix (though we are not including x₀), multiplied with a single real number λ.

    Recall that if m < n, then XᵀX is non-invertible. However, when we add the term λ·L, then XᵀX + λ·L becomes invertible. (This property is important: once regularization is added to the normal equation, the matrix being inverted is always invertible!)
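
    In Octave, the regularized normal equation can be sketched as follows (made-up data; pinv is used here to mirror the inverse in the formula above):

    X = [ones(5, 1) (1:5)' ((1:5) .^ 2)'];   % bias, x, x^2
    y = [2; 5; 10; 17; 26];
    lambda = 1;
    n = size(X, 2) - 1;                      % number of features, excluding the bias
    L = eye(n + 1);  L(1, 1) = 0;            % identity matrix with a 0 for the bias entry
    theta = pinv(X' * X + lambda * L) * X' * y;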

    Regularized Logistic Regression

    We can regularize logistic regression in a similar way that we regularize linear regression. As a result, we can avoid overfitting. The following image shows how the regularized function, displayed by the pink line, is less likely to overfit than the non-regularized function represented by the blue line:
    [Figure: regularized hypothesis (pink) vs. non-regularized hypothesis (blue)]

    Cost Function

    Recall that our cost function for logistic regression was:

    J(θ) = -(1/m) Σᵢ₌₁ᵐ [ y^(i)·log(hθ(x^(i))) + (1 - y^(i))·log(1 - hθ(x^(i))) ]

    We can regularize this equation by adding a term to the end:

    J(θ) = -(1/m) Σᵢ₌₁ᵐ [ y^(i)·log(hθ(x^(i))) + (1 - y^(i))·log(1 - hθ(x^(i))) ] + (λ/2m) Σⱼ₌₁ⁿ θⱼ²

    The second sum, Σⱼ₌₁ⁿ θⱼ², explicitly excludes the bias term, θ₀. I.e. the θ vector is indexed from 0 to n (holding n+1 values, θ₀ through θₙ), and this sum skips θ₀ by running from 1 to n. Thus, when computing the equation, we should continuously update the two following equations:
    θ₀ := θ₀ - α·(1/m) Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))·x₀^(i)
    θⱼ := θⱼ - α·[ (1/m) Σᵢ₌₁ᵐ (hθ(x^(i)) - y^(i))·xⱼ^(i) + (λ/m)·θⱼ ]        j ∈ {1, 2, ..., n}

    (These updates have the same form as for regularized linear regression above, except that hθ(x) is now the sigmoid hypothesis.)

    Homework!

    The sigmoid function:

    % Instructions: Compute the sigmoid of each value of z (z can be a matrix, vector or scalar).

    g = 1 ./ ( 1 + exp(-z) ) ;

    The cost function and its partial derivatives:

    J= -1 * sum( y .* log( sigmoid(X*theta) ) + (1 - y ) .* log( (1 - sigmoid(X*theta)) ) ) / m ; % cost function

    grad = ( X' * (sigmoid(X*theta) - y) ) / m ; % gradient (vector of partial derivatives)

    Computing theta (θ) with MATLAB/Octave’s advanced optimization function (fminunc) instead of gradient descent:

    % In this exercise, you will use a built-in function (fminunc) to find the
    % optimal parameters theta.

    % Set options for fminunc
    options = optimset('GradObj', 'on', 'MaxIter', 400);

    % Run fminunc to obtain the optimal theta
    % This function will return theta and the cost
    [theta, cost] = fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);

    The corresponding changes after regularization:

    theta_1 = [0; theta(2:end)]; % exclude theta(1), the bias term, from regularization
    J = -1 * sum( y .* log( sigmoid(X*theta) ) + (1 - y) .* log( 1 - sigmoid(X*theta) ) ) / m + lambda/(2*m) * theta_1' * theta_1 ;
    grad = ( X' * (sigmoid(X*theta) - y) ) / m + lambda/m * theta_1 ;
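
    To actually obtain theta, the regularized cost and gradient above would be wrapped in a function (called costFunctionReg here, following the exercise’s naming) and passed to fminunc, for example:

    initial_theta = zeros(size(X, 2), 1);
    lambda = 1;
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    [theta, J] = fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);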
