• CS231n assignment1 Q5: Higher Level Representations: Image Features


    This assignment explores whether computing features from the raw image pixels and training a linear classifier on those features, rather than on the pixels themselves, can improve performance.
    For each image we compute a Histogram of Oriented Gradients (HOG) descriptor and a color histogram over the hue channel of the HSV (Hue, Saturation, Value) color space, then concatenate the two into the final feature vector for that image.
    HOG roughly captures the texture of an image while discarding color information, whereas the color histogram captures color while discarding texture.
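    As a concrete sketch, the feature matrices used below can be built roughly like this, assuming the helpers hog_feature, color_histogram_hsv, and extract_features shipped with the assignment in cs231n/features.py, and the CIFAR-10 splits X_train, X_val, X_test from the notebook's data-loading cell; the bin count and normalization steps follow the notebook's setup:

    import numpy as np
    from cs231n.features import hog_feature, color_histogram_hsv, extract_features
    
    num_color_bins = 10  # bins in the hue histogram; worth experimenting with
    feature_fns = [hog_feature,
                   lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
    
    # Each row is the HOG descriptor concatenated with the hue histogram of one image.
    X_train_feats = extract_features(X_train, feature_fns, verbose=True)
    X_val_feats = extract_features(X_val, feature_fns)
    X_test_feats = extract_features(X_test, feature_fns)
    
    # Normalize to zero mean and unit variance, then append a bias dimension.
    mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
    std_feat = np.std(X_train_feats, axis=0, keepdims=True)
    X_train_feats = np.hstack([(X_train_feats - mean_feat) / std_feat,
                               np.ones((X_train_feats.shape[0], 1))])
    X_val_feats = np.hstack([(X_val_feats - mean_feat) / std_feat,
                             np.ones((X_val_feats.shape[0], 1))])
    X_test_feats = np.hstack([(X_test_feats - mean_feat) / std_feat,
                              np.ones((X_test_feats.shape[0], 1))])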

    Validation for the SVM:

    # Use the validation set to tune the learning rate and regularization strength
    
    import numpy as np
    from cs231n.classifiers.linear_classifier import LinearSVM
    
    learning_rates = [1e-9, 1e-8, 1e-7]
    regularization_strengths = [5e4, 5e5, 5e6]
    
    results = {}
    best_val = -1
    best_svm = None
    
    ################################################################################
    # TODO:                                                                        #
    # Use the validation set to set the learning rate and regularization strength. #
    # This should be identical to the validation that you did for the SVM; save    #
    # the best trained classifer in best_svm. You might also want to play          #
    # with different numbers of bins in the color histogram. If you are careful    #
    # you should be able to get accuracy of near 0.44 on the validation set.       #
    ################################################################################
    for lr in learning_rates:
        for rs in regularization_strengths:
            # Train an SVM on the extracted features for this (lr, reg) pair.
            svm = LinearSVM()
            svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs, num_iters=6000)
            y_train_pred = svm.predict(X_train_feats)
            train_accuracy = np.mean(y_train == y_train_pred)
            y_val_pred = svm.predict(X_val_feats)
            val_accuracy = np.mean(y_val == y_val_pred)
            # Keep the classifier with the best validation accuracy.
            if val_accuracy > best_val:
                best_val = val_accuracy
                best_svm = svm
            results[(lr, rs)] = (train_accuracy, val_accuracy)
    ################################################################################
    #                              END OF YOUR CODE                                #
    ################################################################################
    
    # Print out results.
    for lr, reg in sorted(results):
        train_accuracy, val_accuracy = results[(lr, reg)]
        print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                    lr, reg, train_accuracy, val_accuracy))
        
    print('best validation accuracy achieved during cross-validation: %f' % best_val)
    

    The final accuracy on the test set is 0.422.
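    That number comes from evaluating the selected classifier on the held-out test features, along the lines of the sketch below (assuming X_test_feats and y_test are set up as in the notebook):

    # Evaluate the best SVM (chosen on the validation set) on the test set.
    y_test_pred = best_svm.predict(X_test_feats)
    test_accuracy = np.mean(y_test == y_test_pred)
    print('SVM test accuracy: %f' % test_accuracy)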

    Validation for the two-layer neural network:

    from cs231n.classifiers.neural_net import TwoLayerNet
    
    input_dim = X_train_feats.shape[1]
    hidden_dim = 500
    num_classes = 10
    
    net = TwoLayerNet(input_dim, hidden_dim, num_classes)
    best_net = None
    best_val = -1
    
    ################################################################################
    # TODO: Train a two-layer neural network on image features. You may want to    #
    # cross-validate various parameters as in previous sections. Store your best   #
    # model in the best_net variable.                                              #
    ################################################################################
    learning_rates = [1e-2, 1e-1, 5e-1, 1, 5]
    regularization_strengths = [1e-3, 5e-3, 1e-2, 1e-1, 0.5, 1]
    results = {}
    
    for lr in learning_rates:
        for rs in regularization_strengths:
            # Train a fresh two-layer network for each (lr, reg) pair.
            net = TwoLayerNet(input_dim, hidden_dim, num_classes)
            stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                              num_iters=1500, batch_size=200,
                              learning_rate=lr, learning_rate_decay=0.95,
                              reg=rs, verbose=False)
            val_acc = (net.predict(X_val_feats) == y_val).mean()
            # Keep the network with the best validation accuracy.
            if val_acc > best_val:
                best_val = val_acc
                best_net = net
            results[(lr, rs)] = val_acc
    
    # Print out results.
    for lr, rs in sorted(results):
        val_acc = results[(lr, rs)]
        print('lr %e reg %e val accuracy: %f' % (lr, rs, val_acc))
    print('best validation accuracy achieved during cross-validation: %f' % best_val)
    ################################################################################
    #                              END OF YOUR CODE                                #
    ################################################################################
    

    The final accuracy on the test set is 0.579.
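    The corresponding test-set evaluation is a one-liner, again assuming X_test_feats and y_test from the notebook:

    # Evaluate the best two-layer network on the test-set features.
    test_acc = (best_net.predict(X_test_feats) == y_test).mean()
    print('Neural net test accuracy: %f' % test_acc)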

  • Original post: https://www.cnblogs.com/bernieloveslife/p/10179828.html