• Using the Java version of libsvm


    Calling libsvm.jar (svm_train and svm_predict are the command-line wrapper classes that ship in the default package of the libsvm distribution, so they need no import). Note that the original training and test file names were swapped between the two argument arrays; corrected here:

    import java.io.IOException;

    public class Tlibsvm {

        public static void main(String[] args) {
            // svm_train expects: training_set_file [model_file]
            String[] trainArgs = {
                "D:\\2_1500_rfinish\\TextCategorization_2_1500_10\\data\\train.libsvm",
                "D:\\2_1500_rfinish\\TextCategorization_2_1500_10\\data\\model" };
            // svm_predict expects: test_file model_file output_file
            String[] predictArgs = {
                "D:\\2_1500_rfinish\\TextCategorization_2_1500_10\\data\\test.libsvm",
                "D:\\2_1500_rfinish\\TextCategorization_2_1500_10\\data\\model",
                "D:\\2_1500_rfinish\\TextCategorization_2_1500_10\\data\\ouput" };

            try {
                svm_train.main(trainArgs);
                svm_predict.main(predictArgs);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

     Predicting the class of a single record (feature indices in libsvm are 1-based, so the node index must be i + 1, not i):

    import java.io.IOException;

    import libsvm.svm;
    import libsvm.svm_model;
    import libsvm.svm_node;

    public class Tlibsvm {

        public static double predictPerRecord(double[] record, svm_model model) {
            // Convert the dense feature vector into libsvm's sparse node array.
            svm_node[] x = new svm_node[record.length];
            for (int i = 0; i < record.length; i++) {
                svm_node node = new svm_node();
                node.index = i + 1; // libsvm feature indices start at 1
                node.value = record[i];
                x[i] = node;
            }
            return svm.svm_predict(model, x);
        }

        public static void main(String[] args) throws IOException {
            double[] record = { 5.872288, 7.133121, 7.133121, 8.188928, 6.971858,
                    6.971858, 5.442672, 5.693101, 5.693101, 5.693101, 5.693101 };
            svm_model model = svm.svm_load_model(
                    "D:\\2_1500_rfinish\\TextCategorization_2_1500_10\\data\\model");
            double c = Tlibsvm.predictPerRecord(record, model);
            System.out.println(c);
        }
    }
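    The .libsvm files used above are in libsvm's sparse text format, `label index1:value1 index2:value2 ...`, with 1-based, ascending indices. As a self-contained illustration (the class name `LibsvmLineParser` is hypothetical, not part of libsvm), one line can be parsed into a label and a dense vector like this:

    ```java
    public class LibsvmLineParser {
        // Parses one line of libsvm sparse format: "label i1:v1 i2:v2 ..."
        // Writes values into the dense vector (1-based indices mapped to
        // 0-based slots; absent indices stay 0.0) and returns the label.
        public static double parseLine(String line, double[] dense) {
            String[] tokens = line.trim().split("\\s+");
            double label = Double.parseDouble(tokens[0]);
            for (int t = 1; t < tokens.length; t++) {
                int colon = tokens[t].indexOf(':');
                int index = Integer.parseInt(tokens[t].substring(0, colon)); // 1-based
                double value = Double.parseDouble(tokens[t].substring(colon + 1));
                dense[index - 1] = value;
            }
            return label;
        }
    }
    ```

    A dense vector produced this way can be fed straight into predictPerRecord, since that method re-emits indices as i + 1.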

    cmd:

    java -classpath libsvm.jar svm_train <arguments>
    java -classpath libsvm.jar svm_predict <arguments>
    java -classpath libsvm.jar svm_toy
    java -classpath libsvm.jar svm_scale <arguments>

    `svm-train' Usage
    =================

    Usage: svm-train [options] training_set_file [model_file]
    options:
    -s svm_type : set type of SVM (default 0)
        0 -- C-SVC
        1 -- nu-SVC
        2 -- one-class SVM
        3 -- epsilon-SVR
        4 -- nu-SVR
    -t kernel_type : set type of kernel function (default 2)
        0 -- linear: u'*v
        1 -- polynomial: (gamma*u'*v + coef0)^degree
        2 -- radial basis function: exp(-gamma*|u-v|^2)
        3 -- sigmoid: tanh(gamma*u'*v + coef0)
        4 -- precomputed kernel (kernel values in training_set_file)
    -d degree : set degree in kernel function (default 3)
    -g gamma : set gamma in kernel function (default 1/num_features)
    -r coef0 : set coef0 in kernel function (default 0)
    -c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
    -n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
    -p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
    -m cachesize : set cache memory size in MB (default 100)
    -e epsilon : set tolerance of termination criterion (default 0.001)
    -h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
    -b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
    -wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
    -v n: n-fold cross validation mode
    -q : quiet mode (no outputs)


    The default gamma, 1/num_features, uses the number of attributes in the input data.

    The -v option randomly splits the data into n parts and computes cross-validation accuracy (or mean squared error, for regression) on them.

    See libsvm FAQ for the meaning of outputs.
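    The kernel formulas listed under -t can be sketched in plain Java (an illustration of the formulas only, not libsvm's internal implementation; the class name `Kernels` is hypothetical):

    ```java
    public class Kernels {
        // -t 0, linear: u'*v
        public static double linear(double[] u, double[] v) {
            double s = 0;
            for (int i = 0; i < u.length; i++) s += u[i] * v[i];
            return s;
        }

        // -t 2, radial basis function: exp(-gamma*|u-v|^2)
        public static double rbf(double[] u, double[] v, double gamma) {
            double d2 = 0;
            for (int i = 0; i < u.length; i++) {
                double d = u[i] - v[i];
                d2 += d * d;
            }
            return Math.exp(-gamma * d2);
        }

        // -t 1, polynomial: (gamma*u'*v + coef0)^degree
        public static double poly(double[] u, double[] v,
                                  double gamma, double coef0, int degree) {
            return Math.pow(gamma * linear(u, v) + coef0, degree);
        }

        // -t 3, sigmoid: tanh(gamma*u'*v + coef0)
        public static double sigmoid(double[] u, double[] v,
                                     double gamma, double coef0) {
            return Math.tanh(gamma * linear(u, v) + coef0);
        }
    }
    ```

    The -d, -g, and -r options map directly onto the degree, gamma, and coef0 parameters above.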

    `svm-predict' Usage
    ===================

    Usage: svm-predict [options] test_file model_file output_file
    options:
    -b probability_estimates: whether to predict probability estimates, 0 or 1 (default 0); for one-class SVM only 0 is supported

    model_file is the model file generated by svm-train.
    test_file is the test data you want to predict.
    svm-predict will produce output in the output_file.

    `svm-scale' Usage
    =================

    Usage: svm-scale [options] data_filename
    options:
    -l lower : x scaling lower limit (default -1)
    -u upper : x scaling upper limit (default +1)
    -y y_lower y_upper : y scaling limits (default: no y scaling)
    -s save_filename : save scaling parameters to save_filename
    -r restore_filename : restore scaling parameters from restore_filename
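    The per-attribute mapping behind -l and -u can be sketched as follows (a minimal illustration of the min-max formula, not svm-scale's actual source; `MinMaxScale` is a hypothetical name):

    ```java
    public class MinMaxScale {
        // Scales one feature column linearly so its minimum maps to `lower`
        // and its maximum to `upper`, as svm-scale does per attribute.
        public static double[] scale(double[] col, double lower, double upper) {
            double min = col[0], max = col[0];
            for (double v : col) {
                if (v < min) min = v;
                if (v > max) max = v;
            }
            double[] out = new double[col.length];
            for (int i = 0; i < col.length; i++) {
                // Constant columns collapse to `lower` to avoid division by zero.
                out[i] = (max == min) ? lower
                        : lower + (upper - lower) * (col[i] - min) / (max - min);
            }
            return out;
        }
    }
    ```

    This is also why -s/-r matter: the min and max found on the training data must be saved and reapplied to the test data, rather than recomputed from the test set.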

    Tips on Practical Use
    =====================

    * Scale your data. For example, scale each attribute to [0,1] or [-1,+1].
    * For C-SVC, consider using the model selection tool in the tools directory.
    * nu in nu-SVC/one-class-SVM/nu-SVR approximates the fraction of training
      errors and support vectors.
    * If data for classification are unbalanced (e.g. many positive and
      few negative), try different penalty parameters C by -wi (see
      examples below).
    * Specify larger cache size (i.e., larger -m) for huge problems.

    Examples
    ========

    > svm-scale -l -1 -u 1 -s range train > train.scale
    > svm-scale -r range test > test.scale

    Scale each feature of the training data to be in [-1,1]. Scaling
    factors are stored in the file range and then used for scaling the
    test data.

    > svm-train -s 0 -c 5 -t 2 -g 0.5 -e 0.1 data_file

    Train a classifier with RBF kernel exp(-0.5|u-v|^2), C=5, and
    stopping tolerance 0.1.

    > svm-train -s 3 -p 0.1 -t 0 data_file

    Solve SVM regression with linear kernel u'v and epsilon=0.1
    in the loss function.

    > svm-train -c 10 -w1 1 -w2 5 -w4 2 data_file

    Train a classifier with penalty 10 = 1 * 10 for class 1, penalty 50 =
    5 * 10 for class 2, and penalty 20 = 2 * 10 for class 4.

    > svm-train -s 0 -c 100 -g 0.1 -v 5 data_file

    Do five-fold cross validation for the classifier using
    the parameters C = 100 and gamma = 0.1.

    > svm-train -s 0 -b 1 data_file
    > svm-predict -b 1 test_file data_file.model output_file

    Obtain a model with probability information and predict test data with
    probability estimates.

  • Original article: https://www.cnblogs.com/banbana88/p/2418534.html