Notes on a Failed Kaggle Competition (1): Problem Overview and a First Attempt




    Problem description: https://www.kaggle.com/c/santander-customer-satisfaction

    Quick summary: a pile of anonymized features; the label is 0/1. The goal is to maximize AUC.
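
    For context, a minimal peek at the data, assuming the standard train.csv/test.csv files from the competition's Data page, with ID and TARGET as the id and label columns:

    import pandas as pd

    # assumption: files downloaded from the competition's Data page
    train = pd.read_csv('train.csv')
    test = pd.read_csv('test.csv')
    print train.shape, test.shape          # a few hundred anonymous numeric columns
    print train['TARGET'].value_counts()   # 0/1 label, heavily imbalanced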


    First attempt:


    Features:

    Since there was plenty of time, I went straight to a brute-force greedy search to pick out good features:

    from sklearn.linear_model import LogisticRegression
    from sklearn.externals import joblib

    #!!!!!! better to use an RFC or GBC as the clf here,
    #!!!!!! because the final prediction models are those two:
    #!!!!!! we should select features that suit RFC/GBC, not LR
    clf = LogisticRegression(class_weight='balanced', penalty='l2', n_jobs=-1)
    selectedFeaInds = GreedyFeatureAdd(clf, trainX, trainY, scoreType="auc", goodFeatures=[], maxFeaNum=150)
    joblib.dump(selectedFeaInds, 'modelPersistence/selectedFeaInds.pkl')
    #selectedFeaInds = joblib.load('modelPersistence/selectedFeaInds.pkl')
    trainX = trainX[:, selectedFeaInds]
    testX = testX[:, selectedFeaInds]
    print trainX.shape
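
    GreedyFeatureAdd is a project helper the post does not show. As a rough idea of what it does, here is a hypothetical sketch of greedy forward selection scored by cross-validated AUC; the real helper's internals may differ:

    from sklearn.cross_validation import cross_val_score

    def GreedyFeatureAdd(clf, X, y, scoreType="auc", goodFeatures=[], maxFeaNum=150):
        # hypothetical sketch: repeatedly add the single column that most
        # improves cross-validated AUC, until maxFeaNum columns are chosen
        goodFeatures = list(goodFeatures)
        while len(goodFeatures) < maxFeaNum:
            scores = []
            for i in range(X.shape[1]):
                if i in goodFeatures:
                    continue
                auc = cross_val_score(clf, X[:, goodFeatures + [i]], y,
                                      scoring='roc_auc', cv=3).mean()
                scores.append((auc, i))
            bestAuc, bestInd = max(scores)
            goodFeatures.append(bestInd)
            print "added feature %d, cv auc=%f" % (bestInd, bestAuc)
        return goodFeatures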


    Models:

    I directly used three of the most common model families in sklearn:

    import numpy as np
    from sklearn import metrics
    from sklearn.cross_validation import StratifiedKFold
    from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier

    trainN = len(trainY)

    print "Creating train and test sets for blending..."
    #!!!!!! always use a seed for randomized procedures
    models = [
        RandomForestClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
        RandomForestClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
        ExtraTreesClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
        ExtraTreesClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
        GradientBoostingClassifier(learning_rate=0.1, n_estimators=101, subsample=0.6, max_depth=8, random_state=SEED)
    ]
    #StratifiedKFold is a variation of k-fold which returns stratified folds: each fold contains approximately the same percentage of samples of each target class as the complete set.
    #kfcv=KFold(n=trainN, n_folds=nFold, shuffle=True, random_state=SEED)
    kfcv = StratifiedKFold(y=trainY, n_folds=nFold, shuffle=True, random_state=SEED)
    dataset_trainBlend = np.zeros((trainN, len(models)))
    dataset_testBlend = np.zeros((len(testX), len(models)))
    for i, model in enumerate(models):
        print "model ", i, "=="*20
        dataset_testBlend_j = np.zeros((len(testX), nFold))
        meanAUC = 0.0
        for j, (trainI, testI) in enumerate(kfcv):
            print "Fold ", j, "^"*20
            # fit on this fold's training split; keep the out-of-fold predictions
            model.fit(trainX[trainI], trainY[trainI])
            dataset_trainBlend[testI, i] = model.predict_proba(trainX[testI])[:,1]
            # predict the real test set with this fold's model as well
            dataset_testBlend_j[:,j] = model.predict_proba(testX)[:,1]
            meanAUC += metrics.roc_auc_score(trainY[testI], dataset_trainBlend[testI, i]) / nFold
        print "mean fold auc: %f" % meanAUC
        # average the per-fold test-set predictions into this model's column
        dataset_testBlend[:,i] = dataset_testBlend_j.mean(axis=1)



    Final results:

    The predictions of all the models are combined by blending:

    print "Blending models..."
    #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! if we want to predict some real values, use RidgeCV
    model=LogisticRegression(n_jobs=-1)
    C=np.linspace(0.001,1.0,1000)
    trainAucList=[]
    for c in C:
        model.C=c
        model.fit(dataset_trainBlend,trainY)
        trainProba=model.predict_proba(dataset_trainBlend)[:,1]
        trainAuc=metrics.roc_auc_score(trainY, trainProba)
        trainAucList.append((trainAuc, c))
    sortedtrainAucList=sorted(trainAucList)
    for trainAuc, c in sortedtrainAucList:
        print "c=%f => trainAuc=%f" % (c, trainAuc)

    model.C=sortedtrainAucList[-1][1] #0.05
    model.fit(dataset_trainBlend,trainY)
    trainProba=model.predict_proba(dataset_trainBlend)[:,1]
    print "train auc: %f" % metrics.roc_auc_score(trainY, trainProba)  #0.821439
    print "model.coef_: ", model.coef_
    
    print "Predict and saving results..."
    submitProba=model.predict_proba(dataset_testBlend)[:,1]
    df=pd.DataFrame(submitProba)
    print df.describe()
    SaveFile(submitID, submitProba, fileName="1submit.csv")
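
    SaveFile is another helper the post does not show; a minimal version matching the competition's two-column submission format (ID and TARGET, per the competition page) might be:

    def SaveFile(ids, proba, fileName="submit.csv"):
        # hypothetical helper: write the ID,TARGET submission file
        pd.DataFrame({'ID': ids, 'TARGET': proba}).to_csv(fileName, index=False)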

    Normalization:

    print "MinMaxScaler predictions to [0,1]..."
    mms=preprocessing.MinMaxScaler(feature_range=(0, 1))
    submitProba=mms.fit_transform(submitProba)
    df=pd.DataFrame(submitProba)
    print df.describe()
    SaveFile(submitID, submitProba, fileName="1submitScale.csv")




    Lessons learned from the test results:

    First: brute-force feature search is not viable when there are many features; it is only worth considering when the feature count is small (< 200). Greedily picking k features out of d costs on the order of k*d cross-validated model fits, which blows up quickly.

    Second: among these sklearn models, ExtraTreesClassifier performed the worst, RandomForestClassifier performed well and trains fairly fast, and GradientBoostingClassifier gave the best results but is very slow (boosting is sequential, so it cannot be parallelized the way the forests can).

    Third: when one model (here GradientBoostingClassifier) is much stronger than the others, do not blend, especially when the feature space is identical and the classifiers are similar (all five classifiers here are tree-based and built on the same feature set); in that case blending tends to drag the overall score below what the single best model achieves.

    Fourth: AUC only cares about how the samples are ranked relative to each other, not about the concrete predicted values, so normalizing the results is unnecessary; a small check follows, and further references on this point are easy to find.
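
    AUC depends only on the ordering of the predictions, so any strictly increasing transform (min-max scaling included) leaves it unchanged. A tiny self-contained check:

    import numpy as np
    from sklearn import metrics, preprocessing

    y = np.array([0, 0, 1, 0, 1, 1])
    p = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.7])
    pScaled = preprocessing.MinMaxScaler().fit_transform(p.reshape(-1, 1)).ravel()
    print metrics.roc_auc_score(y, p)        # 0.888889
    print metrics.roc_auc_score(y, pScaled)  # identical: only the ranking matters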



    More lessons will follow as this series continues. Feel free to follow along ^_^





Original post: https://www.cnblogs.com/wgwyanfs/p/7258362.html