• Stacked generalization (stacking); feature-weighted linear stacking


    https://en.wikipedia.org/wiki/Ensemble_learning

    Stacking

    Stacking (sometimes called stacked generalization) involves training a learning algorithm to combine the predictions of several other learning algorithms. First, all of the other algorithms are trained using the available data; then a combiner algorithm is trained to make a final prediction using all the predictions of the other algorithms as additional inputs. If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although in practice a single-layer logistic regression model is often used as the combiner.
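    As a concrete illustration (not part of the quoted text), here is a minimal stacking sketch in Python using scikit-learn's StackingClassifier, which trains the combiner on out-of-fold base-model predictions; the dataset, base learners, and hyperparameters are arbitrary stand-ins:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=1000, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Level-0 models; their (out-of-fold) predictions become the combiner's inputs.
        base_learners = [
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
        ]
        # A single-layer logistic regression is the classic choice of combiner.
        stack = StackingClassifier(estimators=base_learners,
                                   final_estimator=LogisticRegression())
        stack.fit(X_train, y_train)
        print("held-out accuracy:", stack.score(X_test, y_test))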

    Stacking typically yields performance better than any single one of the trained models.[22] It has been successfully used on both supervised learning tasks (regression,[23] classification, and distance learning[24]) and unsupervised learning (density estimation).[25] It has also been used to estimate bagging's error rate.[3][26] It has been reported to outperform Bayesian model averaging.[27] The two top performers in the Netflix competition utilized blending, which may be considered a form of stacking.[28]

    https://arxiv.org/pdf/0911.0460.pdf

    【Significantly improving the accuracy of collaborative filtering】

    Ensemble methods, such as stacking, are designed to boost predictive accuracy by blending the predictions of multiple machine learning models. Recent work has shown that the use of meta-features, additional inputs describing each example in a dataset, can boost the performance of ensemble methods, but the greatest reported gains have come from nonlinear procedures requiring significant tuning and training time. Here, we present a linear technique, Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for improved accuracy while retaining the well-known virtues of linear regression regarding speed, stability, and interpretability. FWLS combines model predictions linearly using coefficients that are themselves linear functions of meta-features. This technique was a key facet of the solution of the second place team in the recently concluded Netflix Prize competition. Significant increases in accuracy over standard linear stacking are demonstrated on the Netflix Prize collaborative filtering dataset.
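    Since the FWLS blend, y_hat(x) = sum_ij v_ij * f_j(x) * g_i(x), is linear in the pairwise products of meta-features f_j and model predictions g_i, fitting it reduces to ordinary least squares on those products. A minimal sketch with synthetic stand-in data (all array names hypothetical; a constant meta-feature column makes standard linear stacking a special case):

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_models = 500, 3
        preds = rng.normal(size=(n, n_models))      # stand-in base-model predictions g_i(x)
        meta = np.column_stack([np.ones(n),         # constant meta-feature -> plain linear stacking
                                rng.normal(size=(n, 2))])  # two stand-in meta-features f_j(x)
        y = rng.normal(size=n)                      # stand-in targets

        # FWLS blend: y_hat = sum_{i,j} v_ij * f_j(x) * g_i(x).
        # Linear in the products f_j * g_i, so fit by least squares on the
        # n x (n_models * n_meta) design matrix of pairwise products.
        design = (preds[:, :, None] * meta[:, None, :]).reshape(n, -1)
        v, *_ = np.linalg.lstsq(design, y, rcond=None)  # learned weights v_ij
        y_hat = design @ v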

    【a blend of blends: stacking and blending】

    “Stacking” is a technique in which the predictions of a collection of models are given as inputs to a second-level learning algorithm. This second-level algorithm is trained to combine the model predictions optimally to form a final set of predictions. Many machine learning practitioners have had success using stacking and related techniques to boost prediction accuracy beyond the level obtained by any of the individual models. In some contexts, stacking is also referred to as blending, and we will use the terms interchangeably here. Since its introduction [23], modellers have employed stacking successfully on a wide variety of problems, including chemometrics [8], spam filtering [16], and large collections of datasets drawn from the UCI Machine Learning Repository [21, 7]. One prominent recent example of the power of model blending was the Netflix Prize collaborative filtering competition. The team BellKor’s Pragmatic Chaos won the $1 million prize using a blend of hundreds of different models [22, 11, 14]. Indeed, the winning solution was a blend at multiple levels, i.e., a blend of blends.

    Intuition suggests that the reliability of a model may vary as a function of the conditions in which it is used. For instance, in a collaborative filtering context where we wish to predict the preferences of customers for various products, the amount of data collected may vary significantly depending on which customer or which product is under consideration. Model A may be more reliable than model B for users who have rated many products, but model B may outperform model A for users who have only rated a few products. In an attempt to capitalize on this intuition, many researchers have developed approaches that attempt to improve the accuracy of stacked regression by adapting the blending on the basis of side information. Such an additional source of information, like the number of products rated by a user or the number of days since a product was released, is often referred to as a “meta-feature,” and we will use that terminology here.
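    A toy sketch of this intuition (entirely synthetic, all names hypothetical): when one model is accurate for heavy raters and the other for light raters, fitting a separate linear blend per segment recovers weights that favor the right model in each:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 2000
        n_ratings = rng.integers(1, 200, size=n)    # meta-feature: ratings per user
        truth = rng.normal(size=n)
        heavy = n_ratings > 50
        # Model A is accurate for heavy raters, model B for light raters.
        pred_a = truth + rng.normal(scale=np.where(heavy, 0.1, 1.0))
        pred_b = truth + rng.normal(scale=np.where(heavy, 1.0, 0.1))

        for name, mask in [("heavy raters", heavy), ("light raters", ~heavy)]:
            X = np.column_stack([pred_a[mask], pred_b[mask]])
            w, *_ = np.linalg.lstsq(X, truth[mask], rcond=None)
            print(name, "blend weights (A, B):", w.round(2))

    FWLS generalizes this hard segmentation by letting the blend weights vary continuously with the meta-features.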

  • Original post: https://www.cnblogs.com/rsapaper/p/7610419.html