• Feature extraction in sklearn


    Feature extraction

    https://scikit-learn.org/stable/modules/feature_extraction.html

     Extract features, in a format supported by machine learning algorithms, from datasets of text or images.

    The sklearn.feature_extraction module can be used to extract features in a format supported by machine learning algorithms from datasets consisting of formats such as text and image.

      Feature extraction is different from feature selection: feature extraction involves transforming the data, taking input of various types and converting it into numerical features.

     Feature selection, by contrast, chooses the most important subset of existing features and discards the noisy ones.

    Note

    Feature extraction is very different from Feature selection: the former consists in transforming arbitrary data, such as text or images, into numerical features usable for machine learning. The latter is a machine learning technique applied on these features.

    Loading features from dicts

     Extracts the values of dict-shaped data as feature vectors. Such dict data appears frequently in applications, so this is quite useful.

     Categorical data is encoded following the one-hot principle.

    The class DictVectorizer can be used to convert feature arrays represented as lists of standard Python dict objects to the NumPy/SciPy representation used by scikit-learn estimators.

    While not particularly fast to process, Python’s dict has the advantages of being convenient to use, being sparse (absent features need not be stored) and storing feature names in addition to values.

    DictVectorizer implements what is called one-of-K or “one-hot” coding for categorical (aka nominal, discrete) features. Categorical features are “attribute-value” pairs where the value is restricted to a list of discrete possibilities without ordering (e.g. topic identifiers, types of objects, tags, names…).

    In the following, “city” is a categorical attribute while “temperature” is a traditional numerical feature:

    >>> measurements = [
    ...     {'city': 'Dubai', 'temperature': 33.},
    ...     {'city': 'London', 'temperature': 12.},
    ...     {'city': 'San Francisco', 'temperature': 18.},
    ... ]
    
    >>> from sklearn.feature_extraction import DictVectorizer
    >>> vec = DictVectorizer()
    
    >>> vec.fit_transform(measurements).toarray()
    array([[ 1.,  0.,  0., 33.],
           [ 0.,  1.,  0., 12.],
           [ 0.,  0.,  1., 18.]])
    
    >>> vec.get_feature_names()
    ['city=Dubai', 'city=London', 'city=San Francisco', 'temperature']
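
    As an additional illustration (not part of the quoted docs), a fitted DictVectorizer silently ignores categories it has not seen during fit, while the known columns keep their positions:

    >>> vec.transform([{'city': 'Berlin', 'temperature': 20.}]).toarray()
    array([[ 0.,  0.,  0., 20.]])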

    Feature hashing

      The FeatureHasher is a very fast, low-memory vectorizer that uses the feature hashing technique.

     It is, however, not invertible.

    The class FeatureHasher is a high-speed, low-memory vectorizer that uses a technique known as feature hashing, or the “hashing trick”. Instead of building a hash table of the features encountered in training, as the vectorizers do, instances of FeatureHasher apply a hash function to the features to determine their column index in sample matrices directly. The result is increased speed and reduced memory usage, at the expense of inspectability; the hasher does not remember what the input features looked like and has no inverse_transform method.

     Hash collisions can also occur.

    Since the hash function might cause collisions between (unrelated) features, a signed hash function is used and the sign of the hash value determines the sign of the value stored in the output matrix for a feature. This way, collisions are likely to cancel out rather than accumulate error, and the expected mean of any output feature’s value is zero. This mechanism is enabled by default with alternate_sign=True and is particularly useful for small hash table sizes (n_features < 10000). For large hash table sizes, it can be disabled, to allow the output to be passed to estimators like MultinomialNB or chi2 feature selectors that expect non-negative inputs.

    https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html#sklearn.feature_extraction.FeatureHasher

    Implements feature hashing, aka the hashing trick.

    This class turns sequences of symbolic feature names (strings) into scipy.sparse matrices, using a hash function to compute the matrix column corresponding to a name. The hash function employed is the signed 32-bit version of Murmurhash3.

    Feature names of type byte string are used as-is. Unicode strings are converted to UTF-8 first, but no Unicode normalization is done. Feature values must be (finite) numbers.

    This class is a low-memory alternative to DictVectorizer and CountVectorizer, intended for large-scale (online) learning and situations where memory is tight, e.g. when running prediction code on embedded devices.

    >>> from sklearn.feature_extraction import FeatureHasher
    >>> h = FeatureHasher(n_features=10)
    >>> D = [{'dog': 1, 'cat':2, 'elephant':4},{'dog': 2, 'run': 5}]
    >>> f = h.transform(D)
    >>> f.toarray()
    array([[ 0.,  0., -4., -1.,  0.,  0.,  0.,  0.,  0.,  2.],
           [ 0.,  0.,  0., -2., -5.,  0.,  0.,  0.,  0.,  0.]])
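
    As a small sketch of the alternate_sign behaviour described above (this variant is not in the quoted docs): disabling the signed hashing stores the raw values, so every entry becomes non-negative, which is what estimators such as MultinomialNB require.

    >>> h2 = FeatureHasher(n_features=10, alternate_sign=False)
    >>> (h2.transform(D).toarray() >= 0).all()   # same columns, but no negative values
    True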

    Text feature extraction

    The Bag of Words representation

      The bag-of-words technique splits a document into individual words, treats each word as a feature, counts how many times each word occurs in the document, and applies a normalization transform.

    Text Analysis is a major application field for machine learning algorithms. However, the raw data, a sequence of symbols, cannot be fed directly to the algorithms themselves, as most of them expect numerical feature vectors with a fixed size rather than raw text documents of variable length.

    In order to address this, scikit-learn provides utilities for the most common ways to extract numerical features from text content, namely:

    • tokenizing strings and giving an integer id for each possible token, for instance by using white-spaces and punctuation as token separators.

    • counting the occurrences of tokens in each document.

    • normalizing and weighting with diminishing importance tokens that occur in the majority of samples / documents.

    In this scheme, features and samples are defined as follows:

    • each individual token occurrence frequency (normalized or not) is treated as a feature.

    • the vector of all the token frequencies for a given document is considered a multivariate sample.

    A corpus of documents can thus be represented by a matrix with one row per document and one column per token (e.g. word) occurring in the corpus.

    We call vectorization the general process of turning a collection of text documents into numerical feature vectors. This specific strategy (tokenization, counting and normalization) is called the Bag of Words or “Bag of n-grams” representation. Documents are described by word occurrences while completely ignoring the relative position information of the words in the document.

    Bag of n-grams

    https://www.thefreedictionary.com/gram

    The root "gram" denotes something written or drawn.

    -gram1, a combining form meaning “something written, drawn, or plotted” (diagram; epigram); “a written or drawn symbol or sequence of symbols” (ideogram; pentagram); “a message” (telegram); “an image or graphic record made by an instrument or as part of a diagnostic procedure” (electrocardiogram). Compare -graph.
    [< Greek -gramma, comb. form of grámma something written or drawn]

    https://zhuanlan.zhihu.com/p/32829048

       The N-gram model overcomes a drawback of tokenizing into single words: isolated words cannot preserve the relationship between a word and its neighbors.

      N-grams take several consecutive words at a time.

    1. What is the n-gram model?

    N-Gram is an algorithm based on statistical language models. Its basic idea is to slide a window of size N over the content of the text, byte by byte, producing a sequence of byte fragments of length N.

    Each fragment is called a gram. The occurrence frequency of every gram is counted and filtered against a preset threshold to form a list of key grams, which constitutes the vector feature space of the text; each distinct gram in the list is one dimension of the feature vector.

    The model is based on the assumption that the occurrence of the N-th word depends only on the preceding N-1 words and on no other word, so the probability of the whole sentence is the product of the occurrence probabilities of its words. These probabilities can be obtained by counting, directly from the corpus, how often N words occur together. The bigram (N=2) and trigram (N=3) are the most commonly used.
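
    Written out as formulas (a standard restatement of the assumption above, not taken from the quoted article), an N-gram model factorizes a sentence of m words and estimates each factor from corpus counts:

    P(w_1, \dots, w_m) \approx \prod_{i=1}^{m} P(w_i \mid w_{i-N+1}, \dots, w_{i-1}),
    \qquad
    P(w_i \mid w_{i-N+1}, \dots, w_{i-1}) = \frac{\mathrm{count}(w_{i-N+1}, \dots, w_i)}{\mathrm{count}(w_{i-N+1}, \dots, w_{i-1})}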

    That covers the concept of the n-gram model; its typical applications are discussed in the article linked above.

    Sparsity

     The bag-of-words representation is sparse and is stored in sparse matrices.

    As most documents will typically use a very small subset of the words used in the corpus, the resulting matrix will have many feature values that are zeros (typically more than 99% of them).

    For instance a collection of 10,000 short text documents (such as emails) will use a vocabulary with a size in the order of 100,000 unique words in total while each document will use 100 to 1000 unique words individually.

    In order to be able to store such a matrix in memory and also to speed up matrix / vector algebraic operations, implementations will typically use a sparse representation such as the implementations available in the scipy.sparse package.
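
    As a small illustrative sketch of this point (the sizes below are made up, not from the docs), a scipy.sparse matrix stores only the non-zero entries:

    >>> import numpy as np
    >>> from scipy import sparse
    >>> dense = np.zeros((3, 100000))        # 3 "documents" x 100,000 vocabulary terms
    >>> dense[0, 5] = 1.0; dense[1, 42] = 3.0; dense[2, 99999] = 7.0
    >>> S = sparse.csr_matrix(dense)
    >>> S.nnz                                # only the 3 non-zero counts are stored
    3
    >>> dense.nbytes                         # the dense array stores every cell: 3 * 100000 * 8 bytes
    2400000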

    Common Vectorizer usage

    CountVectorizer performs tokenization and counts the occurrences of each token.

    Use get_feature_names to obtain the set of token features.

    Use vocabulary_.get('document') to obtain the column index of a specific feature.

    Tokens not seen during fitting are ignored by the transform operation; their counts are set to 0.

    CountVectorizer implements both tokenization and occurrence counting in a single class:

    >>> from sklearn.feature_extraction.text import CountVectorizer
    

    This model has many parameters, however the default values are quite reasonable (please see the reference documentation for the details):

    >>> vectorizer = CountVectorizer()
    >>> vectorizer
    CountVectorizer()
    

    Let’s use it to tokenize and count the word occurrences of a minimalistic corpus of text documents:

    >>> corpus = [
    ...     'This is the first document.',
    ...     'This is the second second document.',
    ...     'And the third one.',
    ...     'Is this the first document?',
    ... ]
    >>> X = vectorizer.fit_transform(corpus)
    >>> X
    <4x9 sparse matrix of type '<... 'numpy.int64'>'
        with 19 stored elements in Compressed Sparse ... format>
    

    The default configuration tokenizes the string by extracting words of at least 2 letters. The specific function that does this step can be requested explicitly:

    >>> analyze = vectorizer.build_analyzer()
    >>> analyze("This is a text document to analyze.") == (
    ...     ['this', 'is', 'text', 'document', 'to', 'analyze'])
    True
    

    Each term found by the analyzer during the fit is assigned a unique integer index corresponding to a column in the resulting matrix. This interpretation of the columns can be retrieved as follows:

    >>> vectorizer.get_feature_names() == (
    ...     ['and', 'document', 'first', 'is', 'one',
    ...      'second', 'the', 'third', 'this'])
    True
    
    >>> X.toarray()
    array([[0, 1, 1, 1, 0, 0, 1, 0, 1],
           [0, 1, 0, 1, 0, 2, 1, 0, 1],
           [1, 0, 0, 0, 1, 0, 1, 1, 0],
           [0, 1, 1, 1, 0, 0, 1, 0, 1]]...)
    

    The converse mapping from feature name to column index is stored in the vocabulary_ attribute of the vectorizer:

    >>> vectorizer.vocabulary_.get('document')
    1
    

    Hence words that were not seen in the training corpus will be completely ignored in future calls to the transform method:

    >>> vectorizer.transform(['Something completely new.']).toarray()
    array([[0, 0, 0, 0, 0, 0, 0, 0, 0]]...)
    

    Note that in the previous corpus, the first and the last documents have exactly the same words hence are encoded in equal vectors. In particular we lose the information that the last document is an interrogative form. To preserve some of the local ordering information we can extract 2-grams of words in addition to the 1-grams (individual words):

    >>> bigram_vectorizer = CountVectorizer(ngram_range=(1, 2),
    ...                                     token_pattern=r'\b\w+\b', min_df=1)
    >>> analyze = bigram_vectorizer.build_analyzer()
    >>> analyze('Bi-grams are cool!') == (
    ...     ['bi', 'grams', 'are', 'cool', 'bi grams', 'grams are', 'are cool'])
    True
    
    

    The vocabulary extracted by this vectorizer is hence much bigger and can now resolve ambiguities encoded in local positioning patterns:

    >>> X_2 = bigram_vectorizer.fit_transform(corpus).toarray()
    >>> X_2
    array([[0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0],
           [0, 0, 1, 0, 0, 1, 1, 0, 0, 2, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0],
           [1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0],
           [0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1]]...)
    

    In particular the interrogative form “Is this” is only present in the last document:

    >>> feature_index = bigram_vectorizer.vocabulary_.get('is this')
    >>> X_2[:, feature_index]
    array([0, 0, 0, 1]...)
    
    

    Tf–idf term weighting

     On top of the raw word counts, each word's feature value is converted into its frequency within the document, which makes long and short documents comparable.

    At the same time, a word that appears in every document carries very little information; it is a trivial term, so the inverse document frequency is introduced to scale down the importance of such words.

    In a large text corpus, some words will be very present (e.g. “the”, “a”, “is” in English) hence carrying very little meaningful information about the actual contents of the document. If we were to feed the direct count data directly to a classifier those very frequent terms would shadow the frequencies of rarer yet more interesting terms.

    In order to re-weight the count features into floating point values suitable for usage by a classifier it is very common to use the tf–idf transform.

    >>> from sklearn.feature_extraction.text import TfidfTransformer
    >>> transformer = TfidfTransformer(smooth_idf=False)
    >>> transformer
    TfidfTransformer(smooth_idf=False)

    Let’s take an example with the following counts. The first term is present 100% of the time, hence not very interesting. The other two features are present in less than 50% of the documents and hence are probably more representative of their content:

    >>> counts = [[3, 0, 1],
    ...           [2, 0, 0],
    ...           [3, 0, 0],
    ...           [4, 0, 0],
    ...           [3, 2, 0],
    ...           [3, 0, 2]]
    ...
    >>> tfidf = transformer.fit_transform(counts)
    >>> tfidf
    <6x3 sparse matrix of type '<... 'numpy.float64'>'
        with 9 stored elements in Compressed Sparse ... format>
    
    >>> tfidf.toarray()
    array([[0.81940995, 0.        , 0.57320793],
           [1.        , 0.        , 0.        ],
           [1.        , 0.        , 0.        ],
           [1.        , 0.        , 0.        ],
           [0.47330339, 0.88089948, 0.        ],
           [0.58149261, 0.        , 0.81355169]])
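
    The first row above can be reproduced by hand; the sketch below assumes smooth_idf=False (as configured above) and the default L2 normalization, i.e. idf(t) = ln(n / df(t)) + 1:

    import numpy as np

    counts = np.array([[3, 0, 1],
                       [2, 0, 0],
                       [3, 0, 0],
                       [4, 0, 0],
                       [3, 2, 0],
                       [3, 0, 2]], dtype=float)

    n = counts.shape[0]              # 6 documents
    df = (counts > 0).sum(axis=0)    # document frequencies: [6, 1, 2]
    idf = np.log(n / df) + 1         # [1.0, 2.79..., 2.09...]
    row0 = counts[0] * idf           # raw tf-idf of the first document
    row0 /= np.linalg.norm(row0)     # L2 normalization
    print(row0)                      # ~[0.8194, 0., 0.5732], matching the first row above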

    The inverse document frequency weights learned by the transformer (note: the values shown below correspond to the default smooth_idf=True setting rather than the smooth_idf=False transformer constructed above):

    >>> transformer.idf_
    array([1. ..., 2.25..., 1.84...])

    An all-in-one tool that combines counting, term-frequency weighting, and inverse-document-frequency weighting.

    As tf–idf is very often used for text features, there is also another class called TfidfVectorizer that combines all the options of CountVectorizer and TfidfTransformer in a single model:

    >>> from sklearn.feature_extraction.text import TfidfVectorizer
    >>> vectorizer = TfidfVectorizer()
    >>> vectorizer.fit_transform(corpus)
    <4x9 sparse matrix of type '<... 'numpy.float64'>'
        with 19 stored elements in Compressed Sparse ... format>

    Decoding text files

    Text is made of characters, but files are made of bytes. These bytes represent characters according to some encoding. To work with text files in Python, their bytes must be decoded to a character set called Unicode. Common encodings are ASCII, Latin-1 (Western Europe), KOI8-R (Russian) and the universal encodings UTF-8 and UTF-16. Many others exist.

    The text feature extractors in scikit-learn know how to decode text files, but only if you tell them what encoding the files are in. The CountVectorizer takes an encoding parameter for this purpose. For modern text files, the correct encoding is probably UTF-8, which is therefore the default (encoding="utf-8").

    import chardet
    # German text encoded as UTF-8, Latin-1 and UTF-16 (with BOM) bytes respectively:
    text1 = b"Sei mir gegr\xc3\xbc\xc3\x9ft mein Sauerkraut"
    text2 = b"holdselig sind deine Ger\xfcche"
    text3 = b"\xff\xfeA\x00u\x00f\x00 \x00F\x00l\x00\xfc\x00g\x00e\x00l\x00n\x00 \x00d\x00e\x00s\x00 \x00G\x00e\x00s\x00a\x00n\x00g\x00e\x00s\x00,\x00 \x00H\x00e\x00r\x00z\x00l\x00i\x00e\x00b\x00c\x00h\x00e\x00n\x00,\x00 \x00t\x00r\x00a\x00g\x00 \x00i\x00c\x00h\x00 \x00d\x00i\x00c\x00h\x00 \x00f\x00o\x00r\x00t\x00"
    # guess each encoding with chardet, decode to unicode, then vectorize
    decoded = [x.decode(chardet.detect(x)['encoding'])
               for x in (text1, text2, text3)]
    v = CountVectorizer().fit(decoded).vocabulary_
    for term in v: print(term)
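
    If the encoding is already known, it can instead be passed to the vectorizer itself; the snippet below is only an illustrative sketch using CountVectorizer's encoding and decode_error parameters:

    >>> vec = CountVectorizer(encoding='latin-1', decode_error='replace')
    >>> X = vec.fit_transform([b"holdselig sind deine Ger\xfcche"])
    >>> vec.get_feature_names()
    ['deine', 'gerüche', 'holdselig', 'sind']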

    Applications and examples

    The bag of words representation is quite simplistic but surprisingly useful in practice.

    In particular, in a supervised setting it can be successfully combined with fast and scalable linear models to train document classifiers.

    In an unsupervised setting it can be used to group similar documents together by applying clustering algorithms such as K-means.

    Finally, it is possible to discover the main topics of a corpus by relaxing the hard assignment constraint of clustering, for instance by using Non-negative matrix factorization (NMF or NNMF).

    TfidfVectorizer

    https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html#sklearn.feature_extraction.text.TfidfVectorizer

    Convert a collection of raw documents to a matrix of TF-IDF features.

    Equivalent to CountVectorizer followed by TfidfTransformer.

    >>> from sklearn.feature_extraction.text import TfidfVectorizer
    >>> corpus = [
    ...     'This is the first document.',
    ...     'This document is the second document.',
    ...     'And this is the third one.',
    ...     'Is this the first document?',
    ... ]
    >>> vectorizer = TfidfVectorizer()
    >>> X = vectorizer.fit_transform(corpus)
    >>> print(vectorizer.get_feature_names())
    ['and', 'document', 'first', 'is', 'one', 'second', 'the', 'third', 'this']
    >>> print(X.shape)
    (4, 9)

    TfidfTransformer

    https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html#sklearn.feature_extraction.text.TfidfTransformer

    Used in combination with CountVectorizer.

    Transform a count matrix to a normalized tf or tf-idf representation

    Tf means term-frequency while tf-idf means term-frequency times inverse document-frequency. This is a common term weighting scheme in information retrieval, that has also found good use in document classification.

    The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus.

    The formula that is used to compute the tf-idf for a term t of a document d in a document set is tf-idf(t, d) = tf(t, d) * idf(t), and the idf is computed as idf(t) = log [ n / df(t) ] + 1 (if smooth_idf=False), where n is the total number of documents in the document set and df(t) is the document frequency of t; the document frequency is the number of documents in the document set that contain the term t. The effect of adding “1” to the idf in the equation above is that terms with zero idf, i.e., terms that occur in all documents in a training set, will not be entirely ignored. (Note that the idf formula above differs from the standard textbook notation that defines the idf as idf(t) = log [ n / (df(t) + 1) ]).

    If smooth_idf=True (the default), the constant “1” is added to the numerator and denominator of the idf as if an extra document was seen containing every term in the collection exactly once, which prevents zero divisions: idf(t) = log [ (1 + n) / (1 + df(t)) ] + 1.
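
    The same definitions written compactly as formulas (natural logarithm, as used by scikit-learn):

    \text{tf-idf}(t, d) = \text{tf}(t, d) \cdot \text{idf}(t)
    \text{idf}(t) = \ln\frac{n}{\text{df}(t)} + 1 \qquad (\texttt{smooth\_idf=False})
    \text{idf}(t) = \ln\frac{1 + n}{1 + \text{df}(t)} + 1 \qquad (\texttt{smooth\_idf=True}, \text{the default})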

    Furthermore, the formulas used to compute tf and idf depend on parameter settings that correspond to the SMART notation used in IR as follows:

    Tf is “n” (natural) by default, “l” (logarithmic) when sublinear_tf=True. Idf is “t” when use_idf is given, “n” (none) otherwise. Normalization is “c” (cosine) when norm='l2', “n” (none) when norm=None.

    >>> from sklearn.feature_extraction.text import TfidfTransformer
    >>> from sklearn.feature_extraction.text import CountVectorizer
    >>> from sklearn.pipeline import Pipeline
    >>> import numpy as np
    >>> corpus = ['this is the first document',
    ...           'this document is the second document',
    ...           'and this is the third one',
    ...           'is this the first document']
    >>> vocabulary = ['this', 'document', 'first', 'is', 'second', 'the',
    ...               'and', 'one']
    >>> pipe = Pipeline([('count', CountVectorizer(vocabulary=vocabulary)),
    ...                  ('tfid', TfidfTransformer())]).fit(corpus)
    >>> pipe['count'].transform(corpus).toarray()
    array([[1, 1, 1, 1, 0, 1, 0, 0],
           [1, 2, 0, 1, 1, 1, 0, 0],
           [1, 0, 0, 1, 0, 1, 1, 1],
           [1, 1, 1, 1, 0, 1, 0, 0]])
    >>> pipe['tfid'].idf_
    array([1.        , 1.22314355, 1.51082562, 1.        , 1.91629073,
           1.        , 1.91629073, 1.91629073])
    >>> pipe.transform(corpus).shape
    (4, 8)
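
    As a quick illustrative check of the "equivalent to CountVectorizer followed by TfidfTransformer" statement above (tfidf_vec is just a throwaway name), a TfidfVectorizer built with the same vocabulary should reproduce the pipeline's output:

    >>> from sklearn.feature_extraction.text import TfidfVectorizer
    >>> tfidf_vec = TfidfVectorizer(vocabulary=vocabulary)
    >>> np.allclose(tfidf_vec.fit_transform(corpus).toarray(),
    ...             pipe.transform(corpus).toarray())
    True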

    HashingVectorizer

    https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.HashingVectorizer.html#sklearn.feature_extraction.text.HashingVectorizer

    Like the tf-idf vectorizer, this also converts documents into a matrix of token features, but it uses hashing so that the dimensionality of the feature space can be fixed before training. This is helpful for online learning and for very large document collections.

    Convert a collection of text documents to a matrix of token occurrences

    It turns a collection of text documents into a scipy.sparse matrix holding token occurrence counts (or binary occurrence information), possibly normalized as token frequencies if norm='l1' or projected on the euclidean unit sphere if norm='l2'.

    This text vectorizer implementation uses the hashing trick to find the token string name to feature integer index mapping.

    This strategy has several advantages:

    • it is very low memory scalable to large datasets as there is no need to store a vocabulary dictionary in memory

    • it is fast to pickle and un-pickle as it holds no state besides the constructor parameters

    • it can be used in a streaming (partial fit) or parallel pipeline as there is no state computed during fit.

    There are also a couple of cons (vs using a CountVectorizer with an in-memory vocabulary):

    • there is no way to compute the inverse transform (from feature indices to string feature names) which can be a problem when trying to introspect which features are most important to a model.

    • there can be collisions: distinct tokens can be mapped to the same feature index. However in practice this is rarely an issue if n_features is large enough (e.g. 2 ** 18 for text classification problems).

    • no IDF weighting as this would render the transformer stateful.

    >>> from sklearn.feature_extraction.text import HashingVectorizer
    >>> corpus = [
    ...     'This is the first document.',
    ...     'This document is the second document.',
    ...     'And this is the third one.',
    ...     'Is this the first document?',
    ... ]
    >>> vectorizer = HashingVectorizer(n_features=2**4)
    >>> X = vectorizer.fit_transform(corpus)
    >>> print(X.shape)
    (4, 16)
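
    Because it is stateless, HashingVectorizer fits naturally into the streaming / out-of-core setting mentioned in the advantages above. The following is only an illustrative sketch (the mini-batches, labels and variable names are made up), pairing it with an incremental learner:

    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    vectorizer = HashingVectorizer(n_features=2**18)
    clf = SGDClassifier(random_state=0)

    batches = [
        (['free money now', 'meeting at noon'], [1, 0]),
        (['cheap pills today', 'project status update'], [1, 0]),
    ]
    for texts, labels in batches:
        X_batch = vectorizer.transform(texts)             # stateless: no fit / vocabulary needed
        clf.partial_fit(X_batch, labels, classes=[0, 1])  # incremental update per mini-batch

    print(clf.predict(vectorizer.transform(['free cheap pills'])))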

    Image feature extraction

      Somewhat like a selection tool in an image editor, this picks out regions of an image; each extracted region is called a patch.

     The original image can also be reconstructed from its patches.

    The extract_patches_2d function extracts patches from an image stored as a two-dimensional array, or three-dimensional with color information along the third axis. For rebuilding an image from all its patches, use reconstruct_from_patches_2d. For example let us generate a 4x4 pixel picture with 3 color channels (e.g. in RGB format):

    >>> import numpy as np
    >>> from sklearn.feature_extraction import image
    
    >>> one_image = np.arange(4 * 4 * 3).reshape((4, 4, 3))
    >>> one_image[:, :, 0]  # R channel of a fake RGB picture
    array([[ 0,  3,  6,  9],
           [12, 15, 18, 21],
           [24, 27, 30, 33],
           [36, 39, 42, 45]])
    
    >>> patches = image.extract_patches_2d(one_image, (2, 2), max_patches=2,
    ...     random_state=0)
    >>> patches.shape
    (2, 2, 2, 3)
    >>> patches[:, :, :, 0]
    array([[[ 0,  3],
            [12, 15]],
    
           [[15, 18],
            [27, 30]]])
    >>> patches = image.extract_patches_2d(one_image, (2, 2))
    >>> patches.shape
    (9, 2, 2, 3)
    >>> patches[4, :, :, 0]
    array([[15, 18],
           [27, 30]])

    Let us now try to reconstruct the original image from the patches by averaging on overlapping areas:

    >>> reconstructed = image.reconstruct_from_patches_2d(patches, (4, 4, 3))
    >>> np.testing.assert_array_equal(one_image, reconstructed)
    

    The PatchExtractor class works in the same way as extract_patches_2d, only it supports multiple images as input. It is implemented as an estimator, so it can be used in pipelines. See:

    >>> five_images = np.arange(5 * 4 * 4 * 3).reshape(5, 4, 4, 3)
    >>> patches = image.PatchExtractor(patch_size=(2, 2)).transform(five_images)
    >>> patches.shape
    (45, 2, 2, 3)
    