• 【458】Keras Text Vectorization


    Related classes and methods:

    • from keras.preprocessing.text import Tokenizer
    • Tokenizer: a text tokenization utility class. It can vectorize a text corpus in two ways: by turning each text into a sequence of integers (each integer being the index of a token in a dictionary), or by turning it into a vector where the coefficient for each token can be binary, a word count, a TF-IDF weight, etc.
      • num_words: the maximum number of words to keep, based on word frequency. Only the most common num_words words will be kept.
    • tokenizer.fit_on_texts(): updates the internal vocabulary based on a list of texts.
    • tokenizer.texts_to_sequences(): transforms each text in texts into a sequence of integers. Only the top num_words most frequent words, and only words known by the tokenizer, will be taken into account.
    • tokenizer.word_index: dict mapping each word to its integer index ({word: index}). See the toy sketch after this list.
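
    A quick toy run (not part of the original post; the two example sentences are made up) shows what these calls return:

    from keras.preprocessing.text import Tokenizer

    toy_texts = ['The cat sat on the mat.', 'The dog ate my homework.']
    toy_tokenizer = Tokenizer(num_words=100)
    toy_tokenizer.fit_on_texts(toy_texts)               # build the word -> index vocabulary
    # note: word_index keeps every word ever seen, not just the top num_words
    print(toy_tokenizer.word_index)                     # e.g. {'the': 1, 'cat': 2, 'sat': 3, ...}
    print(toy_tokenizer.texts_to_sequences(toy_texts))  # e.g. [[1, 2, 3, 4, 1, 5], [1, 6, 7, 8, 9]]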
      
      
    import os
    imdb_dir = r"D:Deep LearningDataIMDBaclImdbaclImdb"
    train_dir = os.path.join(imdb_dir, 'train')
    
    labels = []
    texts = []
    
    for label_type in ['neg', 'pos']:
        dir_name = os.path.join(train_dir, label_type)
        for fname in os.listdir(dir_name):
            if fname[-4:] == '.txt':
                with open(os.path.join(dir_name, fname), encoding='UTF-8') as f:
                    texts.append(f.read())
                if label_type == 'neg':
                    labels.append(0)
                else:
                    labels.append(1)
    
    from keras.preprocessing.text import Tokenizer
    from keras.preprocessing.sequence import pad_sequences
    import numpy as np
    
    maxlen = 100                  # cut reviews off after 100 words
    training_samples = 200        # number of samples to train on
    validation_samples = 10000    # number of samples to validate on
    max_words = 10000             # consider only the top 10,000 words in the dataset
    
    """
    Text tokenization utility class.
    
    This class allows to vectorize a text corpus, by turning each
    text into either a sequence of integers (each integer being the index
    of a token in a dictionary) or into a vector where the coefficient
    for each token could be binary, based on word count, based on tf-idf...
    
    # Arguments
        num_words: the maximum number of words to keep, based
            on word frequency. Only the most common `num_words` words will
            be kept.
    """
    tokenizer = Tokenizer(num_words=max_words)
    # Updates internal vocabulary based on a list of texts.
    tokenizer.fit_on_texts(texts)
    # Transforms each text in texts into a sequence of integers.
    # Only top "num_words" most frequent words will be taken into account.
    # Only words known by the tokenizer will be taken into account.
    sequences = tokenizer.texts_to_sequences(texts)
    # dict {word: index}
    word_index = tokenizer.word_index
    
    print('Found %s unique tokens.' % len(word_index))
    
    # Pad (or truncate) every sequence to maxlen tokens, giving a 2D integer array of shape (num_samples, maxlen).
    data = pad_sequences(sequences, maxlen=maxlen)
    print('Shape of data tensor:', data.shape)
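
    The variables training_samples and validation_samples defined above are not used in the snippet shown. A minimal sketch of the train/validation split they suggest (an assumption about how the post continues, using the usual shuffle-then-slice pattern):

    labels = np.asarray(labels)

    # the reviews were read in neg-then-pos order, so shuffle before slicing
    indices = np.arange(data.shape[0])
    np.random.shuffle(indices)
    data = data[indices]
    labels = labels[indices]

    x_train = data[:training_samples]
    y_train = labels[:training_samples]
    x_val = data[training_samples: training_samples + validation_samples]
    y_val = labels[training_samples: training_samples + validation_samples]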
    
     
  • Original post: https://www.cnblogs.com/alex-bn-lee/p/12293890.html