• Sentiment Analysis and Topic Extraction


    # Simulate a business scenario: classify a few reviews
    # (model is the NaiveBayesClassifier trained earlier in the article)
    reviews = [
        'It is an amazing movie.',
        'This is a dull movie. I would never recommend it to anyone.',
        'The cinematography is pretty great in this movie.',
        'The direction was terrible and the story was all over the place.']
    for review in reviews:
        # Word-presence features: map each token to True
        sample = {}
        words = review.split()
        for word in words:
            sample[word] = True
        pcls = model.classify(sample)
        print(review, '->', pcls)
    
    
    Output:
    0.735
    It is an amazing movie. -> POSITIVE
    This is a dull movie. I would never recommend it to anyone. -> NEGATIVE
    The cinematography is pretty great in this movie. -> POSITIVE
    The direction was terrible and the story was all over the place. -> NEGATIVE
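    The classifier above scores each review from word-presence features alone. As a rough illustration of the idea behind it, a minimal Naive Bayes text classifier can be sketched in plain Python — toy training data assumed here, not the NLTK implementation or the movie_reviews corpus used in the article:

    ```python
    import math
    from collections import defaultdict

    # Toy training data: (text, label) pairs. Purely illustrative.
    train = [
        ('amazing great movie', 'POSITIVE'),
        ('great story great acting', 'POSITIVE'),
        ('dull terrible movie', 'NEGATIVE'),
        ('terrible story never again', 'NEGATIVE'),
    ]

    # Count word occurrences per class and build the vocabulary.
    word_counts = defaultdict(lambda: defaultdict(int))
    class_counts = defaultdict(int)
    vocab = set()
    for text, label in train:
        class_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)

    def classify(text):
        """Pick the class maximizing log P(class) + sum of log P(word|class)."""
        best_label, best_score = None, float('-inf')
        for label in class_counts:
            # Log prior from class frequencies.
            score = math.log(class_counts[label] / len(train))
            total = sum(word_counts[label].values())
            for word in text.split():
                # Add-one (Laplace) smoothing over the vocabulary.
                score += math.log((word_counts[label][word] + 1)
                                  / (total + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    print(classify('an amazing movie'))   # -> POSITIVE
    print(classify('a dull movie'))       # -> NEGATIVE
    ```

    Unseen words ('an', 'a') contribute equally to both classes thanks to smoothing, so the seen words decide the outcome.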
    '''
    Topic extraction: after tokenization, word cleanup, and stemming, the core
    topic words of a passage can be extracted (e.g. based on the TF-IDF
    algorithm), which reveals the text's topic. This is unsupervised learning.
    The gensim module provides the common topic-extraction tools.
    Topic-extraction APIs:
    import gensim.models.ldamodel as gm
    import gensim.corpora as gc

    # Store every word from the token lists in a gc Dictionary object,
    # which assigns each word an integer id.
    line_tokens = ['hello', 'world', ...]
    dic = gc.Dictionary([line_tokens])   # expects a list of token lists
    # Build a bag-of-words from the dictionary
    bow = dic.doc2bow(line_tokens)

    # Build the LDA model
    # bow: bag-of-words corpus
    # num_topics: number of topics
    # id2word: dictionary
    # passes: number of training passes over the corpus
    model = gm.LdaModel(bow, num_topics=n_topics, id2word=dic, passes=25)
    # Print the 4 words that contribute most to each topic
    topics = model.print_topics(num_topics=n_topics, num_words=4)
    '''
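    What `Dictionary` and `doc2bow` produce is simple: each unique token gets an integer id, and a document becomes a sparse list of (token_id, count) pairs. A minimal plain-Python illustration of that encoding (not gensim itself; gensim's id numbering may differ, the structure is the same):

    ```python
    from collections import Counter

    # Two toy tokenized documents.
    docs = [['hello', 'world'], ['hello', 'hello', 'lda']]

    # Assign integer ids in first-seen order, like a gensim Dictionary.
    token2id = {}
    for doc in docs:
        for token in doc:
            if token not in token2id:
                token2id[token] = len(token2id)

    def doc2bow(doc):
        """Re-encode a document as sorted sparse (token_id, count) pairs."""
        counts = Counter(doc)
        return sorted((token2id[t], n) for t, n in counts.items())

    print(token2id)            # {'hello': 0, 'world': 1, 'lda': 2}
    print(doc2bow(docs[1]))    # [(0, 2), (2, 1)]
    ```

    This sparse representation is what the LDA model consumes: a list of such rows, one per document.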

    import nltk.tokenize as tk
    import nltk.corpus as nc
    import nltk.stem.snowball as sb
    import gensim.models.ldamodel as gm
    import gensim.corpora as gc

    doc = []
    with open('../ml_data/topic.txt', 'r') as f:
        for line in f.readlines():
            doc.append(line[:-1])   # strip the trailing newline
    tokenizer = tk.WordPunctTokenizer()
    stopwords = nc.stopwords.words('english')
    signs = [',', '.', '!']
    stemmer = sb.SnowballStemmer('english')
    lines_tokens = []
    for line in doc:
        tokens = tokenizer.tokenize(line.lower())
        line_tokens = []
        for token in tokens:
            if token not in stopwords and token not in signs:
                token = stemmer.stem(token)
                line_tokens.append(token)
        lines_tokens.append(line_tokens)
    # Store every word from lines_tokens in a gc Dictionary object,
    # which assigns each word an integer id.
    dic = gc.Dictionary(lines_tokens)
    # Walk each line and build the bag-of-words list
    bow = []
    for line_tokens in lines_tokens:
        row = dic.doc2bow(line_tokens)
        bow.append(row)
    n_topics = 2
    # Build the LDA model from the bag-of-words corpus, the number of
    # topics, the dictionary, and the number of training passes
    model = gm.LdaModel(bow, num_topics=n_topics, id2word=dic, passes=25)
    # Print the 4 words that contribute most to each topic
    topics = model.print_topics(num_topics=n_topics, num_words=4)
    print(topics)
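    The note above also mentions TF-IDF as a basis for picking out core words. A minimal TF-IDF computation can be sketched in plain Python over a hypothetical stemmed corpus (toy data assumed; the script above reads topic.txt instead):

    ```python
    import math
    from collections import Counter

    # Toy tokenized, stemmed documents (assumed for illustration).
    docs = [
        ['galaxi', 'star', 'telescop', 'star'],
        ['cricket', 'match', 'star', 'player'],
        ['galaxi', 'orbit', 'telescop'],
    ]

    def tf_idf(docs):
        """tf-idf(t, d) = (count(t, d) / len(d)) * log(N / df(t))."""
        n = len(docs)
        # Document frequency: in how many documents each term appears.
        df = Counter()
        for doc in docs:
            df.update(set(doc))
        result = []
        for doc in docs:
            tf = Counter(doc)
            result.append({t: (c / len(doc)) * math.log(n / df[t])
                           for t, c in tf.items()})
        return result

    scores = tf_idf(docs)
    # The highest-scoring word in the first document is its core word.
    print(max(scores[0], key=scores[0].get))   # -> star
    ```

    Unlike LDA, which infers latent topics across the whole corpus, TF-IDF simply ranks words that are frequent in one document but rare in the others.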
  • Original article: https://www.cnblogs.com/yuxiangyang/p/11240475.html