• 【Text Classification - 02】textCNN


    Contents

    1. Overview
    2. Dataset
    3. Main code

    1. Overview

    This text-classification series will consist of roughly 8 articles. The code can be downloaded directly from GitHub and the training data from Baidu Cloud; import the project into PyCharm and it is ready to run. It covers text classification based on word2vec pre-trained embeddings as well as classification based on recent pre-trained models (ELMo, BERT, etc.). The full series is:

    word2vec预训练词向量

    textCNN 模型

    charCNN 模型

    Bi-LSTM 模型

    Bi-LSTM + Attention 模型

    Transformer 模型

    ELMo 预训练模型

    BERT 预训练模型

       

    Model introduction: the textCNN architecture

    textCNN can be viewed as a way of modelling n-grams. For an introduction, see the original paper, Convolutional Neural Networks for Sentence Classification: the three filter sizes proposed there can be regarded as corresponding to 3-grams, 4-grams and 5-grams. The overall structure is as follows: convolution kernels of different sizes (3, 4, 5) first extract features, max pooling is then applied, and finally the features extracted by the different kernel sizes are concatenated into the feature vector that is fed into the softmax layer.
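    To make the shapes concrete, here is a minimal NumPy-only sketch (not taken from the repository) of what each filter-size "branch" produces and how the branches are concatenated, assuming sequenceLength = 200, numFilters = 128 and filterSizes = [3, 4, 5]:

    import numpy as np

    sequence_length, num_filters = 200, 128
    filter_sizes = [3, 4, 5]  # each size behaves like an n-gram detector

    pooled = []
    for k in filter_sizes:
        # a VALID convolution over the sequence leaves (sequence_length - k + 1) positions,
        # each holding num_filters feature maps (random numbers stand in for the conv output)
        conv_out = np.random.randn(sequence_length - k + 1, num_filters)
        # max-over-time pooling keeps a single value per filter
        pooled.append(conv_out.max(axis=0))

    features = np.concatenate(pooled)
    print(features.shape)  # (384,) == num_filters * len(filter_sizes); this vector feeds the classifier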

    2. Dataset

    The dataset is the IMDB movie-review dataset. There are three data files under /data/rawData: unlabeledTrainData.tsv, labeledTrainData.tsv and testData.tsv. Text classification itself requires labelled data (labeledTrainData), but when training the word2vec embedding model (unsupervised learning) the unlabelled data can be used as well.

    Training data: https://pan.baidu.com/s/1-XEwx1ai8kkGsMagIFKX_g (extraction code: rtz8)
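    The configuration in section 3.1 below reads a preprocessed file (../data/preProcess/labeledTrain.csv) rather than the raw tsv. As a hedged sketch only (the cleaning steps actually used by the repository may differ), the conversion could look like this:

    import re
    import pandas as pd

    # read the raw labelled reviews (id, sentiment, review) from the tsv file
    raw = pd.read_csv("../data/rawData/labeledTrainData.tsv", sep="\t", quoting=3)
    # strip HTML tags and non-letter characters and lower-case everything (illustrative cleaning only)
    raw["review"] = raw["review"].apply(lambda t: re.sub(r"<[^>]+>|[^a-zA-Z ]", " ", t).lower())
    raw[["review", "sentiment"]].to_csv("../data/preProcess/labeledTrain.csv", index=False)

    Note that the multi-class branch of _readData additionally expects a rate column; for the binary setup used here, review and sentiment are enough.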

    3. Main code

    3.1 Training configuration: parameter_config.py

    # All the imports needed are collected here for later reuse; after converting to Jupyter they can be used directly
    # 1 Training parameters
    class TrainingConfig(object):
        epoches = 5
        evaluateEvery = 100
        checkpointEvery = 100
        learningRate = 0.001
    class ModelConfig(object):
        embeddingSize = 200
        numFilters = 128
        filterSizes = [2, 3, 4, 5]
        dropoutKeepProb = 0.5
        l2RegLambda = 0.0

    # Both of the above are wrapped in a single Config class so they are convenient to access later
    class Config(object):
        sequenceLength = 200  # roughly the mean length of all sequences
        batchSize = 128
        dataSource = "../data/preProcess/labeledTrain.csv"
        stopWordSource = "../data/english"
        numClasses = 1  # set to 1 for binary classification, or to the number of classes for multi-class
        rate = 0.8  # proportion of the data used for training
        training = TrainingConfig()
        model = ModelConfig()
    # Instantiate the configuration object
    # config = Config()

    3.2 Building the training data: get_train_data.py

    1) Load the data, split each sentence into words, and remove low-frequency words and stop words.

    2) Map words to indices, build a word-to-index dictionary, and save it in JSON format so it can be reused at inference time. (Note: some words may not exist in the pre-trained word2vec vocabulary; such words are simply represented as UNK.)

    3) Read the word vectors from the pre-trained word2vec model and feed them into the model as initial values.

    4) Split the dataset into a training set and a test set.

    # Author:yifan
    import json
    from collections import Counter
    import gensim
    import pandas as pd
    import numpy as np
    import parameter_config

    # 2 Data-preprocessing class: builds the training and evaluation sets
    class Dataset(object):
        def __init__(self, config):
            self.config = config
            self._dataSource = config.dataSource
            self._stopWordSource = config.stopWordSource
            self._sequenceLength = config.sequenceLength  # every input sequence is padded/truncated to this fixed length
            self._embeddingSize = config.model.embeddingSize
            self._batchSize = config.batchSize
            self._rate = config.rate
            self._stopWordDict = {}
            self.trainReviews = []
            self.trainLabels = []
            self.evalReviews = []
            self.evalLabels = []
            self.wordEmbedding = None
            self.labelList = []

        def _readData(self, filePath):
            """
            Read the dataset from a csv file (the comments below describe the file used in this experiment).
            """
            df = pd.read_csv(filePath)  # the file has three columns: review, sentiment and rate
            if self.config.numClasses == 1:
                labels = df["sentiment"].tolist()  # the sentiment column: a 0/1 sequence with 25000 entries
            elif self.config.numClasses > 1:
                labels = df["rate"].tolist()  # not used here, since numClasses keeps this run binary
            review = df["review"].tolist()
            reviews = [line.strip().split() for line in review]  # split each review on whitespace
            return reviews, labels

        def _labelToIndex(self, labels, label2idx):
            """
            Convert labels to their index representation.
            """
            labelIds = [label2idx[label] for label in labels]  # here labels == labelIds, since the labels are already 0/1
            return labelIds

        def _wordToIndex(self, reviews, word2idx):
            """Convert words to indices."""
            reviewIds = [[word2idx.get(item, word2idx["UNK"]) for item in review] for review in reviews]
            # print(max(max(reviewIds)))
            # print(reviewIds)
            return reviewIds  # 25000 index sequences

        def _genTrainEvalData(self, x, y, word2idx, rate):
            """Generate the training and evaluation sets."""
            reviews = []
            # print(self._sequenceLength)
            # print(len(x))
            for review in x:  # _sequenceLength is 200: longer reviews are truncated, shorter ones padded; x still holds 25000 entries
                if len(review) >= self._sequenceLength:
                    reviews.append(review[:self._sequenceLength])
                else:
                    reviews.append(review + [word2idx["PAD"]] * (self._sequenceLength - len(review)))
                    # print(len(review + [word2idx["PAD"]] * (self._sequenceLength - len(review))))
            # split into training and evaluation data according to rate:
            trainIndex = int(len(x) * rate)
            trainReviews = np.asarray(reviews[:trainIndex], dtype="int64")
            trainLabels = np.array(y[:trainIndex], dtype="float32")
            evalReviews = np.asarray(reviews[trainIndex:], dtype="int64")
            evalLabels = np.array(y[trainIndex:], dtype="float32")
            return trainReviews, trainLabels, evalReviews, evalLabels

        def _getWordEmbedding(self, words):
            """Look up the pre-trained word2vec vector for every word in our vocabulary.
            Returns the words and their vectors (200 dimensions); PAD (all zeros) and UNK (random values) are prepended.
            """
            wordVec = gensim.models.KeyedVectors.load_word2vec_format("../word2vec/word2Vec.bin", binary=True)
            vocab = []
            wordEmbedding = []
            # add "PAD" and "UNK"
            vocab.append("PAD")
            vocab.append("UNK")
            wordEmbedding.append(np.zeros(self._embeddingSize))  # _embeddingSize is 200 in this article
            wordEmbedding.append(np.random.randn(self._embeddingSize))
            # print(wordEmbedding)
            for word in words:
                try:
                    vector = wordVec.wv[word]
                    vocab.append(word)
                    wordEmbedding.append(vector)
                except:
                    print(word + " is not in the word2vec vocabulary")
            # print(vocab[:3], wordEmbedding[:3])
            return vocab, np.array(wordEmbedding)

        def _genVocabulary(self, reviews, labels):
            """Generate the word embeddings and the word-to-index dictionary; the whole dataset can be used here."""
            allWords = [word for review in reviews for word in review]  # 5738236 words in total; reviews holds 25000 review sentences
            subWords = [word for word in allWords if word not in self.stopWordDict]  # remove stop words
            wordCount = Counter(subWords)  # count word frequencies
            sortWordCount = sorted(wordCount.items(), key=lambda x: x[1], reverse=True)  # (word, count) pairs sorted by count
            # print(len(sortWordCount))  # 161330
            # print(sortWordCount[:4], sortWordCount[-4:])  # [('movie', 41104), ('film', 36981), ('one', 24966), ('like', 19490)] [('daeseleires', 1), ('nice310', 1), ('shortsightedness', 1), ('unfairness', 1)]
            words = [item[0] for item in sortWordCount if item[1] >= 5]  # drop low-frequency words (fewer than 5 occurrences)
            vocab, wordEmbedding = self._getWordEmbedding(words)
            self.wordEmbedding = wordEmbedding
            word2idx = dict(zip(vocab, list(range(len(vocab)))))  # produces something like {'I': 0, 'love': 1, 'yanzi': 2}
            uniqueLabel = list(set(labels))  # deduplicated labels; here just 0 and 1
            label2idx = dict(zip(uniqueLabel, list(range(len(uniqueLabel)))))  # here {0: 0, 1: 1}
            self.labelList = list(range(len(uniqueLabel)))
            # save the word-to-index dictionary as json so it can be loaded directly at inference time
            with open("../data/wordJson/word2idx.json", "w", encoding="utf-8") as f:
                json.dump(word2idx, f)
            with open("../data/wordJson/label2idx.json", "w", encoding="utf-8") as f:
                json.dump(label2idx, f)
            return word2idx, label2idx

        def _readStopWord(self, stopWordPath):
            """
            Read the stop-word list.
            """
            with open(stopWordPath, "r") as f:
                stopWords = f.read()
                stopWordList = stopWords.splitlines()
                # store the stop words in a dict so that later lookups are fast
                self.stopWordDict = dict(zip(stopWordList, list(range(len(stopWordList)))))

        def dataGen(self):
            """
            Initialise the training and evaluation sets.
            """
            # initialise the stop words
            self._readStopWord(self._stopWordSource)
            # load the dataset
            reviews, labels = self._readData(self._dataSource)
            # build the word-to-index dictionary and the embedding matrix
            word2idx, label2idx = self._genVocabulary(reviews, labels)
            # convert labels and sentences to indices
            labelIds = self._labelToIndex(labels, label2idx)
            reviewIds = self._wordToIndex(reviews, word2idx)
            # build the training and evaluation sets
            trainReviews, trainLabels, evalReviews, evalLabels = self._genTrainEvalData(reviewIds, labelIds, word2idx,
                                                                                        self._rate)
            self.trainReviews = trainReviews
            self.trainLabels = trainLabels

            self.evalReviews = evalReviews
            self.evalLabels = evalLabels

    # Use the class defined above
    # config = parameter_config.Config()
    # data = Dataset(config)
    # data.dataGen()

    3.3 Model construction: mode_structure.py

    # Author:yifan
    
    import tensorflow as tf
    import parameter_config
    
    # 3 Build the textCNN model
    class TextCNN(object):
        """
        Text CNN for text classification
        """
        def __init__(self, config, wordEmbedding):
            # Define the model inputs
            self.inputX = tf.placeholder(tf.int32, [None, config.sequenceLength], name="inputX")  # placeholder of shape [batch, 200]
            self.inputY = tf.placeholder(tf.int32, [None], name="inputY")
            self.dropoutKeepProb = tf.placeholder(tf.float32, name="dropoutKeepProb")
            # Define the l2 loss
            l2Loss = tf.constant(0.0)

            # Word embedding layer
            # wordEmbedding holds one 200-dimensional vector per word; PAD maps to a zero vector and UNK to a random vector
            with tf.name_scope("embedding"):
                # Initialise the embedding matrix with the pre-trained word vectors
                self.W = tf.Variable(tf.cast(wordEmbedding, dtype=tf.float32, name="word2vec"), name="W")
                # Look up the embedding of every word in the input; shape [batch_size, sequence_length, embedding_size]
                self.embeddedWords = tf.nn.embedding_lookup(self.W, self.inputX)
                # conv2d expects a 4-D input [batch_size, width, height, channel], so add a channel dimension with tf.expand_dims
                self.embeddedWordsExpanded = tf.expand_dims(self.embeddedWords, -1)

            # Convolution and pooling layers
            pooledOutputs = []
            # One convolution per filter size (here [2, 3, 4, 5]); textCNN is a multi-channel, single-layer CNN
            # and can be seen as a fusion of several single-layer convolutional models
            for i, filterSize in enumerate(config.model.filterSizes):
                with tf.name_scope("conv-maxpool-%s" % filterSize):
                    # Convolution layer: kernel size is filterSize * embeddingSize, with numFilters kernels per size
                    # Initialise the weight matrix and bias
                    filterShape = [filterSize, config.model.embeddingSize, 1, config.model.numFilters]
                    W = tf.Variable(tf.truncated_normal(filterShape, stddev=0.1), name="W")
                    b = tf.Variable(tf.constant(0.1, shape=[config.model.numFilters]), name="b")
                    conv = tf.nn.conv2d(
                        self.embeddedWordsExpanded,
                        W,
                        strides=[1, 1, 1, 1],
                        padding="VALID",
                        name="conv")
                    # Non-linearity: relu
                    h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")

                    # Max pooling: take the maximum over the whole convolved sequence (max-over-time pooling)
                    pooled = tf.nn.max_pool(
                        h,
                        ksize=[1, config.sequenceLength - filterSize + 1, 1, 1],
                        # ksize shape: [batch, height, width, channels]
                        strides=[1, 1, 1, 1],
                        padding='VALID',
                        name="pool")
                    pooledOutputs.append(pooled)  # collect the outputs of all filter sizes

            # Total output length of the CNN
            numFiltersTotal = config.model.numFilters * len(config.model.filterSizes)  # 128 * number of filter sizes

            # Pooling keeps the number of dimensions, so concatenate along the last (channel) dimension
            self.hPool = tf.concat(pooledOutputs, 3)

            # Flatten to 2-D before the fully connected layer
            self.hPoolFlat = tf.reshape(self.hPool, [-1, numFiltersTotal])

            # dropout
            with tf.name_scope("dropout"):
                self.hDrop = tf.nn.dropout(self.hPoolFlat, self.dropoutKeepProb)

            # Fully connected output layer
            with tf.name_scope("output"):  # predict.py retrieves this via graph.get_tensor_by_name("output/predictions:0")
                outputW = tf.get_variable(
                    "outputW",
                    shape=[numFiltersTotal, config.numClasses],
                    initializer=tf.contrib.layers.xavier_initializer())
                outputB = tf.Variable(tf.constant(0.1, shape=[config.numClasses]), name="outputB")
                l2Loss += tf.nn.l2_loss(outputW)
                l2Loss += tf.nn.l2_loss(outputB)
                self.logits = tf.nn.xw_plus_b(self.hDrop, outputW, outputB, name="logits")
                if config.numClasses == 1:
                    self.predictions = tf.cast(tf.greater_equal(self.logits, 0.0), tf.int32, name="predictions")
                elif config.numClasses > 1:
                    self.predictions = tf.argmax(self.logits, axis=-1, name="predictions")
                print(self.predictions)

            # Compute the cross-entropy loss
            with tf.name_scope("loss"):
                if config.numClasses == 1:
                    losses = tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logits,  # sigmoid for binary classification
                                                                     labels=tf.cast(tf.reshape(self.inputY, [-1, 1]),
                                                                                    dtype=tf.float32))
                elif config.numClasses > 1:
                    # softmax for multi-class classification
                    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.logits, labels=self.inputY)

                self.loss = tf.reduce_mean(losses) + config.model.l2RegLambda * l2Loss

    3.4 Model training: mode_trainning.py

    import os
    import datetime
    import numpy as np
    import tensorflow as tf
    import parameter_config
    import get_train_data
    import mode_structure
    
    # Use the data produced by the previous modules
    config = parameter_config.Config()
    data = get_train_data.Dataset(config)
    data.dataGen()

    # 4 Generate batches of data
    def nextBatch(x, y, batchSize):
        # Generate batches and yield them from a generator
        perm = np.arange(len(x))  # the array [0, 1, 2, ..., len(x) - 1]
        np.random.shuffle(perm)  # shuffle in place
        x = x[perm]
        y = y[perm]
        numBatches = len(x) // batchSize
    
        for i in range(numBatches):
            start = i * batchSize
            end = start + batchSize
            batchX = np.array(x[start: end], dtype="int64")
            batchY = np.array(y[start: end], dtype="float32")
            yield batchX, batchY
    
    # 5 Functions for computing metrics
    """
    Definitions of the performance metrics
    """
    def mean(item: list) -> float:
        """
        Compute the mean of the elements of a list.
        :param item: list of values
        :return:
        """
        res = sum(item) / len(item) if len(item) > 0 else 0
        return res

    def accuracy(pred_y, true_y):
        """
        Accuracy for binary and multi-class classification.
        :param pred_y: predicted labels
        :param true_y: true labels
        :return:
        """
        if isinstance(pred_y[0], list):
            pred_y = [item[0] for item in pred_y]
        corr = 0
        for i in range(len(pred_y)):
            if pred_y[i] == true_y[i]:
                corr += 1
        acc = corr / len(pred_y) if len(pred_y) > 0 else 0
        return acc

    def binary_precision(pred_y, true_y, positive=1):
        """
        Precision for binary classification.
        :param pred_y: predicted labels
        :param true_y: true labels
        :param positive: index of the positive class
        :return:
        """
        corr = 0
        pred_corr = 0
        for i in range(len(pred_y)):
            if pred_y[i] == positive:
                pred_corr += 1
                if pred_y[i] == true_y[i]:
                    corr += 1

        prec = corr / pred_corr if pred_corr > 0 else 0
        return prec

    def binary_recall(pred_y, true_y, positive=1):
        """
        Recall for binary classification.
        :param pred_y: predicted labels
        :param true_y: true labels
        :param positive: index of the positive class
        :return:
        """
        corr = 0
        true_corr = 0
        for i in range(len(pred_y)):
            if true_y[i] == positive:
                true_corr += 1
                if pred_y[i] == true_y[i]:
                    corr += 1

        rec = corr / true_corr if true_corr > 0 else 0
        return rec

    def binary_f_beta(pred_y, true_y, beta=1.0, positive=1):
        """
        F-beta score for binary classification.
        :param pred_y: predicted labels
        :param true_y: true labels
        :param beta: beta value
        :param positive: index of the positive class
        :return:
        """
        precision = binary_precision(pred_y, true_y, positive)
        recall = binary_recall(pred_y, true_y, positive)
        try:
            f_b = (1 + beta * beta) * precision * recall / (beta * beta * precision + recall)
        except:
            f_b = 0
        return f_b

    def multi_precision(pred_y, true_y, labels):
        """
        Precision for multi-class classification (macro-averaged over classes).
        :param pred_y: predicted labels
        :param true_y: true labels
        :param labels: list of labels
        :return:
        """
        if isinstance(pred_y[0], list):
            pred_y = [item[0] for item in pred_y]

        precisions = [binary_precision(pred_y, true_y, label) for label in labels]
        prec = mean(precisions)
        return prec

    def multi_recall(pred_y, true_y, labels):
        """
        Recall for multi-class classification (macro-averaged over classes).
        :param pred_y: predicted labels
        :param true_y: true labels
        :param labels: list of labels
        :return:
        """
        if isinstance(pred_y[0], list):
            pred_y = [item[0] for item in pred_y]

        recalls = [binary_recall(pred_y, true_y, label) for label in labels]
        rec = mean(recalls)
        return rec

    def multi_f_beta(pred_y, true_y, labels, beta=1.0):
        """
        F-beta score for multi-class classification (macro-averaged over classes).
        :param pred_y: predicted labels
        :param true_y: true labels
        :param labels: list of labels
        :param beta: beta value
        :return:
        """
        if isinstance(pred_y[0], list):
            pred_y = [item[0] for item in pred_y]

        f_betas = [binary_f_beta(pred_y, true_y, beta, label) for label in labels]
        f_beta = mean(f_betas)
        return f_beta

    def get_binary_metrics(pred_y, true_y, f_beta=1.0):
        """
        Compute all metrics for binary classification.
        :param pred_y:
        :param true_y:
        :param f_beta:
        :return:
        """
        acc = accuracy(pred_y, true_y)
        recall = binary_recall(pred_y, true_y)
        precision = binary_precision(pred_y, true_y)
        f_beta = binary_f_beta(pred_y, true_y, f_beta)
        return acc, recall, precision, f_beta

    def get_multi_metrics(pred_y, true_y, labels, f_beta=1.0):
        """
        Compute all metrics for multi-class classification.
        :param pred_y:
        :param true_y:
        :param labels:
        :param f_beta:
        :return:
        """
        acc = accuracy(pred_y, true_y)
        recall = multi_recall(pred_y, true_y, labels)
        precision = multi_precision(pred_y, true_y, labels)
        f_beta = multi_f_beta(pred_y, true_y, labels, f_beta)
        return acc, recall, precision, f_beta
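    # A quick sanity check of the metric helpers on toy data (illustrative, not part of the original script):
    # with pred = [1, 0, 1, 1] and true = [1, 0, 0, 1], get_binary_metrics returns
    # acc = 0.75, recall = 1.0, precision ≈ 0.667 and f1 = 0.8, in that order.
    # print(get_binary_metrics([1, 0, 1, 1], [1, 0, 0, 1]))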
    
    # 6 Train the model
    # Get the training and evaluation sets
    trainReviews = data.trainReviews
    trainLabels = data.trainLabels
    evalReviews = data.evalReviews
    evalLabels = data.evalLabels

    wordEmbedding = data.wordEmbedding
    labelList = data.labelList

    # Define the computation graph
    with tf.Graph().as_default():
        session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
        session_conf.gpu_options.allow_growth = True
        session_conf.gpu_options.per_process_gpu_memory_fraction = 0.9  # limit the fraction of GPU memory used
        sess = tf.Session(config=session_conf)

        # Define the session
        with sess.as_default():
            # cnn = textCNN.TextCNN(config, wordEmbedding)
            cnn = mode_structure.TextCNN(config, wordEmbedding)  # instantiate the model structure defined above
            globalStep = tf.Variable(0, name="globalStep", trainable=False)
            # Define the optimizer, passing in the learning rate
            optimizer = tf.train.AdamOptimizer(config.training.learningRate)
            # Compute the gradients, returning (gradient, variable) pairs
            gradsAndVars = optimizer.compute_gradients(cnn.loss)
            # Apply the gradients to the variables to build the training op
            trainOp = optimizer.apply_gradients(gradsAndVars, global_step=globalStep)

            # Write summaries for TensorBoard
            gradSummaries = []
            for g, v in gradsAndVars:
                if g is not None:
                    tf.summary.histogram("{}/grad/hist".format(v.name), g)
                    tf.summary.scalar("{}/grad/sparsity".format(v.name), tf.nn.zero_fraction(g))

            outDir = os.path.abspath(os.path.join(os.path.curdir, "summarys"))
            print("Writing to {}\n".format(outDir))

            trainSummaryDir = os.path.join(outDir, "train")
            trainSummaryWriter = tf.summary.FileWriter(trainSummaryDir, sess.graph)
            evalSummaryDir = os.path.join(outDir, "eval")
            evalSummaryWriter = tf.summary.FileWriter(evalSummaryDir, sess.graph)

            lossSummary = tf.summary.scalar("loss", cnn.loss)
            summaryOp = tf.summary.merge_all()

            # Saver for checkpoints; variables are initialised below
            saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)  # keep at most 5 checkpoints
            # One way of saving the model: export it as a pb (SavedModel) file
            savedModelPath = "../model/textCNN/savedModel"
            if os.path.exists(savedModelPath):
                os.rmdir(savedModelPath)
            builder = tf.saved_model.builder.SavedModelBuilder(savedModelPath)
            sess.run(tf.global_variables_initializer())
    
    
            def trainStep(batchX, batchY):
                """
                A single training step
                """
                feed_dict = {
                    cnn.inputX: batchX,
                    cnn.inputY: batchY,
                    cnn.dropoutKeepProb: config.model.dropoutKeepProb
                }
                _, summary, step, loss, predictions = sess.run(
                    [trainOp, summaryOp, globalStep, cnn.loss, cnn.predictions],
                    feed_dict)
                timeStr = datetime.datetime.now().isoformat()
    
                if config.numClasses == 1:
                    acc, recall, prec, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
                elif config.numClasses > 1:
                    acc, recall, prec, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY,
                                                                  labels=labelList)
                trainSummaryWriter.add_summary(summary, step)
                return loss, acc, prec, recall, f_beta
    
            def devStep(batchX, batchY):
                """
                A single evaluation step
                """
                feed_dict = {
                    cnn.inputX: batchX,
                    cnn.inputY: batchY,
                    cnn.dropoutKeepProb: 1.0
                }
                summary, step, loss, predictions = sess.run(
                    [summaryOp, globalStep, cnn.loss, cnn.predictions],
                    feed_dict)
    
                if config.numClasses == 1:
                    acc, recall, precision, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
                elif config.numClasses > 1:
                    acc, recall, precision, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY, labels=labelList)
    
                evalSummaryWriter.add_summary(summary, step)
    
                return loss, acc, precision, recall, f_beta
    
            for i in range(config.training.epoches):
                # Train the model
                print("start training model")
                for batchTrain in nextBatch(trainReviews, trainLabels, config.batchSize):
                    loss, acc, prec, recall, f_beta = trainStep(batchTrain[0], batchTrain[1])

                    currentStep = tf.train.global_step(sess, globalStep)
                    print("train: step: {}, loss: {}, acc: {}, recall: {}, precision: {}, f_beta: {}".format(
                        currentStep, loss, acc, recall, prec, f_beta))
                    if currentStep % config.training.evaluateEvery == 0:
                        print("\nEvaluation:")
                        losses = []
                        accs = []
                        f_betas = []
                        precisions = []
                        recalls = []
    
                        for batchEval in nextBatch(evalReviews, evalLabels, config.batchSize):
                            loss, acc, precision, recall, f_beta = devStep(batchEval[0], batchEval[1])
                            losses.append(loss)
                            accs.append(acc)
                            f_betas.append(f_beta)
                            precisions.append(precision)
                            recalls.append(recall)
    
                        time_str = datetime.datetime.now().isoformat()
                        print("{}, step: {}, loss: {}, acc: {}, precision: {}, recall: {}, f_beta: {}".format(
                            time_str, currentStep, mean(losses), mean(accs),
                            mean(precisions), mean(recalls), mean(f_betas)))

                    if currentStep % config.training.checkpointEvery == 0:
                        # Another way of saving the model: write a checkpoint file
                        path = saver.save(sess, "../model/textCNN/model/my-model", global_step=currentStep)
                        print("Saved model checkpoint to {}\n".format(path))
    
            inputs = {"inputX": tf.saved_model.utils.build_tensor_info(cnn.inputX),
                      "keepProb": tf.saved_model.utils.build_tensor_info(cnn.dropoutKeepProb)}
    
            outputs = {"predictions": tf.saved_model.utils.build_tensor_info(cnn.predictions)}
    
            prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(inputs=inputs, outputs=outputs,
                                                                                          method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
            legacy_init_op = tf.group(tf.tables_initializer(), name="legacy_init_op")
            builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING],
                                                 signature_def_map={"predict": prediction_signature},
                                                 legacy_init_op=legacy_init_op)
    
            builder.save()

    3.5 Prediction: predict.py

    import os
    import csv
    import time
    import datetime
    import random
    import json
    from collections import Counter
    from math import sqrt
    import gensim
    import pandas as pd
    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import roc_auc_score, accuracy_score, precision_score, recall_score
    import parameter_config
    config =parameter_config.Config()
    
    # 7 Prediction code
    x = "this movie is full of references like mad max ii the wild one and many others the ladybug´s face it´s a clear reference or tribute to peter lorre this movie is a masterpiece we´ll talk much more about in the future"
    # x = "his movie is the same as the third level movie. There's no place to look good"
    # x = "This film is not good"   # predicted as 0
    # x = "This film is   bad"   # predicted as 0

    # Note: the two dictionaries below must match the dictionaries used by the model being loaded
    with open("../data/wordJson/word2idx.json", "r", encoding="utf-8") as f:
        word2idx = json.load(f)
    with open("../data/wordJson/label2idx.json", "r", encoding="utf-8") as f:
        label2idx = json.load(f)
    idx2label = {value: key for key, value in label2idx.items()}

    # Convert x into the index vector xIds that the model expects
    xIds = [word2idx.get(item, word2idx["UNK"]) for item in x.split(" ")]  # indices of the words in x
    if len(xIds) >= config.sequenceLength:   # truncate if x has more than sequenceLength (200) words
        xIds = xIds[:config.sequenceLength]
        # print("ddd", xIds)
    else:
        xIds = xIds + [word2idx["PAD"]] * (config.sequenceLength - len(xIds))
        # print("xxx", xIds)
    
    graph = tf.Graph()
    with graph.as_default():
        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
        session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False, gpu_options=gpu_options)
        sess = tf.Session(config=session_conf)
    
        with sess.as_default():
            # Restore the model from the latest checkpoint
            checkpoint_file = tf.train.latest_checkpoint("../model/textCNN/model/")
            saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
            saver.restore(sess, checkpoint_file)
            # Get the input tensors that the prediction depends on
            inputX = graph.get_operation_by_name("inputX").outputs[0]
            dropoutKeepProb = graph.get_operation_by_name("dropoutKeepProb").outputs[0]
            # Get the output tensor
            predictions = graph.get_tensor_by_name("output/predictions:0")
            pred = sess.run(predictions, feed_dict={inputX: [xIds], dropoutKeepProb: 1.0})[0]
    
    # print(pred)
    pred = [idx2label[item] for item in pred]
    print(pred)
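    The script above restores the checkpoint written by saver.save. mode_trainning.py also exported a SavedModel; a minimal sketch of loading that export instead is shown below. The export directory, tag and tensor names are taken from the builder code above, but this loader itself is not part of the original repository and reuses the xIds and idx2label built earlier in predict.py.

    import tensorflow as tf

    with tf.Session(graph=tf.Graph()) as sess:
        # load the SavedModel exported with the SERVING tag by mode_trainning.py
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                                   "../model/textCNN/savedModel")
        inputX = sess.graph.get_tensor_by_name("inputX:0")
        dropoutKeepProb = sess.graph.get_tensor_by_name("dropoutKeepProb:0")
        predictions = sess.graph.get_tensor_by_name("output/predictions:0")
        pred = sess.run(predictions, feed_dict={inputX: [xIds], dropoutKeepProb: 1.0})[0]
        print([idx2label[item] for item in pred])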

    Results

    The full code is available at: https://github.com/yifanhunter/NLP_textClassifier

    Main reference:

    【1】 https://home.cnblogs.com/u/jiangxinyang/
