• DAN text classification


    Based on this blog post: https://www.it610.com/article/1294029944149581824.htm

    The code is on the blog author's GitHub; the notes below walk through it at the code level.

    1. Dataset

    class TorchDataset(Dataset):
        def __init__(self, word, label):
            self.word = word
            self.label = label
    
        def __getitem__(self, item):
            return self.word[item], self.label[item]
    
        def __len__(self):
            return len(self.word)
    

    This is the fixed pattern for a map-style dataset: implement the three methods __init__, __getitem__, and __len__.
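
    A minimal usage sketch (the word/label lists here are made-up placeholders, not the repo's actual data):

    words = [['I', 'am', 'tom'], ['hello', 'world']]
    labels = ['1', '0']
    dataset = TorchDataset(words, labels)
    print(len(dataset))  # 2
    print(dataset[0])    # (['I', 'am', 'tom'], '1')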

    2. DataLoader

    def get_data_loader(self):
        data_sets = dict()
        data_sets['train'] = DataLoader(TorchDataset(self.train_word, self.train_label),
                                        batch_size=self.args.batch_size,
                                        shuffle=True,
                                        collate_fn=self.__collate_fn)

    @staticmethod
    def __collate_fn(batch):
        """
        Custom collate function: merge a list of (word, label) samples
        into [list of word sequences, list of labels].
        """

        n_entity = len(batch[0])  # each element of batch is (word, label), so n_entity = 2
        modified_batch = [[] for _ in range(0, n_entity)]  # [[], []] collects batch_size word lists and batch_size labels

        for idx in range(0, len(batch)):  # over samples: (0, batch_size)
            for jdx in range(0, n_entity):  # over fields: (0, 2)
                modified_batch[jdx].append(batch[idx][jdx])  # append each sample's words / label to its own list

        return modified_batch  # [[['I','am','tom'], ['hello','world'], ...], [[1], [0], ...]]
    

    When instantiating the DataLoader, the collate_fn argument means:
    merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.
    That is, it controls how each batch is assembled from individual samples of a map-style dataset; we can supply our own function to build exactly the structure we want.
    Here the function receives a list of batch_size (word, label) pairs and returns [[batch_size word lists], [batch_size labels]].
    If collate_fn is not set, PyTorch uses the default default_collate; because the texts have different lengths, this raises RuntimeError: each element in list of batch should be of equal size.
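
    For intuition, an equivalent standalone sketch (collate_keep_lists mirrors __collate_fn above and reuses the toy dataset from section 1):

    from torch.utils.data import DataLoader

    def collate_keep_lists(batch):
        # transpose [(words, label), ...] into [[words, ...], [label, ...]]
        words, labels = zip(*batch)
        return [list(words), list(labels)]

    loader = DataLoader(dataset, batch_size=2, shuffle=False, collate_fn=collate_keep_lists)
    for x, y in loader:
        print(x)  # [['I', 'am', 'tom'], ['hello', 'world']]
        print(y)  # ['1', '0']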

    3. Index dictionaries

    These indexers convert a "sentence" into a list of indices (and labels into label indices).
    3.1 SeqIndexerBase

    class SeqIndexerBase(object):
        """
        Stores and serializes a set of elements.
        """
    
        def __init__(self, name, if_use_pad, if_use_unk):
    
            self.__name = name  # user-chosen name
            self.__if_use_pad = if_use_pad  # pad sentences shorter than the target length with "<PAD>"
            self.__if_use_unk = if_use_unk  # replace unknown words with "<UNK>"

            self.__index2instance = OrderedSet()  # ordered set; a word's position here equals its index in the word -> index dict
            self.__instance2index = OrderedDict()  # ordered dict: word -> index
    
            # Counter Object record the frequency
            # of element occurs in raw text.
            self.__counter = Counter()
    
            if if_use_pad:
                self.__sign_pad = "<PAD>"
                self.add_instance(self.__sign_pad)
            if if_use_unk:
                self.__sign_unk = "<UNK>"
                self.add_instance(self.__sign_unk)
    
        @property
        def name(self):
            return self.__name
    
        def add_instance(self, instance):
            """ Add instances to alphabet.
    
            1, We support any iterative data structure which
            contains elements of str type.
    
            2, We will count added instances that will influence
            the serialization of unknown instance.
    
            :param instance: is given instance or a list of it.
            """
    
            if isinstance(instance, (list, tuple)):
                # if the input is a list or tuple, recurse so that several items can be added at once
                for element in instance:
                    self.add_instance(element)
                return
    
            # We only support elements of str type.
            assert isinstance(instance, str)
    
            # count the frequency of instances.
            self.__counter[instance] += 1
    
            if instance not in self.__index2instance:
                self.__instance2index[instance] = len(self.__index2instance)
                self.__index2instance.append(instance)
    
        def get_index(self, instance):
            """ Serialize given instance and return.
    
            For unknown words, the returned index depends on self.__if_use_unk:

                1, If True, then return the index of "<UNK>";
                2, If False, then return the index of the
                element that holds the maximum frequency in the training data.
    
            :param instance: is given instance or a list of it.
            :return: is the serialization of query instance.
            """
    
            if isinstance(instance, (list, tuple)):
                # recursion lets the function handle both a single element and nested lists
                return [self.get_index(elem) for elem in instance]
    
            assert isinstance(instance, str)
    
            try:
                return self.__instance2index[instance]
            except KeyError:
                if self.__if_use_unk:
                    return self.__instance2index[self.__sign_unk]
                else:
                    max_freq_item = self.__counter.most_common(1)[0][0]
                    return self.__instance2index[max_freq_item]
    
        def get_instance(self, index):
            """ Get corresponding instance of query index.
    
            if index is invalid, then throws exception.
    
            :param index: is query index, possibly iterable.
            :return: is corresponding instance.
            """
    
            if isinstance(index, list):
                return [self.get_instance(elem) for elem in index]
    
            return self.__index2instance[index]
    
        def save_content(self, dir_path):
            """ Save the content of alphabet to files.
    
            There are two kinds of saved files:
                1, The first is a list file, elements are
                sorted by the frequency of occurrence.
    
                2, The second is a dictionary file, elements
            are sorted by their serialized index.
    
            :param dir_path: is the directory path to save object.
            """
    
            # Check if dir_path exists.
            if not os.path.exists(dir_path):
                os.mkdir(dir_path)
    
            list_path = os.path.join(dir_path, self.__name + "_list.txt")
            with open(list_path, 'w') as fw:
                for element, frequency in self.__counter.most_common():
                    fw.write(element + '\t' + str(frequency) + '\n')
    
            dict_path = os.path.join(dir_path, self.__name + "_dict.txt")
            with open(dict_path, 'w') as fw:
                for index, element in enumerate(self.__index2instance):
                    fw.write(element + '\t' + str(index) + '\n')
    
        def add_padding_tensor(self, texts, digital=False, gpu=-1):
            """
            The sentences in a batch generally have different lengths, so every
            sentence is padded up to the longest length in the batch.
            Index 0 is <PAD>, index 1 is <UNK>.
            (This differs from the packed-sequence handling typically used with an LSTM.)
            :param texts: a list of batch_size raw texts
            :param digital: defaults to False; when False (the code checks `if not digital`), the raw words are first converted to indices
            """
            len_list = [len(text) for text in texts]
            max_len = max(len_list)
            if not digital:
                texts = self.get_index(texts)
            trans_texts= []
            mask = []
            for index in range(len(len_list)):
                trans_texts.append(deepcopy(texts[index]))
                mask.append([1] * len_list[index])
                mask[-1].extend([0] * (max_len - len_list[index]))
                # pad each sample with 0 (<PAD>) up to max_len
                trans_texts[-1].extend([0] * (max_len - len_list[index]))
    
            trans_texts = torch.LongTensor(trans_texts)
            mask = torch.LongTensor(mask)
            len_list = torch.LongTensor(len_list)
            if gpu > 0 :
                trans_texts = trans_texts.cuda(device=gpu)
                mask = mask.cuda(device=gpu)
                len_list = len_list.cuda(device=gpu)
            return trans_texts, len_list, mask
    
        def idx2tensor(self, indexes, gpu):
            indexes = torch.LongTensor(indexes)
            if gpu > 0:
                indexes = indexes.cuda(device=gpu)
            return indexes
    
        def instance2tensor(self, words, gpu):
            words = self.get_index(words)
            return self.idx2tensor(words, gpu)
    
        def tensor2idx(self, tensor, len_list:torch.Tensor):
            len_list = len_list.tolist()
            tensor = tensor.tolist()
            for i, x in enumerate(len_list):
                tensor[i] = tensor[i][:x]
            return tensor
    
        def __len__(self):
            return len(self.__index2instance)
    
        def __str__(self):
            return 'Alphabet {} contains about {} words: \n\t{}'.format(self.name, len(self), self.__index2instance)
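
    A small sketch of how the indexer behaves (the instances here are illustrative):

    word_indexer = SeqIndexerBase("word", if_use_pad=True, if_use_unk=True)
    word_indexer.add_instance([['I', 'am', 'tom'], ['hello', 'world']])
    print(word_indexer.get_index(['hello', 'tom']))  # [5, 4]: "<PAD>"=0, "<UNK>"=1, then words in insertion order
    print(word_indexer.get_index('never-seen'))      # 1, the index of "<UNK>"
    print(word_indexer.get_instance([0, 1]))         # ['<PAD>', '<UNK>']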
    

    3.2 SeqIndexerBaseEmbeddings

    class SeqIndexerBaseEmbeddings(SeqIndexerBase):
        def __init__(self, name, embedding_path, emb_dim, emb_delimiter):
            super(SeqIndexerBaseEmbeddings, self).__init__(name=name, if_use_pad=True, if_use_unk=True)
            self.path = embedding_path
            self.embedding_vectors_list = list()
            self.emb_dim = emb_dim # 100
            self.emb_delimiter = emb_delimiter # ' '
            self.add_emb_vector(self.generate_zero_emb_vector())    # all-zero vector of length emb_dim, used for "<PAD>"
            self.add_emb_vector(self.generate_random_emb_vector())  # uniformly random-initialized vector, used for "<UNK>"
    
        def load_embeddings_from_file(self):
            """
            Actually load the GloVe vectors. Afterwards the vector list starts with
            the all-zero vector for <PAD> and the random vector for <UNK>.
            """
            for k, line in tqdm(enumerate(open(self.path,encoding="utf-8"))):
                values = line.split(self.emb_delimiter)
                self.add_instance(values[0])
                emb_vector = list(map(lambda t: float(t), filter(lambda n: n and not n.isspace(), values[1:])))
                self.add_emb_vector(emb_vector)
    
        def generate_zero_emb_vector(self):
            if self.emb_dim == 0:
                raise ValueError('embeddings_dim is not known.')
            return [0 for _ in range(self.emb_dim)]
    
        def generate_random_emb_vector(self):
            if self.emb_dim == 0:
                raise ValueError('embeddings_dim is not known.')
            return np.random.uniform(-np.sqrt(3.0 / self.emb_dim), np.sqrt(3.0 / self.emb_dim),
                                     self.emb_dim).tolist()
    
        def add_emb_vector(self, emb_vector):
            self.embedding_vectors_list.append(emb_vector)
    
        def get_loaded_embeddings_tensor(self):
            return torch.FloatTensor(np.asarray(self.embedding_vectors_list))
    

    After running

     seq_indexer = SeqIndexerBaseEmbeddings("glove", args.embedding_dir, args.embedding_dim, ' ')
     seq_indexer.load_embeddings_from_file()
     label_indexer = SeqIndexerBase("label", False, False)
     label_indexer.add_instance(dataset.train_label)
    

    we obtain an index and an embedding vector for every word, plus the indices of the labels (0 and 1).
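
    For intuition, a hedged sketch of what these indexers now provide (shapes assume 100-dimensional GloVe vectors; the sentences are made up):

    emb_matrix = seq_indexer.get_loaded_embeddings_tensor()  # FloatTensor (n_glove_words + 2, 100); rows 0/1 are <PAD>/<UNK>
    batch = [['the', 'movie', 'was', 'great'], ['boring']]
    padded, lens, mask = seq_indexer.add_padding_tensor(batch, gpu=-1)
    # padded: LongTensor (2, 4) of word indices, the short sentence padded with 0 (<PAD>)
    # lens:   LongTensor([4, 1]); mask: 1 for real tokens, 0 for padding
    print(label_indexer.get_index('1'))  # 0 or 1, depending on which label was added first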

    4. The MLP class

    It outputs raw classification scores (logits) directly, without a softmax.

    class MLP(torch.nn.Module):
    
        def __init__(self, embedding_indexer: SeqIndexerBaseEmbeddings, gpu, feat_num, dropout):
            super(MLP, self).__init__()
            self.embedding = LayerWordEmbeddings(embedding_indexer)  # embedding layer; output shape: batch_size x max_seq_len x embed_dim
            self.linear1 = torch.nn.Linear(embedding_indexer.emb_dim, 50)  # first hidden layer: emb_dim -> 50
            self.linear2 = torch.nn.Linear(50, feat_num)  # output layer: 50 -> feat_num (2 classes)
            self.dropout = torch.nn.Dropout(dropout)
            self.gpu = gpu
            if gpu > 0:
                self.cuda(device=gpu)
    
        def forward(self, words, lens:torch.Tensor, mask:torch.Tensor):
            words = self.embedding(words)  # (batch_size,max_seq_len,embed_dim)
            words = words * (mask.unsqueeze(-1).expand_as(words))  # (batch_size,max_seq_len,embed_dim)
            words = self.dropout(words)
            # average each sample's word vectors, using lens as the word count
            # summing over dim=1 collapses the sequence dimension: all word vectors of a sample are added up
            words = torch.sum(words, dim=1, keepdim=False) / lens.unsqueeze(-1) # torch.Size([batch_size, emb_dim])
            words = self.linear1(words)
            words = F.relu(words)
            words = self.dropout(words)
            words = self.linear2(words)
            return words
    
        def predict(self, texts, embedding_indexer: SeqIndexerBaseEmbeddings, label_indexer:SeqIndexerBase, batch_size):
    
            lens = len(texts)
            batch_num = (lens + batch_size - 1) // batch_size
            ans = []
            for i in range(batch_num) :
                start = i * batch_size
                end = min(start + batch_size, lens)
                part = texts[start:end]
                part, lengths, mask = embedding_indexer.add_padding_tensor(part, gpu=self.gpu)
                pred = self.forward(part, lengths, mask)
                pred = torch.argmax(pred, dim=-1, keepdim=False)
                pred = pred.tolist()
                pred = label_indexer.get_instance(pred)
                ans.extend(pred)
            return ans
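
    LayerWordEmbeddings itself is not shown in this post. Based on how it is used above (index tensors in, pretrained GloVe vectors out), a plausible minimal reconstruction (an assumption, not the repo's actual code) is:

    import torch

    class LayerWordEmbeddings(torch.nn.Module):
        # hypothetical reconstruction: wrap the indexer's pretrained matrix in an nn.Embedding
        def __init__(self, embedding_indexer):
            super().__init__()
            weights = embedding_indexer.get_loaded_embeddings_tensor()  # (vocab_size, emb_dim)
            self.embedding = torch.nn.Embedding.from_pretrained(weights, freeze=False, padding_idx=0)

        def forward(self, word_idx):
            # word_idx: LongTensor (batch_size, max_seq_len) -> (batch_size, max_seq_len, emb_dim)
            return self.embedding(word_idx)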
    
    

    5. Loss function, optimizer, and evaluation metric

    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(),
                                     lr=0.001,
                                     betas=(0.9, 0.999),
                                     eps=1e-08,
                                     weight_decay=0,
                                     amsgrad=False)
    

    On the cross-entropy loss

    From the code's point of view, the output of `torch.nn.CrossEntropyLoss()` is what you get by feeding the result of `log_softmax()` into `nll_loss()`.
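
    A quick sanity check of that equivalence on made-up logits:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 2)           # 4 samples, 2 classes
    target = torch.tensor([0, 1, 1, 0])

    loss_a = torch.nn.CrossEntropyLoss()(logits, target)
    loss_b = F.nll_loss(F.log_softmax(logits, dim=-1), target)
    print(torch.allclose(loss_a, loss_b))  # True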
    

    F1 is used as the evaluation metric.

    class sst2F1Eval(object):
        @staticmethod
        def get_socre(predict, label):
            # F1 of the positive class '1'; predict and label are lists of label strings
            TP = 0
            FP = 0
            FN = 0
            for p, l in zip(predict, label):
                if p == l and p == '1':
                    TP += 1
                elif p == '1':
                    FP += 1
                elif l == '1':
                    FN += 1
            return 2 * TP / (2*TP + FP + FN)
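
    A quick illustrative check of the metric:

    preds = ['1', '0', '1', '1']
    golds = ['1', '1', '1', '0']
    # TP=2, FP=1, FN=1  =>  F1 = 2*2 / (2*2 + 1 + 1) = 0.666...
    print(sst2F1Eval.get_socre(preds, golds))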
    

    6. Training loop

    Once the data, model, loss function, and optimizer are defined, training follows the usual fixed steps. The loop below is one epoch, iterating over all training samples.

    for x, y in tqdm(train_loader):
        padded_text, lens, mask = seq_indexer.add_padding_tensor(x, gpu=args.gpu)
        label = label_indexer.instance2tensor(y, gpu=args.gpu)
        y = model(padded_text, lens, mask)  # logits: (batch_size, feat_num)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
        optimizer.zero_grad()
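
    After each epoch one would typically evaluate on the dev set; a hedged sketch using the pieces above (dataset.dev_word / dataset.dev_label are assumed attribute names, not confirmed by the post):

    model.eval()
    with torch.no_grad():
        dev_pred = model.predict(dataset.dev_word, seq_indexer, label_indexer, args.batch_size)
    dev_f1 = sst2F1Eval.get_socre(dev_pred, dataset.dev_label)
    print('train_loss = {:.4f}, dev F1 = {:.4f}'.format(train_loss, dev_f1))
    model.train()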
    
  • Original post: https://www.cnblogs.com/leimu/p/15831404.html