• Theano 3.7 - Exercise: Stacked Denoising Autoencoders


    From: http://deeplearning.net/tutorial/SdA.html#sda

    Stacked Denoising Autoencoders (SdA)

    Note: this section assumes the reader has already gone through Theano 3.3 - Exercise: Logistic Regression and Theano 3.4 - Exercise: Multilayer Perceptron. It also uses the following Theano functions and concepts: T.tanh, shared variables, basic arithmetic ops, T.grad, random numbers, floatX. If you want to run the code on a GPU, also read GPU.

    Note: the code for this section can be downloaded here.

    The Stacked Denoising Autoencoder (SdA) is an extension of the stacked autoencoder [Bengio07], and it was introduced in [Vincent08].

    This tutorial builds on the previous one, Denoising Autoencoders. In particular, if you have no experience with autoencoders, we recommend reading that section before continuing here.

    1. Stacked Autoencoders

     

        Denoising autoencoders can be stacked to form a deep network by feeding the latent representation (output code) of the denoising autoencoder found in the layer below as input to the current layer. Unsupervised pre-training of such an architecture is done one layer at a time: each layer is trained as an individual denoising autoencoder, and once the first k layers have been trained, we can train the (k+1)-th layer, because we can now compute the code (latent representation) produced by the layers below.

        Once all layers have been pre-trained, the network goes through a second stage of training called fine-tuning. Here we consider supervised fine-tuning, where we minimize the prediction error on a supervised task. For this we first add a logistic regression layer on top of the network (more precisely, on the output code of the last layer). We then train the whole network as we would train a multilayer perceptron. At this point we only consider the encoding part of each autoencoder. This stage is supervised, since we use the target classes during training. (See Multilayer Perceptron for details on the multilayer perceptron.)

        This can be easily implemented in Theano by reusing the class defined earlier for the denoising autoencoder. We can see the stacked denoising autoencoder as having two facades: a list of autoencoders, and an MLP. During pre-training we use the first facade, i.e. we treat the model as a list of autoencoders and train each autoencoder separately. In the second stage of training we use the second facade. The two facades are linked because:

    • the autoencoders can share parameters with the sigmoid layers of the MLP;
    • the latent representations computed by the intermediate layers of the MLP can serve as input to the autoencoders.
    class SdA(object):
        """Stacked denoising auto-encoder class (SdA)
    
        A stacked denoising autoencoder model is obtained by stacking several
        dAs. The hidden layer of the dA at layer `i` becomes the input of
        the dA at layer `i+1`. The first layer dA gets as input the input of
        the SdA, and the hidden layer of the last dA represents the output.
        Note that after pretraining, the SdA is dealt with as a normal MLP,
        the dAs are only used to initialize the weights.
        """
    
        def __init__(
            self,
            numpy_rng,
            theano_rng=None,
            n_ins=784,
            hidden_layers_sizes=[500, 500],
            n_outs=10,
            corruption_levels=[0.1, 0.1]
        ):
            """ This class is made to support a variable number of layers.
    
            :type numpy_rng: numpy.random.RandomState
            :param numpy_rng: numpy random number generator used to draw initial
                        weights
    
            :type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
            :param theano_rng: Theano random generator; if None is given one is
                               generated based on a seed drawn from `rng`
    
            :type n_ins: int
            :param n_ins: dimension of the input to the sdA
    
        :type hidden_layers_sizes: list of ints
        :param hidden_layers_sizes: intermediate layers size, must contain
                                   at least one value
    
            :type n_outs: int
            :param n_outs: dimension of the output of the network
    
            :type corruption_levels: list of float
            :param corruption_levels: amount of corruption to use for each
                                      layer
            """
    
            self.sigmoid_layers = []
            self.dA_layers = []
            self.params = []
            self.n_layers = len(hidden_layers_sizes)
    
            assert self.n_layers > 0
    
            if not theano_rng:
                theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))
            # allocate symbolic variables for the data
            self.x = T.matrix('x')  # the data is presented as rasterized images
            self.y = T.ivector('y')  # the labels are presented as 1D vector of
                                     # [int] labels

    self.sigmoid_layers will store the sigmoid layers of the MLP facade, while self.dA_layers will store the denoising autoencoders associated with the layers of the MLP.

        Next, we construct n_layers sigmoid layers and n_layers denoising autoencoders, where n_layers is the depth of our model. We use the HiddenLayer class introduced in Multilayer Perceptron, with one modification: we replace the tanh non-linearity with the logistic function $s(x) = \frac{1}{1+e^{-x}}$. We link the sigmoid layers to form an MLP, and we construct the denoising autoencoders such that each shares its weight matrix and the bias of its encoding part with the corresponding sigmoid layer.

            for i in xrange(self.n_layers):
                # construct the sigmoidal layer
    
                # the size of the input is either the number of hidden units of
                # the layer below or the input size if we are on the first layer
                if i == 0:
                    input_size = n_ins
                else:
                    input_size = hidden_layers_sizes[i - 1]
    
                # the input to this layer is either the activation of the hidden
                # layer below or the input of the SdA if you are on the first
                # layer
                if i == 0:
                    layer_input = self.x
                else:
                    layer_input = self.sigmoid_layers[-1].output
    
                sigmoid_layer = HiddenLayer(rng=numpy_rng,
                                            input=layer_input,
                                            n_in=input_size,
                                            n_out=hidden_layers_sizes[i],
                                            activation=T.nnet.sigmoid)
                # add the layer to our list of layers
                self.sigmoid_layers.append(sigmoid_layer)
            # it's arguably a philosophical question...
                # but we are going to only declare that the parameters of the
                # sigmoid_layers are parameters of the StackedDAA
                # the visible biases in the dA are parameters of those
                # dA, but not the SdA
                self.params.extend(sigmoid_layer.params)
    
                # Construct a denoising autoencoder that shared weights with this
                # layer
                dA_layer = dA(numpy_rng=numpy_rng,
                              theano_rng=theano_rng,
                              input=layer_input,
                              n_visible=input_size,
                              n_hidden=hidden_layers_sizes[i],
                              W=sigmoid_layer.W,
                              bhid=sigmoid_layer.b)
                self.dA_layers.append(dA_layer)

        All we need to do now is add a logistic layer on top of the sigmoid layers so that we have an MLP. We will use the LogisticRegression class introduced in Theano 3.3 - Exercise: Logistic Regression.

            # We now need to add a logistic layer on top of the MLP
            self.logLayer = LogisticRegression(
                input=self.sigmoid_layers[-1].output,
                n_in=hidden_layers_sizes[-1],
                n_out=n_outs
            )
    
            self.params.extend(self.logLayer.params)
            # construct a function that implements one step of finetunining
    
            # compute the cost for second phase of training,
            # defined as the negative log likelihood
            self.finetune_cost = self.logLayer.negative_log_likelihood(self.y)
            # compute the gradients with respect to the model parameters
            # symbolic variable that points to the number of errors made on the
            # minibatch given by self.x and self.y
            self.errors = self.logLayer.errors(self.y)


        The SdA class also provides a method that generates training functions for the denoising autoencoders in its layers. They are returned as a list, where element i is a function that implements one step of training the dA corresponding to layer i.

        def pretraining_functions(self, train_set_x, batch_size):
            ''' Generates a list of functions, each of them implementing one
        step in training the dA corresponding to the layer with same index.
            The function will require as input the minibatch index, and to train
            a dA you just need to iterate, calling the corresponding function on
            all minibatch indexes.
    
            :type train_set_x: theano.tensor.TensorType
            :param train_set_x: Shared variable that contains all datapoints used
                                for training the dA
    
            :type batch_size: int
            :param batch_size: size of a [mini]batch
    
            :type learning_rate: float
            :param learning_rate: learning rate used during training for any of
                                  the dA layers
            '''
    
            # index to a [mini]batch
            index = T.lscalar('index')  # index to a minibatch

        In order to be able to change the corruption level or the learning rate during training, we associate them with Theano variables:

            corruption_level = T.scalar('corruption')  # % of corruption to use
            learning_rate = T.scalar('lr')  # learning rate to use
            # begining of a batch, given `index`
            batch_begin = index * batch_size
            # ending of a batch given `index`
            batch_end = batch_begin + batch_size
    
            pretrain_fns = []
            for dA in self.dA_layers:
                # get the cost and the updates list
                cost, updates = dA.get_cost_updates(corruption_level,
                                                    learning_rate)
                # compile the theano function
                fn = theano.function(
                    inputs=[
                        index,
                        theano.Param(corruption_level, default=0.2),
                        theano.Param(learning_rate, default=0.1)
                    ],
                    outputs=cost,
                    updates=updates,
                    givens={
                        self.x: train_set_x[batch_begin: batch_end]
                    }
                )
                # append `fn` to the list of functions
                pretrain_fns.append(fn)
    
            return pretrain_fns


        Now any function pretrain_fns[i] takes as arguments index and, optionally, corruption (the corruption level) or lr (the learning rate). Note that the names of these parameters are the names given to the Theano variables when they were constructed, not the names of the Python variables (learning_rate or corruption_level). Keep this in mind when working with Theano.
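        For example, once the list of functions has been compiled (as done later in this post), a single pre-training step on the first layer could be invoked as in the minimal sketch below; the minibatch index and the numeric values are arbitrary:

        # keyword names must match the Theano variable names 'index',
        # 'corruption' and 'lr', not the Python variable names
        cost = pretraining_fns[0](index=0, corruption=0.3, lr=0.01)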

        In the same fashion we build a method that constructs the functions required during fine-tuning (train_fn, valid_score and test_score).

        def build_finetune_functions(self, datasets, batch_size, learning_rate):
            '''Generates a function `train` that implements one step of
            finetuning, a function `validate` that computes the error on
            a batch from the validation set, and a function `test` that
            computes the error on a batch from the testing set
    
            :type datasets: list of pairs of theano.tensor.TensorType
        :param datasets: It is a list that contains all the datasets;
                             it has to contain three pairs, `train`,
                             `valid`, `test` in this order, where each pair
                             is formed of two Theano variables, one for the
                             datapoints, the other for the labels
    
            :type batch_size: int
            :param batch_size: size of a minibatch
    
            :type learning_rate: float
            :param learning_rate: learning rate used during finetune stage
            '''
    
            (train_set_x, train_set_y) = datasets[0]
            (valid_set_x, valid_set_y) = datasets[1]
            (test_set_x, test_set_y) = datasets[2]
    
            # compute number of minibatches for training, validation and testing
            n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
            n_valid_batches /= batch_size
            n_test_batches = test_set_x.get_value(borrow=True).shape[0]
            n_test_batches /= batch_size
    
            index = T.lscalar('index')  # index to a [mini]batch
    
            # compute the gradients with respect to the model parameters
            gparams = T.grad(self.finetune_cost, self.params)
    
            # compute list of fine-tuning updates
            updates = [
                (param, param - gparam * learning_rate)
                for param, gparam in zip(self.params, gparams)
            ]
    
            train_fn = theano.function(
                inputs=[index],
                outputs=self.finetune_cost,
                updates=updates,
                givens={
                    self.x: train_set_x[
                        index * batch_size: (index + 1) * batch_size
                    ],
                    self.y: train_set_y[
                        index * batch_size: (index + 1) * batch_size
                    ]
                },
                name='train'
            )
    
            test_score_i = theano.function(
                [index],
                self.errors,
                givens={
                    self.x: test_set_x[
                        index * batch_size: (index + 1) * batch_size
                    ],
                    self.y: test_set_y[
                        index * batch_size: (index + 1) * batch_size
                    ]
                },
                name='test'
            )
    
            valid_score_i = theano.function(
                [index],
                self.errors,
                givens={
                    self.x: valid_set_x[
                        index * batch_size: (index + 1) * batch_size
                    ],
                    self.y: valid_set_y[
                        index * batch_size: (index + 1) * batch_size
                    ]
                },
                name='valid'
            )
    
            # Create a function that scans the entire validation set
            def valid_score():
                return [valid_score_i(i) for i in xrange(n_valid_batches)]
    
            # Create a function that scans the entire test set
            def test_score():
                return [test_score_i(i) for i in xrange(n_test_batches)]
    
            return train_fn, valid_score, test_score

        Note that valid_score and test_score are not Theano functions, but rather Python functions that loop over the entire validation set and the entire test set, respectively, producing a list of losses over those sets.

    2. Putting it all together

     

        The few lines of code below construct the stacked denoising autoencoder:

        numpy_rng = numpy.random.RandomState(89677)
        print '... building the model'
        # construct the stacked denoising autoencoder class
        sda = SdA(
            numpy_rng=numpy_rng,
            n_ins=28 * 28,
            hidden_layers_sizes=[1000, 1000, 1000],
            n_outs=10
        )


        There are two stages of training for this network: layer-wise pre-training followed by fine-tuning.

        For the pre-training stage we loop over all the layers of the network. For each layer we use the compiled Theano function that implements an SGD step towards optimizing the weights for reducing the reconstruction cost of that layer. This function is applied to the training set for the number of epochs given by pretraining_epochs.

        #########################
        # PRETRAINING THE MODEL #
        #########################
        print '... getting the pretraining functions'
        pretraining_fns = sda.pretraining_functions(train_set_x=train_set_x,
                                                    batch_size=batch_size)
    
        print '... pre-training the model'
        start_time = time.clock()
        ## Pre-train layer-wise
        corruption_levels = [.1, .2, .3]
        for i in xrange(sda.n_layers):
            # go through pretraining epochs
            for epoch in xrange(pretraining_epochs):
                # go through the training set
                c = []
                for batch_index in xrange(n_train_batches):
                    c.append(pretraining_fns[i](index=batch_index,
                             corruption=corruption_levels[i],
                             lr=pretrain_lr))
                print 'Pre-training layer %i, epoch %d, cost ' % (i, epoch),
                print numpy.mean(c)
    
        end_time = time.clock()
    
        print >> sys.stderr, ('The pretraining code for file ' +
                              os.path.split(__file__)[1] +
                              ' ran for %.2fm' % ((end_time - start_time) / 60.))


        The fine-tuning loop is very similar to the one in the Multilayer Perceptron tutorial; the only difference is that it uses the functions given by build_finetune_functions.
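        That loop is not reproduced in this post, but a minimal sketch (without the patience-based early stopping of the original tutorial) could look as follows; finetune_lr, training_epochs, datasets, batch_size and n_train_batches are assumed to have been defined earlier, as in the previous tutorials:

        ########################
        # FINETUNING THE MODEL #
        ########################
        print '... getting the finetuning functions'
        train_fn, validate_model, test_model = sda.build_finetune_functions(
            datasets=datasets,
            batch_size=batch_size,
            learning_rate=finetune_lr
        )

        print '... finetuning the model'
        best_validation_loss = numpy.inf
        for epoch in xrange(training_epochs):
            # one SGD pass over the whole training set
            for minibatch_index in xrange(n_train_batches):
                train_fn(minibatch_index)

            # validate_model() returns one loss per validation minibatch;
            # its mean is the error over the whole validation set
            this_validation_loss = numpy.mean(validate_model())
            print 'epoch %i, validation error %f %%' % (
                epoch, this_validation_loss * 100.)

            # keep track of the best model seen so far and check it on the test set
            if this_validation_loss < best_validation_loss:
                best_validation_loss = this_validation_loss
                test_loss = numpy.mean(test_model())
                print '    test error of best model so far %f %%' % (
                    test_loss * 100.)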

    3. Running the Code

        The user can run the code by calling:

    python code/SdA.py

       By default the code runs 15 pre-training epochs for each layer, with a batch size of 1. The corruption level for the first layer is 0.1, for the second 0.2 and for the third 0.3. The pre-training learning rate is 0.001 and the fine-tuning learning rate is 0.1. Pre-training takes 585.01 minutes, i.e. about 13 minutes per epoch. Fine-tuning is completed after 36 epochs in 444.2 minutes, i.e. 12.34 minutes per epoch. The final validation error is 1.39% and the test error is 1.3%. These results were obtained on a machine with an Intel Xeon E5430 @ 2.66GHz CPU, with single-threaded GotoBLAS.

    4. Tips and Tricks

     

        One way to improve the running time of your code (assuming you have enough memory available) is to compute how the network, up to layer k-1, transforms your data. Namely, you start by training the first-layer dA. Once it is trained, you can compute the hidden-unit values for every datapoint in your dataset and store them as a new dataset that you use to train the dA corresponding to layer 2. Once the layer-2 dA is trained, you compute, in a similar fashion, the dataset for layer 3, and so on. You can see that, at this point, the dAs are trained individually and simply provide a non-linear transformation of the input (see the sketch below). Once all dAs are trained, you can start fine-tuning the whole model.
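        As a rough illustration of the first step, the hidden representation of the whole training set under the first dA could be pre-computed with the minimal sketch below; sda and train_set_x are assumed to exist as in the code above, and layer1_input is a hypothetical name for the stored result:

        # compile a function that maps the stored training set through the
        # first sigmoid/dA layer (no updates, purely a forward transformation)
        get_hidden = theano.function(
            inputs=[],
            outputs=sda.sigmoid_layers[0].output,
            givens={sda.x: train_set_x}
        )
        # store the transformed data as a shared variable; it can then be used
        # as the training set when pre-training the layer-2 dA
        layer1_input = theano.shared(
            numpy.asarray(get_hidden(), dtype=theano.config.floatX),
            borrow=True
        )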
