• [Implementing a Convolutional Neural Network in Python] Defining the Training and Testing Process


    Code source: https://github.com/eriklindernoren/ML-From-Scratch

    Implementation of the Conv2D convolution layer (with stride and padding): https://www.cnblogs.com/xiximayou/p/12706576.html

    Implementation of the activation functions (sigmoid, softmax, tanh, relu, leakyrelu, elu, selu, softplus): https://www.cnblogs.com/xiximayou/p/12713081.html

    Definition of the loss functions (mean squared error, cross-entropy): https://www.cnblogs.com/xiximayou/p/12713198.html

    Implementation of the optimizers (SGD, Nesterov, Adagrad, Adadelta, RMSprop, Adam): https://www.cnblogs.com/xiximayou/p/12713594.html

    Backward pass of the convolution layer: https://www.cnblogs.com/xiximayou/p/12713930.html

    Fully connected layer implementation: https://www.cnblogs.com/xiximayou/p/12720017.html

    Batch normalization layer implementation: https://www.cnblogs.com/xiximayou/p/12720211.html

    Pooling layer implementation: https://www.cnblogs.com/xiximayou/p/12720324.html

    padding2D implementation: https://www.cnblogs.com/xiximayou/p/12720454.html

    Flatten layer implementation: https://www.cnblogs.com/xiximayou/p/12720518.html

    Upsampling layer UpSampling2D implementation: https://www.cnblogs.com/xiximayou/p/12720558.html

    Dropout layer implementation: https://www.cnblogs.com/xiximayou/p/12720589.html

    Activation layer implementation: https://www.cnblogs.com/xiximayou/p/12720622.html

    First, here is the complete code:

    from __future__ import print_function, division
    from terminaltables import AsciiTable
    import numpy as np
    import progressbar
    from mlfromscratch.utils import batch_iterator
    from mlfromscratch.utils.misc import bar_widgets
    
    
    class NeuralNetwork():
        """Neural Network. Deep Learning base model.
        Parameters:
        -----------
        optimizer: class
            The weight optimizer that will be used to tune the weights in order to minimize
            the loss.
        loss: class
            Loss function used to measure the model's performance. SquareLoss or CrossEntropy.
        validation_data: tuple
            A tuple containing validation data and labels (X, y)
        """
        def __init__(self, optimizer, loss, validation_data=None):
            self.optimizer = optimizer
            self.layers = []
            self.errors = {"training": [], "validation": []}
            self.loss_function = loss()
            self.progressbar = progressbar.ProgressBar(widgets=bar_widgets)
    
            self.val_set = None
            if validation_data:
                X, y = validation_data
                self.val_set = {"X": X, "y": y}
    
        def set_trainable(self, trainable):
            """ Method which enables freezing of the weights of the network's layers. """
            for layer in self.layers:
                layer.trainable = trainable
    
        def add(self, layer):
            """ Method which adds a layer to the neural network """
            # If this is not the first layer added then set the input shape
            # to the output shape of the last added layer
            if self.layers:
                layer.set_input_shape(shape=self.layers[-1].output_shape())
    
            # If the layer has weights that need to be initialized
            if hasattr(layer, 'initialize'):
                layer.initialize(optimizer=self.optimizer)
    
            # Add layer to the network
            self.layers.append(layer)
    
        def test_on_batch(self, X, y):
            """ Evaluates the model over a single batch of samples """
            y_pred = self._forward_pass(X, training=False)
            loss = np.mean(self.loss_function.loss(y, y_pred))
            acc = self.loss_function.acc(y, y_pred)
    
            return loss, acc
    
        def train_on_batch(self, X, y):
            """ Single gradient update over one batch of samples """
            y_pred = self._forward_pass(X)
            loss = np.mean(self.loss_function.loss(y, y_pred))
            acc = self.loss_function.acc(y, y_pred)
            # Calculate the gradient of the loss function wrt y_pred
            loss_grad = self.loss_function.gradient(y, y_pred)
            # Backpropagate. Update weights
            self._backward_pass(loss_grad=loss_grad)
    
            return loss, acc
    
        def fit(self, X, y, n_epochs, batch_size):
            """ Trains the model for a fixed number of epochs """
            for _ in self.progressbar(range(n_epochs)):
                
                batch_error = []
                for X_batch, y_batch in batch_iterator(X, y, batch_size=batch_size):
                    loss, _ = self.train_on_batch(X_batch, y_batch)
                    batch_error.append(loss)
    
                self.errors["training"].append(np.mean(batch_error))
    
                if self.val_set is not None:
                    val_loss, _ = self.test_on_batch(self.val_set["X"], self.val_set["y"])
                    self.errors["validation"].append(val_loss)
    
            return self.errors["training"], self.errors["validation"]
    
        def _forward_pass(self, X, training=True):
            """ Calculate the output of the NN """
            layer_output = X
            for layer in self.layers:
                layer_output = layer.forward_pass(layer_output, training)
    
            return layer_output
    
        def _backward_pass(self, loss_grad):
            """ Propagate the gradient 'backwards' and update the weights in each layer """
            for layer in reversed(self.layers):
                loss_grad = layer.backward_pass(loss_grad)
    
        def summary(self, name="Model Summary"):
            # Print model name
            print (AsciiTable([[name]]).table)
            # Network input shape (first layer's input shape)
            print ("Input Shape: %s" % str(self.layers[0].input_shape))
            # Iterate through network and get each layer's configuration
            table_data = [["Layer Type", "Parameters", "Output Shape"]]
            tot_params = 0
            for layer in self.layers:
                layer_name = layer.layer_name()
                params = layer.parameters()
                out_shape = layer.output_shape()
                table_data.append([layer_name, str(params), str(out_shape)])
                tot_params += params
            # Print network configuration table
            print (AsciiTable(table_data).table)
            print ("Total Parameters: %d
    " % tot_params)
    
        def predict(self, X):
            """ Use the trained model to predict labels of X """
            return self._forward_pass(X, training=False)

    Now let's go through the functions one by one:

    1. __init__: this sets up the optimizer, the list of model layers, the errors dict, the loss function loss_function, and a progress bar for displaying training progress. bar_widgets is imported from mlfromscratch.utils.misc; let's see what it is:

    bar_widgets = [
        'Training: ', progressbar.Percentage(), ' ', progressbar.Bar(marker="-", left="[", right="]"),
        ' ', progressbar.ETA()
    ]
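
    This is just a list of progressbar widgets that renders a line roughly like "Training: 57% [------    ] ETA: 0:00:12". As a minimal construction sketch showing how __init__ is used (assuming the Adam optimizer and CrossEntropy loss classes from the posts linked above; note that the loss is passed as a class, not an instance, since __init__ instantiates it with loss()):

    from mlfromscratch.deep_learning.optimizers import Adam
    from mlfromscratch.deep_learning.loss_functions import CrossEntropy

    # X_val/y_val are assumed to be a prepared held-out split
    model = NeuralNetwork(optimizer=Adam(),
                          loss=CrossEntropy,
                          validation_data=(X_val, y_val))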

    2. set_trainable(): sets whether the layers' parameters should be updated during training, i.e. it freezes or unfreezes the network's weights.

    3. add(): adds a module to the network, e.g. a convolution layer, a pooling layer, an activation layer, and so on. It also wires the new layer's input shape to the previous layer's output shape and initializes the layer's weights if it has any.
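
    As a sketch of how a small CNN would be assembled with add() (assuming the Conv2D, Activation, Flatten and Dense layer classes from the posts linked above; only the first layer needs an explicit input_shape, since every later layer gets its shape from the previous layer's output_shape()):

    from mlfromscratch.deep_learning.layers import Conv2D, Activation, Flatten, Dense

    model.add(Conv2D(n_filters=16, filter_shape=(3, 3), stride=1,
                     input_shape=(1, 28, 28), padding='same'))
    model.add(Activation('relu'))
    model.add(Flatten())
    model.add(Dense(10))
    model.add(Activation('softmax'))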

    4. test_on_batch(): evaluates the model on a single batch; no backpropagation is needed here.

    5. train_on_batch(): trains on a single batch: a forward pass to compute the loss, then a backward pass to update the parameters.
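
    These two methods are already enough to write a training loop by hand, which is essentially what fit() does below. A minimal sketch (X_train, y_train, X_test and y_test are assumed to be prepared elsewhere, with the labels one-hot encoded):

    for epoch in range(10):
        for X_batch, y_batch in batch_iterator(X_train, y_train, batch_size=64):
            loss, acc = model.train_on_batch(X_batch, y_batch)
        # evaluate on held-out data without updating any weights
        val_loss, val_acc = model.test_on_batch(X_test, y_test)
        print("epoch %d: train loss %.4f, val loss %.4f" % (epoch, loss, val_loss))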

    6. fit(): feeds the data in for training (and validation); you need to specify n_epochs and batch_size. It relies on a batching function batch_iterator(), which lives in data_manipulation.py under mlfromscratch.utils:

    def batch_iterator(X, y=None, batch_size=64):
        """ Simple batch generator """
        n_samples = X.shape[0]
        for i in np.arange(0, n_samples, batch_size):
            begin, end = i, min(i+batch_size, n_samples)
            if y is not None:
                yield X[begin:end], y[begin:end]
            else:
                yield X[begin:end]
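
    To make the batching concrete, a quick sketch: with 10 samples and batch_size=4 the iterator yields batches of 4, 4 and 2 rows, the last one truncated by min(i + batch_size, n_samples):

    import numpy as np

    X = np.arange(20).reshape(10, 2)
    y = np.arange(10)
    for X_batch, y_batch in batch_iterator(X, y, batch_size=4):
        print(X_batch.shape)  # (4, 2), (4, 2), (2, 2)

    # fit() wraps this iterator in an epoch loop and returns the loss histories:
    # train_err, val_err = model.fit(X_train, y_train, n_epochs=50, batch_size=256)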

    7. _forward_pass(): the forward pass through the model's layers, feeding each layer's output into the next.

    8. _backward_pass(): the backward pass through the model's layers, propagating the loss gradient in reverse layer order and letting each layer update its weights.

    9. summary(): prints each layer's type, parameter count, and output shape, along with the total number of parameters.
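
    As a quick sanity check on the numbers summary() adds up (a sketch assuming the weight shapes from the layer posts linked above: a (n_inputs, n_units) weight matrix plus one bias per unit for Dense, and one bias per filter for Conv2D):

    n_inputs, n_units = 128, 10
    dense_params = n_inputs * n_units + n_units  # 128 * 10 + 10 = 1290

    n_filters, fh, fw, in_channels = 16, 3, 3, 1
    conv_params = n_filters * (fh * fw * in_channels) + n_filters  # 16 * 9 + 16 = 160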

    10. predict(): returns the model's predictions for X (a forward pass with training=False).
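
    Since a classifier here typically ends with a softmax activation layer, predict() returns class probabilities; a small sketch of turning them into label predictions (X_test assumed prepared elsewhere, np being the numpy import from the top of the file):

    y_prob = model.predict(X_test)       # shape (n_samples, n_classes)
    y_label = np.argmax(y_prob, axis=1)  # index of the most probable class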

    It is not hard to see that this code borrows design ideas from TensorFlow: add(), fit(), train_on_batch(), summary() and predict() all mirror the Keras Sequential API.

  • Original article: https://www.cnblogs.com/xiximayou/p/12725873.html