• 神经网络与深度学习(邱锡鹏)编程练习4 FNN 简单神经网络 pytorch


    https://blog.csdn.net/bit452/article/details/109686936

    import torch
    from torchvision import transforms
    from torchvision import datasets
    from torch.utils.data import DataLoader
    import torch.nn.functional as F
    import torch.optim as optim
    
    # ref: https://blog.csdn.net/bit452/article/details/109686936
    
    # prepare dataset
    batch_size = 64
    transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])  # 归一化,均值和方差
    
    train_dataset = datasets.MNIST(root='../dataset/mnist/', train=True, download=True, transform=transform)
    train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
    test_dataset = datasets.MNIST(root='../dataset/mnist/', train=False, download=True, transform=transform)
    test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)
    
    
    # design model using class
    class Net(torch.nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.l1 = torch.nn.Linear(784, 512)
            self.l2 = torch.nn.Linear(512, 256)
            self.l3 = torch.nn.Linear(256, 128)
            self.l4 = torch.nn.Linear(128, 64)
            self.l5 = torch.nn.Linear(64, 10)
    
        def forward(self, x):
            x = x.view(-1, 784)  # -1其实就是自动获取mini_batch
            x = F.relu(self.l1(x))
            x = F.relu(self.l2(x))
            x = F.relu(self.l3(x))
            x = F.relu(self.l4(x))
            return self.l5(x)  # 最后一层不做激活,不进行非线性变换
    
    
    model = Net()
    
    # construct loss and optimizer
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
    
    
    # training cycle forward, backward, update
    def train(epoch):
        running_loss = 0.0
        for batch_idx, data in enumerate(train_loader, 0):
            # 获得一个批次的数据和标签
            inputs, target = data
            optimizer.zero_grad()
            # 获得模型预测结果(64, 10)
            outputs = model(inputs)
            # 交叉熵代价函数outputs(64,10),target(64)
            loss = criterion(outputs, target)
            loss.backward()
            optimizer.step()
    
            running_loss += loss.item()
            if batch_idx % 300 == 299:
                print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
                running_loss = 0.0
    
    
    def test():
        correct = 0
        total = 0
        with torch.no_grad():
            for data in test_loader:
                images, labels = data
                outputs = model(images)
                _, predicted = torch.max(outputs.data, dim=1)  # dim = 1 列是第0个维度,行是第1个维度
                total += labels.size(0)
                correct += (predicted == labels).sum().item()  # 张量之间的比较运算
        print('accuracy on test set: %d %% ' % (100 * correct / total))
    
    
    if __name__ == '__main__':
        for epoch in range(10):
            train(epoch)
            test()
  • 相关阅读:
    一类涉及矩阵范数的优化问题
    MATLAB小实例:读取Excel表格中多个Sheet的数据
    深度多视图子空间聚类
    具有协同训练的深度嵌入多视图聚类
    结构深层聚类网络
    一种数据选择偏差下的去相关聚类方法
    shell编程基础二
    shell编程基础一
    如何处理Teamcenter流程回退情况
    汽车行业数字化车间解决方案
  • 原文地址:https://www.cnblogs.com/hbuwyg/p/16342893.html
Copyright © 2020-2023  润新知