• torch tutorial [3]: Using PyTorch's built-in backpropagation (autograd)


    # -*- coding: utf-8 -*-
    import torch
    from torch.autograd import Variable
    
    dtype = torch.FloatTensor
    # dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
    
    # N is batch size; D_in is input dimension;
    # H is hidden dimension; D_out is output dimension.
    N, D_in, H, D_out = 64, 1000, 100, 10
    
    # Create random Tensors to hold input and outputs, and wrap them in Variables.
    # Setting requires_grad=False indicates that we do not need to compute gradients
    # with respect to these Variables during the backward pass.
    x = Variable(torch.randn(N, D_in).type(dtype), requires_grad=False)
    y = Variable(torch.randn(N, D_out).type(dtype), requires_grad=False)
    
    # Create random Tensors for weights, and wrap them in Variables.
    # Setting requires_grad=True indicates that we want to compute gradients with
    # respect to these Variables during the backward pass.
    w1 = Variable(torch.randn(D_in, H).type(dtype), requires_grad=True)
    w2 = Variable(torch.randn(H, D_out).type(dtype), requires_grad=True)
    
    learning_rate = 1e-6
    for t in range(500):
        # Forward pass: compute predicted y using operations on Variables; these
        # are exactly the same operations we used to compute the forward pass using
        # Tensors, but we do not need to keep references to intermediate values since
        # we are not implementing the backward pass by hand.
        y_pred = x.mm(w1).clamp(min=0).mm(w2)
    
        # Compute and print loss using operations on Variables.
        # Now loss is a Variable of shape (1,) and loss.data is a Tensor of shape
        # (1,); loss.data[0] is a scalar value holding the loss.
        loss = (y_pred - y).pow(2).sum()
        print(t, loss.data[0])
    
        # Use autograd to compute the backward pass. This call will compute the
        # gradient of loss with respect to all Variables with requires_grad=True.
        # After this call w1.grad and w2.grad will be Variables holding the gradient
        # of the loss with respect to w1 and w2 respectively.
        loss.backward()
    
        # Update weights using gradient descent; w1.data and w2.data are Tensors,
        # w1.grad and w2.grad are Variables and w1.grad.data and w2.grad.data are
        # Tensors.
        w1.data -= learning_rate * w1.grad.data
        w2.data -= learning_rate * w2.grad.data
    
        # Manually zero the gradients after updating weights
        w1.grad.data.zero_()
        w2.grad.data.zero_()
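
The code above targets the pre-0.4 Variable API; in PyTorch 0.4 and later, Variable has been merged into Tensor and loss.data[0] raises an error (use loss.item() instead). Below is a minimal sketch of the same two-layer network written against the modern tensor-based autograd API, assuming PyTorch >= 0.4; it is an equivalent rewrite for reference, not part of the original tutorial.

    import torch

    dtype = torch.float
    device = torch.device("cpu")
    # device = torch.device("cuda")  # Uncomment this to run on GPU

    # N is batch size; D_in is input dimension;
    # H is hidden dimension; D_out is output dimension.
    N, D_in, H, D_out = 64, 1000, 100, 10

    # Plain tensors replace Variables; only the weights need gradients.
    x = torch.randn(N, D_in, device=device, dtype=dtype)
    y = torch.randn(N, D_out, device=device, dtype=dtype)
    w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
    w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

    learning_rate = 1e-6
    for t in range(500):
        # Forward pass: the same operations as above, applied directly to tensors.
        y_pred = x.mm(w1).clamp(min=0).mm(w2)

        # loss is a 0-dimensional tensor; .item() extracts the Python number.
        loss = (y_pred - y).pow(2).sum()
        print(t, loss.item())

        # Compute gradients of loss with respect to w1 and w2.
        loss.backward()

        # Update the weights in-place inside torch.no_grad() so the update step
        # itself is not recorded by autograd, then reset the gradients to zero.
        with torch.no_grad():
            w1 -= learning_rate * w1.grad
            w2 -= learning_rate * w2.grad
            w1.grad.zero_()
            w2.grad.zero_()

The manual update and zeroing could equivalently be handed to an optimizer, e.g. torch.optim.SGD([w1, w2], lr=learning_rate) with optimizer.step() and optimizer.zero_grad() inside the loop.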
    

      

  • Original post: https://www.cnblogs.com/learning-c/p/6985279.html