• with torch.no_grad()


    In PyTorch, every tensor has a requires_grad attribute. If it is set to True, gradients for that tensor are computed automatically during backpropagation. requires_grad defaults to False. If a leaf node (a tensor you create yourself) has requires_grad=True, then every node that depends on it also has requires_grad=True, even when the other tensors it depends on have requires_grad=False:
    x = torch.randn(10, 5, requires_grad=True)
    y = torch.randn(10, 5, requires_grad=False)
    z = torch.randn(10, 5, requires_grad=False)
    w = x + y + z
    print(w.requires_grad)

    True
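    The reverse direction is also useful: a leaf's requires_grad can be toggled in place with requires_grad_(), and .detach() returns a tensor that shares the same data but is cut out of the autograd graph. A minimal sketch:

    ```python
    import torch

    x = torch.randn(10, 5, requires_grad=True)
    w = (x * 2).sum()
    print(w.requires_grad)        # True: w depends on x

    d = x.detach()                # same data, detached from the graph
    print(d.requires_grad)        # False

    x.requires_grad_(False)       # toggle the leaf in place
    print((x * 2).requires_grad)  # False: downstream tensors no longer track grad
    ```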

    with torch.no_grad()
    As mentioned above, volatile has been deprecated; its replacement is with torch.no_grad(). It behaves much like volatile did: even if a tensor x has requires_grad=True, any new tensor w computed from x inside the block has requires_grad=False and grad_fn=None, so no gradient will be computed for w. For example:

     
    x = torch.randn(10, 5, requires_grad=True)
    y = torch.randn(10, 5, requires_grad=True)
    z = torch.randn(10, 5, requires_grad=True)
    with torch.no_grad():
        w = x + y + z
        print(w.requires_grad)
        print(w.grad_fn)
    print(w.requires_grad)

    False
    None
    False
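    torch.no_grad can also be used as a decorator to disable gradient tracking for a whole function (typical for inference code), and torch.enable_grad re-enables tracking inside a no_grad block. A short sketch:

    ```python
    import torch

    @torch.no_grad()            # decorator form: the whole function runs without grad tracking
    def inference(x):
        return x * 2

    x = torch.randn(3, requires_grad=True)
    print(inference(x).requires_grad)  # False

    with torch.no_grad():
        with torch.enable_grad():      # re-enable tracking inside a no_grad block
            y = x * 2
    print(y.requires_grad)             # True
    ```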
  • Original article: https://www.cnblogs.com/h694879357/p/15984070.html