• PyTorch documentation reading (1)


    This part covers the first two sections of the PyTorch 0.4.0 English documentation; the order may differ a little from the docs:

    • torch
    • torch.Tensor

    Tensors

    | Data type | CPU tensor | GPU tensor | dtype |
    |---|---|---|---|
    | 32-bit floating point | torch.FloatTensor | torch.cuda.FloatTensor | torch.float32 |
    | 64-bit floating point | torch.DoubleTensor | torch.cuda.DoubleTensor | torch.float64 |
    | 16-bit floating point | N/A | torch.cuda.HalfTensor | torch.float16 |
    | 8-bit integer (unsigned) | torch.ByteTensor | torch.cuda.ByteTensor | torch.uint8 |
    | 8-bit integer (signed) | torch.CharTensor | torch.cuda.CharTensor | torch.int8 |
    | 16-bit integer (signed) | torch.ShortTensor | torch.cuda.ShortTensor | torch.int16 |
    | 32-bit integer (signed) | torch.IntTensor | torch.cuda.IntTensor | torch.int32 |
    | 64-bit integer (signed) | torch.LongTensor | torch.cuda.LongTensor | torch.int64 |
    • torch.is_tensor / torch.is_storage
    • torch.set_default_tensor_type()
      This one is useful: if most of your tensors are built on the GPU, you can set the default tensor type to a CUDA tensor:
    if torch.cuda.is_available():
        if args.cuda:
            torch.set_default_tensor_type('torch.cuda.FloatTensor')
        if not args.cuda:
            print("WARNING: It looks like you have a CUDA device, but aren't " +
                  "using CUDA.\nRun with --cuda for optimal training speed.")
            torch.set_default_tensor_type('torch.FloatTensor')
    else:
        torch.set_default_tensor_type('torch.FloatTensor')
    
    • torch.numel(input) → int / numel() / nelement() — total number of elements in the tensor
    • torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None) — print options
    • torch.set_flush_denormal(mode) → bool — disables denormal floating-point numbers on the CPU
    >>> torch.set_flush_denormal(True)
    True
    >>> torch.tensor([1e-323], dtype=torch.float64)
    tensor([ 0.], dtype=torch.float64)
    >>> torch.set_flush_denormal(False)
    True
    >>> torch.tensor([1e-323], dtype=torch.float64)
    tensor(9.88131e-324 *
           [ 1.0000], dtype=torch.float64)
    

    ## Creation Ops

    The torch factory functions with the `_like` suffix create a new tensor with the same settings (shape, dtype, layout, device) as the given tensor; only the values differ.
    Available for: zeros, ones, empty, full, rand, randint, randn
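    A minimal sketch of the `_like` variants (the tensor x here is only an illustration):
    x = torch.tensor([[1., 2.], [3., 4.]])
    torch.zeros_like(x)   # same shape / dtype / device as x, filled with 0
    torch.ones_like(x)    # filled with 1
    torch.rand_like(x)    # uniform samples in [0, 1)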

    • torch.tensor(data, dtype=None, device=None, requires_grad=False) → Tensor
    • torch.from_numpy(ndarray) → Tensor
    • torch.eye(n, m=None, out=None)
    • torch.linspace(start, end, steps=100, out=None) → Tensor
    • torch.logspace(start, end, steps=100, out=None) → Tensor
    • torch.ones(*sizes, out=None) → Tensor
    • torch.empty(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
    • torch.reshape(input, shape) → Tensor — careful, this one is a pitfall: it may return either a view of the input or a copy (see the pitfalls section at the end)
    • torch.rand(*sizes, out=None) → Tensor (uniform distribution on [0, 1))
    • torch.randn(*sizes, out=None) → Tensor (standard normal distribution)
    • torch.randperm(n, out=None) → LongTensor (a random permutation of the integers 0 .. n-1)
    • torch.randint(low=0, high, size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
    • torch.arange(start, end, step=1, out=None) → Tensor / torch.range(start, end, step=1, out=None) → Tensor — range includes the end point, arange does not (see the sketch after this list)
    • torch.zeros(*sizes, out=None) → Tensor
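    A quick sketch of the endpoint difference mentioned above (values shown as comments; in recent PyTorch versions torch.range is deprecated in favour of torch.arange):
    torch.arange(1, 4)       # 1, 2, 3        -- end point excluded
    torch.range(1, 4)        # 1, 2, 3, 4     -- end point included (deprecated)
    torch.linspace(0, 1, 5)  # 0.00, 0.25, 0.50, 0.75, 1.00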

    ## Indexing, Slicing, Joining, Mutating Ops

    Most of these operations are also available as methods on the tensor itself.
    The generic trailing-underscore suffix `_` on a tensor method marks an in-place operation.

    • torch.cat(seq, dim=0, out=None) → Tensor / torch.stack(sequence, dim=0) — very common: cat joins tensors along an existing dimension, stack joins them along a new dimension (a new dimension is created and the existing ones shift back), as in the sketch below
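    A minimal sketch of the difference:
    a = torch.zeros(2, 3)
    b = torch.zeros(2, 3)
    torch.cat([a, b], dim=0).size()    # torch.Size([4, 3])    -- existing dim grows
    torch.stack([a, b], dim=0).size()  # torch.Size([2, 2, 3]) -- a new dim is inserted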
    • torch.split(tensor, split_size, dim=0) / torch.chunk(tensor, chunks, dim=0) / split() / chunk() — these two are close in function: split cuts along an axis into pieces of a given size (if the size does not divide evenly, the last piece is smaller), while chunk cuts into a fixed number of pieces (and, like split, the last piece may be smaller)
    a = torch.Tensor([1, 2, 3, 4, 5])
    b = a.split(2)   # pieces of size 2: ([1, 2], [3, 4], [5])
    c = a.chunk(3)   # 3 pieces:         ([1, 2], [3, 4], [5])
    
    • torch.gather(input, dim, index, out=None) → Tensor / gather(dim, index) — this one is quite puzzling; I already spent ages on it back when learning TensorFlow ╮(╯﹏╰)╭. Note that every index must be a torch.LongTensor
     t = torch.Tensor([[1, 2], [3, 4]])
     torch.gather(t, 1, torch.LongTensor([[0, 0], [1, 0]]))   # [[1., 1.], [4., 3.]]
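    The rule for a 2-D input with dim=1 is out[i][j] = input[i][index[i][j]] (with dim=0 it is out[i][j] = input[index[i][j]][j]). Tracing the call above: out[0][0] = t[0][0] = 1, out[0][1] = t[0][0] = 1, out[1][0] = t[1][1] = 4, out[1][1] = t[1][0] = 3, giving [[1, 1], [4, 3]].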
    
    • torch.index_select(input, dim, index, out=None) → Tensor / index_select(dim, index) — a very important function
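    A minimal sketch: pick whole rows/columns by index along a dimension.
    a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
    idx = torch.LongTensor([0, 2])
    a.index_select(1, idx)   # [[1., 3.], [4., 6.]] -- columns 0 and 2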
    • torch.masked_select(input, mask, out=None) → Tensor / masked_select(mask) — note that every mask must be a torch.ByteTensor. Also note that the mask does not need the same shape or the same number of elements as the tensor, but it must be broadcastable with it; selection then follows the broadcast mask:
    a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
    mask = torch.Tensor([[1, 0], [0, 0], [1, 0]]).type(torch.ByteTensor)
    mask_1 = torch.Tensor([[1], [0]]).type(torch.ByteTensor)
    mask_2 = torch.Tensor([0, 1, 1]).type(torch.ByteTensor)
    b = a.masked_select(mask)    # error: shape (3, 2) is not broadcastable with (2, 3)
    c = a.masked_select(mask_1)  # [1., 2., 3.] -- mask_1 broadcasts across the columns
    d = a.masked_select(mask_2)  # [2., 3., 5., 6.] -- mask_2 broadcasts across the rows
    
    • torch.nonzero(input, out=None) → LongTensor — note that it returns the N-dimensional indices of the non-zero elements (one row per element)
    >>> torch.nonzero(torch.Tensor([[0.6, 0.0, 0.0, 0.0],
    ...                             [0.0, 0.4, 0.0, 0.0],
    ...                             [0.0, 0.0, 1.2, 0.0],
    ...                             [0.0, 0.0, 0.0,-0.4]]))
    
     0  0
     1  1
     2  2
     3  3
    [torch.LongTensor of size 4x2]
    
    • torch.squeeze(input, dim=None, out=None) / squeeze(dim=None) — an extremely important function: removes dimensions of size 1
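    A minimal sketch:
    x = torch.zeros(2, 1, 3, 1)
    x.squeeze().size()    # torch.Size([2, 3])    -- all size-1 dims removed
    x.squeeze(1).size()   # torch.Size([2, 3, 1]) -- only dim 1 removed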
    • torch.stack(sequence, dim=0)
    • torch.t/t()
    • torch.transpose(input, dim0, dim1, out=None) → Tensor / transpose() — swaps any two dimensions
    • torch.take(input, indices) → Tensor — indexes the input as if it had been flattened to a 1-D tensor
    • tensor.permute(dims) — very important: reorders all dimensions at once
    x = torch.randn(2, 3, 5)
    x.permute(2, 0, 1).size()   # torch.Size([5, 2, 3])
    
    • torch.unbind(tensor, dim=0) — removes the given dimension and returns a tuple containing all slices along that dimension
    • torch.unsqueeze(input, dim, out=None) — inserts a dimension of size 1 at the given position
    • torch.where(condition, x, y) → Tensor — note that condition is a ByteTensor mask
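    A minimal sketch of unsqueeze and where:
    x = torch.Tensor([[1, 2], [3, 4]])
    x.unsqueeze(0).size()                        # torch.Size([1, 2, 2])
    torch.where(x > 2, x, torch.zeros_like(x))   # [[0., 0.], [3., 4.]] -- x > 2 is the ByteTensor condition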

    Random sampling

    • torch.manual_seed(seed)
    • torch.initial_seed() — note: to control variables in comparison experiments, when data is loaded with multiple worker processes the seed of every worker needs to be set explicitly, e.g. as sketched below
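    One common way to do this with torch.utils.data.DataLoader is a worker_init_fn (a sketch; the base seed 1234 is an arbitrary illustration):
    import numpy as np
    import torch

    def worker_init_fn(worker_id):
        # give every data-loading worker a fixed, distinct seed
        base_seed = 1234
        np.random.seed(base_seed + worker_id)
        torch.manual_seed(base_seed + worker_id)

    # loader = torch.utils.data.DataLoader(dataset, num_workers=4,
    #                                      worker_init_fn=worker_init_fn)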
    • torch.get_rng_state() ->(ByteTensor)
    • torch.set_rng_state
    • torch.default_generator
    • torch.bernoulli(input, out=None) → Tensor — Bernoulli coin flips, one per element; often used for hard example mining
    • torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor — draws samples from a multinomial distribution
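    A quick sketch (the outputs are random, shown only for illustration):
    probs = torch.Tensor([0.1, 0.5, 0.9])
    torch.bernoulli(probs)           # e.g. 0., 1., 1. -- one coin flip per element
    weights = torch.Tensor([1, 10, 3, 0])
    torch.multinomial(weights, 2)    # e.g. 1, 2 -- indices drawn by weight, without replacement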
    • torch.normal(mean, std, out=None) — draws random numbers from separate normal distributions, one (mean, std) pair per output element
    >>> torch.normal(mean=torch.arange(1, 11), std=torch.arange(1, 0, -0.1))
    
     1.5104
     1.6955
     2.4895
     4.9185
     4.9895
     6.9155
     7.3683
     8.1836
     8.7164
     9.8916
    [torch.FloatTensor of size 10]
    >>> torch.normal(mean=0.5, std=torch.arange(1, 6))
    
      0.5723
      0.0871
     -0.3783
     -2.5689
     10.7893
    [torch.FloatTensor of size 5]
    >>> torch.normal(mean=torch.arange(1, 6))
    
     1.1681
     2.8884
     3.7718
     2.5616
     4.2500
    [torch.FloatTensor of size 5]
    

    Serialization

    • torch.save
    • torch.load

    Parallelism

    • torch.get_num_threads
    • torch.set_num_threads(int)

    Math operations

    Tensors have corresponding methods for all of these math functions.
    A few commonly used ones:

    • ceil/floor/frac
    • round
    • torch.clamp(input, min, max, out=None) → Tensor — equivalent to TensorFlow's tf.clip_by_value
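    For example:
    a = torch.Tensor([-1.5, 0.3, 2.7])
    torch.clamp(a, min=-1, max=1)   # -1.0000, 0.3000, 1.0000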
    • torch.argmax(input, dim=None, keepdim=False)/torch.argmin(input, dim=None, keepdim=False)
    • torch.cumprod(input, dim, out=None) → Tensor $$y_i = x_1 \times x_2 \times x_3 \times \dots \times x_i$$ / torch.prod(input, dim, keepdim=False, out=None) → Tensor
    • torch.cumsum(input, dim, out=None) → Tensor — same as above, but with sums: $$y_i = x_1 + x_2 + \dots + x_i$$
    • torch.dist(input, other, p=2) → Tensor — the p-norm of (input - other) / torch.norm(input, p, dim, keepdim=False, out=None) → Tensor
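    A minimal sketch:
    x = torch.Tensor([1., 2., 3.])
    y = torch.Tensor([1., 2., 5.])
    torch.norm(x, 2)      # ≈ 3.7417 -- L2 norm, sqrt(1 + 4 + 9)
    torch.dist(x, y, 2)   # 2.0      -- L2 norm of x - y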
    • torch.mean(input, dim, keepdim=False, out=None) → Tensor — note that keepdim controls whether the reduced dimension is kept (with size 1)
    >>> a = torch.randn(4, 4)
    >>> a
    tensor([[-0.3841,  0.6320,  0.4254, -0.7384],
            [-0.9644,  1.0131, -0.6549, -1.4279],
            [-0.2951, -1.3350, -0.7694,  0.5600],
            [ 1.0842, -0.9580,  0.3623,  0.2343]])
    >>> torch.mean(a, 1)
    tensor([-0.0163, -0.5085, -0.4599,  0.1807])
    >>> torch.mean(a, 1, True)
    tensor([[-0.0163],
            [-0.5085],
            [-0.4599],
            [ 0.1807]])
    
    • torch.median() — returns the median value
    >>> a = torch.randn(1, 3)
    >>> a
    tensor([[ 1.5219, -1.5212,  0.2202]])
    >>> torch.median(a)
    tensor(0.2202)
    
    • torch.median(input, dim=-1, keepdim=False, values=None, indices=None) -> (Tensor, LongTensor)
    >>> a = torch.randn(4, 5)
    >>> a
    tensor([[ 0.2505, -0.3982, -0.9948,  0.3518, -1.3131],
            [ 0.3180, -0.6993,  1.0436,  0.0438,  0.2270],
            [-0.2751,  0.7303,  0.2192,  0.3321,  0.2488],
            [ 1.0778, -1.9510,  0.7048,  0.4742, -0.7125]])
    >>> torch.median(a, 1)
    (tensor([-0.3982,  0.2270,  0.2488,  0.4742]), tensor([ 1,  4,  4,  3]))
    
    • torch.std(input, dim, keepdim=False, unbiased=True, out=None) → Tensor / torch.var(input, dim, keepdim=False, unbiased=True, out=None) → Tensor — standard deviation and variance
    • torch.sum(input, dim, keepdim=False, out=None) → Tensor
    • torch.unique(input, sorted=False, return_inverse=False) — removes duplicate elements (1-D tensors only)
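    For example:
    t = torch.tensor([1, 3, 2, 3, 1])
    torch.unique(t, sorted=True)   # 1, 2, 3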

    Comparison Ops

    • torch.eq(input, other, out=None) → Tensor — other must be broadcastable with input; returns a mask (ByteTensor)
    • torch.equal(tensor1, tensor2) → bool
    • torch.ge / gt / le / lt / ne — ≥ / > / ≤ / < / ≠
    • torch.topk(input, k, dim=None, largest=True, sorted=True, out=None) -> (Tensor, LongTensor)/torch.kthvalue(input, k, dim=None, keepdim=False, out=None) -> (Tensor, LongTensor)
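    A minimal sketch:
    x = torch.Tensor([1., 5., 3., 2.])
    torch.topk(x, 2)        # (values [5., 3.], indices [1, 2]) -- the 2 largest
    torch.kthvalue(x, 1)    # (1., 0) -- the 1st smallest value and its index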
    • torch.max(input) → Tensor
    • torch.max(input, dim, keepdim=False, out=None) -> (Tensor, LongTensor)
    • torch.max(input, other, out=None) → Tensor
    >>> a = torch.randn(4)
    >>> a
    tensor([ 0.2942, -0.7416,  0.2653, -0.1584])
    >>> b = torch.randn(4)
    >>> b
    tensor([ 0.8722, -1.7421, -0.4141, -0.5055])
    >>> torch.max(a, b)
    tensor([ 0.8722, -0.7416,  0.2653, -0.1584])
    
    • torch.min() also comes in the same three forms, used exactly like max
    • torch.sort(input, dim=None, descending=False, out=None) -> (Tensor, LongTensor)

    BLAS and LAPACK Operations

    All the basic matrix operations.
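    A few of them, as a quick sketch:
    a = torch.randn(2, 3)
    b = torch.randn(3, 4)
    torch.mm(a, b).size()      # torch.Size([2, 4]) -- plain 2-D matrix multiply
    torch.matmul(a, b).size()  # torch.Size([2, 4]) -- broadcasting matmul, also handles batched inputs
    v = torch.randn(3)
    torch.mv(a, v).size()      # torch.Size([2])    -- matrix-vector product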

    Spectral Ops

    These have finally been added.

    Tensor-specific Ops

    The new_ prefix on tensor methods creates a new tensor that keeps this tensor's dtype and device; available for ones, zeros, full, tensor (a pitfall)

    >>> tensor = torch.ones((2,), dtype=torch.int8)
    >>> data = [[0, 1], [2, 3]]
    >>> tensor.new_tensor(data)
    tensor([[ 0,  1],
            [ 2,  3]], dtype=torch.int8)
    
    • torch.Tensor.item() — a pitfall: it only works on single-element tensors; good for pulling out scalars such as loss and accuracy values
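    For example:
    loss = torch.tensor(0.5)
    loss.item()                      # 0.5, a plain Python number
    # torch.tensor([1, 2]).item()    # error: only one-element tensors can be converted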
    • apply_(callable) → Tensor — like map: a Python-level CPU function, so it is inefficient
    • cauchy_(median=0, sigma=1, *, generator=None) → Tensor
    • char(),byte(),double() ,int()
    • clone() / copy_(src) — the first is a full clone; the second copies the values of src into self and supports broadcasting
    • contiguous() → Tensor — some ops assume a contiguous memory layout for efficiency; call this to guarantee that the tensor is stored contiguously
    • is_contiguous() → bool / is_pinned() / is_cuda / is_signed()
    • cpu()/cuda()
    • dim()
    • device
    • element_size() → int — returns the size in bytes of a single element
    • expand(*sizes) → Tensor — important: expands axes of size 1 to a larger size (without copying memory)
    >>> x = torch.tensor([[1], [2], [3]])
    >>> x.size()
    torch.Size([3, 1])
    >>> x.expand(3, 4)
    tensor([[ 1,  1,  1,  1],
            [ 2,  2,  2,  2],
            [ 3,  3,  3,  3]])
    >>> x.expand(-1, 4)   # -1 means not changing the size of that dimension
    tensor([[ 1,  1,  1,  1],
            [ 2,  2,  2,  2],
            [ 3,  3,  3,  3]])
    
    • index_copy_(dim, index, tensor) → Tensor — copies elements of tensor into self at the given indices
    >>> x = torch.zeros(5, 3)
    >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
    >>> index = torch.tensor([0, 4, 2])
    >>> x.index_copy_(0, index, t)
    tensor([[ 1.,  2.,  3.],
            [ 0.,  0.,  0.],
            [ 7.,  8.,  9.],
            [ 0.,  0.,  0.],
            [ 4.,  5.,  6.]])
    
    • index_fill_(dim, index, val) → Tensor
    • map_(tensor, callable)
      Applies callable for each element in self tensor and the given tensor and stores the results in self tensor. self tensor and the given tensor must be broadcastable.
      (to be continued)

    Some pitfalls

    • new_tensor always copies the data into a new tensor; to avoid a copy, use torch.Tensor.requires_grad_() or torch.Tensor.detach()
    • masked_select and index_select also create new tensors (they copy data)
    • reshape, resize_, view — view requires contiguous memory, while reshape may return either a view or a copy, as in the sketch below
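    A small sketch of the difference (behaviour as in PyTorch 0.4 and later):
    x = torch.randn(2, 3)
    y = x.t()                   # transpose: y is non-contiguous
    # y.view(6)                 # error: view needs a contiguous tensor
    y.contiguous().view(6)      # works: make a contiguous copy first
    y.reshape(6)                # works: returns a view if possible, otherwise a copy
    z = x.detach()              # shares data with x but is detached from autograd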
      (to be continued)
  • Original post: https://www.cnblogs.com/yhyue/p/9261410.html