• Getting Started with TensorRT for Model Deployment


    TensorRT is NVIDIA's accelerator for unified model deployment. It targets NVIDIA's own hardware platforms, such as the NVIDIA A100 GPU and the Jetson Xavier developer kit, and it accepts models trained in the popular frameworks, such as TensorFlow and PyTorch.

    Official definition:

    TensorRT is built on CUDA, NVIDIA’s parallel programming model, and enables you to optimize inference for all deep learning frameworks leveraging libraries, development tools and technologies in CUDA-X for artificial intelligence, autonomous machines, high-performance computing, and graphics.

    TensorRT consists of two parts: an inference optimizer and a runtime. In this it resembles Microsoft's ONNX Runtime, but ONNX Runtime generally accepts only models in the ONNX format, whereas TensorRT can ingest models from essentially every major framework, including ONNX, PyTorch, and TensorFlow.
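
    For concreteness, a common route into TensorRT is via ONNX. Below is a minimal sketch of exporting a PyTorch model with the standard torch.onnx.export API; the file name resnet18.onnx is just a placeholder, and the exported file can then be fed to ONNX Runtime or to TensorRT's ONNX parser.

    import torch
    import torchvision

    # Export a trained PyTorch model to ONNX; both ONNX Runtime and
    # TensorRT's ONNX parser can consume the resulting file.
    model = torchvision.models.resnet18(pretrained=True).eval()
    dummy = torch.randn(1, 3, 224, 224)   # example input that fixes the shapes
    torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)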

    TensorRT applies five main optimizations when it processes a model:

    1. Layer and tensor fusion

    Kernel fusion
    The main goal of kernel fusion is to raise GPU utilization by reducing the number of kernels: every additional operator adds another round of data reads and writes, which is relatively time-consuming, and also adds another kernel launch.

    • Modules that can be fused, such as the conv-bn-relu trio, are merged into a single kernel, which eliminates the intermediate data reads/writes and the repeated launches (a sketch of the conv-bn folding arithmetic follows the quote below).
    • Modules that share the same input and have identical structure but feed different outputs, such as three 1×1 convolution blocks fed by one input (shown in a figure in the original post), are fused horizontally and computed in parallel, with the results then routed to their respective consumers. The quoted passage below describes the mechanism.

    When you have identical kernels which take the same input but just use different weights, you can combine the kernels by making a single kernel wider in the sense that it processes more of these operations in parallel. The output from these horizontally fused kernels will be automatically split up if they feed to different kernels further down the graph.
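
    The arithmetic behind conv-bn fusion is easy to sketch. The following is an illustrative fold in PyTorch (a sketch of the idea, not TensorRT's actual code): the batch-norm scale and shift are absorbed into the convolution's weights and bias, so at inference time one kernel does the work of two layers.

    import torch
    import torch.nn as nn

    # Illustrative conv-bn folding (a sketch, not TensorRT's implementation):
    # y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    # is rewritten as one convolution with rescaled weights and bias.
    def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
        fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                          conv.kernel_size, conv.stride,
                          conv.padding, bias=True)
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # gamma / sqrt(var+eps)
        fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
        bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
        fused.bias.data = (bias - bn.running_mean) * scale + bn.bias.data
        return fused

    # sanity check: the fused conv matches conv followed by bn in eval mode
    conv, bn = nn.Conv2d(3, 16, 3, padding=1).eval(), nn.BatchNorm2d(16).eval()
    x = torch.randn(1, 3, 32, 32)
    print(torch.allclose(fuse_conv_bn(conv, bn)(x), bn(conv(x)), atol=1e-5))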

    2. Precision Calibration

    Precision calibration: inference runs only the forward pass, with no backward pass, so 32-bit floating point is not required. The forward pass can reasonably use FP16 or INT8 instead, which makes the stored model smaller and lowers memory usage and latency.

    The concrete implementation is left for a follow-up; quoting the official description:

    TensorRT achieves this by using an automated parameter-free calibration step to change the weights and activation tensors into lower precision using a representative input sample, and this is done such that the model minimizes the accuracy loss.
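
    With torch2trt (used in the example later in this post), reduced precision is a one-flag change at conversion time. A minimal sketch, assuming torch2trt's fp16_mode / int8_mode keyword arguments; calib_dataset is a hypothetical representative dataset that INT8 calibration samples from.

    from torch2trt import torch2trt

    # model and x as in the AlexNet example below
    model_trt_fp16 = torch2trt(model, [x], fp16_mode=True)
    model_trt_int8 = torch2trt(model, [x], int8_mode=True,
                               int8_calib_dataset=calib_dataset)  # hypothetical dataset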

    3. Kernel Auto-tuning

    The same operation (convolution, for example) has many different low-level implementations. TensorRT selects the best one for your parameters, such as batch size, filter size, and input data size, and for the target platform.
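
    One knob you control here is the workspace size: it caps the scratch memory the auto-tuner may assume is available, which constrains which kernel tactics are even eligible. A rough sketch with the TensorRT Python API (TensorRT 7/8 era calls; newer releases replace max_workspace_size and build_engine, and model.onnx is a placeholder path):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:   # placeholder path
        parser.parse(f.read())

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30   # 1 GiB of scratch for tactic selection
    engine = builder.build_engine(network, config)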

    4. Dynamic Tensor Memory

    Dynamic tensor memory ensures that memory is allocated for each tensor only for the duration of its usage. This naturally reduces the memory footprint and improves memory reuse.

    5. Multi-Stream Execution

    Multi-stream execution is essential when you scale inference to multiple clients. This is achieved by allowing multiple input streams to use the same model in parallel on a single device.
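
    At the PyTorch level the idea can be sketched with CUDA streams, assuming the converted module launches its kernels on the current stream (torch2trt's TRTModule does). model_trt is the converted module from the example below; in a real deployment each stream would usually get its own execution context.

    import torch

    x1 = torch.randn(1, 3, 224, 224).cuda()
    x2 = torch.randn(1, 3, 224, 224).cuda()
    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()

    # enqueue two independent requests on separate CUDA streams
    with torch.cuda.stream(s1):
        y1 = model_trt(x1)
    with torch.cuda.stream(s2):
        y2 = model_trt(x2)
    torch.cuda.synchronize()  # wait for both streams to finish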


    Code:

    You can convert a model with TRTorch, torch2trt, or TF-TRT.
    [Figure omitted: TRTorch / torch2trt comparison]

    PyTorch example (with torch2trt):

    import torch
    from torch2trt import torch2trt
    from torchvision.models.alexnet import alexnet
    
    # create some regular pytorch model...
    model = alexnet(pretrained=True).eval().cuda()
    
    # create example data
    x = torch.ones((1, 3, 224, 224)).cuda()
    
    # convert to TensorRT feeding sample data as input
    model_trt = torch2trt(model, [x])
    
    y = model(x)
    y_trt = model_trt(x)
    
    # check the output against PyTorch
    print(torch.max(torch.abs(y - y_trt)))
    

    Saving and loading the converted model:

    # the serialized TensorRT engine is stored inside the state dict
    torch.save(model_trt.state_dict(), 'alexnet_trt.pth')
    
    # load it back into an empty TRTModule
    from torch2trt import TRTModule
    model_trt = TRTModule()
    model_trt.load_state_dict(torch.load('alexnet_trt.pth'))
    

    Recommended video:
    Webinar: Deploying Models with TensorRT
    https://www.youtube.com/watch?v=67ev-6Xn30U

  • Original article: https://www.cnblogs.com/qiulinzhang/p/14318525.html