• Model Deployment: ONNX and ONNX Runtime


    We usually train models with many different frameworks: some people prefer PyTorch, some prefer TensorFlow, others use MXNet, and there is also Caffe, which was popular in the early days of deep learning. Each framework produces its own kind of model artifact, so deploying a model for inference requires framework-specific dependencies, and even different versions of the same framework (TensorFlow, for example) can differ substantially. To resolve this fragmentation, the LF AI foundation, together with Facebook, Microsoft, and other companies, defined a standard for machine learning models called ONNX (Open Neural Network Exchange). Model packages produced by other frameworks (.pth, .pb) can all be converted into this standard format, after which they can be deployed uniformly with tools such as ONNX Runtime.

    This can in fact be compared to the JVM:
    Java virtual machine (JVM) is a virtual machine that enables a computer to run Java programs as well as programs written in other languages that are also compiled to Java bytecode. The JVM is detailed by a specification that formally describes what is required in a JVM implementation. Having a specification ensures interoperability of Java programs across different implementations so that program authors using the Java Development Kit (JDK) need not worry about idiosyncrasies of the underlying hardware platform.

    In the Java world there is the Java language + .jar packages + the JVM, and other languages such as Scala are also built on the JVM. As long as each language ultimately compiles programs into a format the JVM can recognize, they all run on the same cross-platform Java virtual machine. The packages the JVM consumes are binary, so their contents are opaque and hard for humans to read directly.

    The ONNX standard instead adopts Protocol Buffers, developed by Google, as its format. Protocol Buffers evolved as a more compact successor to XML and JSON, and its schema and text form are readable by humans. The ONNX website introduces ONNX as follows:
    ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers.
    The model sources that ONNX supports cover essentially every framework we use day to day:
    (Figure: model sources supported by ONNX)

    The ONNX file format is based on Google's Protocol Buffers, the same format Caffe uses.
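
    As a quick way to see that an .onnx file is just a Protocol Buffers message, the sketch below loads one (assuming the resnet18.onnx file exported later in this post) and reads a few top-level fields of the ModelProto message:

    import onnx

    # An .onnx file deserializes into a ModelProto protobuf message
    model = onnx.load("resnet18.onnx")
    print(model.ir_version)                # IR version of the file
    print(model.opset_import[0].version)   # operator-set version the model targets
    print(len(model.graph.node))           # number of nodes in the computation graph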

    ONNX defines all of the data types we commonly use; they are used to describe the input and output formats of a model.
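
    For example, here is a minimal sketch of declaring a model input with one of these data types, using onnx.helper and the TensorProto type enum (the name "input" and the 1x3x224x224 shape are illustrative choices, not requirements):

    from onnx import helper, TensorProto

    # Describe a float32 tensor of shape 1x3x224x224 named "input"
    input_info = helper.make_tensor_value_info(
        "input", TensorProto.FLOAT, [1, 3, 224, 224])
    print(input_info)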

    ONNX also defines many of the nodes (operators) we commonly use, such as Conv, Relu, BatchNormalization, MaxPool, and so on (around 124 kinds, and the set keeps growing). When a node we need is missing from the built-in operator set, we can also write one ourselves.
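
    To check which operators your installed onnx package actually defines, you can query the operator schemas; a small sketch using onnx.defs (the exact count varies with the onnx version):

    import onnx.defs

    # Each schema describes one operator: its name, inputs, outputs, attributes
    schemas = onnx.defs.get_all_schemas()
    print(len(schemas))                          # operator count in this onnx build
    print(sorted(s.name for s in schemas)[:10])  # a few operator names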

    • With inputs, outputs, and compute nodes in hand, the forward pass of a PyTorch model can be traced to record a computation graph from the input image to the output. ONNX simply stores this graph in a standard format, and it can be visualized with a tool called Netron, as shown on the right side of the first figure; a minimal hand-built graph follows this list as an illustration.
    • Once the model is saved in the unified ONNX format, a single runtime platform can be used for inference.
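
    To make the "graph stored in a standard format" idea concrete, here is a minimal sketch that assembles a one-node ONNX model by hand with onnx.helper (the tensor names x and y and the 1x4 shape are illustrative):

    from onnx import helper, checker, TensorProto

    # One Relu node: y = Relu(x)
    node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
    graph = helper.make_graph(
        [node], "tiny-graph",
        inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
        outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])])
    model = helper.make_model(graph)
    checker.check_model(model)   # verify the hand-built model is well formed
    print(helper.printable_graph(model.graph))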

    PyTorch natively supports exporting to the ONNX format. An example follows:

    1. Convert the PyTorch model to ONNX format with a single foolproof call: torch.onnx.export(model, input, output_name)

    import torch
    from torchvision import models

    # Load a pretrained ResNet-18 from torchvision
    net = models.resnet.resnet18(pretrained=True)
    # A dummy input is needed so the export can trace the forward pass
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(net, dummy_input, 'resnet18.onnx')
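
    torch.onnx.export also takes optional arguments that are useful in practice. A sketch that gives the graph readable tensor names and a dynamic batch dimension (the names 'input' and 'output' are chosen here, not required by the API):

    # Optional: name the graph inputs/outputs and mark the batch dim as dynamic
    torch.onnx.export(
        net, dummy_input, 'resnet18.onnx',
        input_names=['input'], output_names=['output'],
        dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}})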
    

    2. Inspect the generated ONNX model

    import onnx
    
    # Load the ONNX model
    model = onnx.load("resnet18.onnx")
    
    # Check that the IR is well formed
    onnx.checker.check_model(model)
    
    # Print a human readable representation of the graph
    print(onnx.helper.printable_graph(model.graph))
    

    Output:
    You can see three sections in the printed graph: the inputs, the initializers (the weights), and the operators, which combine the inputs and the initializers in something like a forward pass. The last operator returns %191, which is the output of the final fully connected layer (the Gemm node).

    graph torch-jit-export (
      %input.1[FLOAT, 1x3x224x224]
    ) initializers (
      %193[FLOAT, 64x3x7x7]
      %194[FLOAT, 64]
      %196[FLOAT, 64x64x3x3]
      %197[FLOAT, 64]
      %199[FLOAT, 64x64x3x3]
      %200[FLOAT, 64]
      %202[FLOAT, 64x64x3x3]
      %203[FLOAT, 64]
      %205[FLOAT, 64x64x3x3]
      %206[FLOAT, 64]
      %208[FLOAT, 128x64x3x3]
      %209[FLOAT, 128]
      %211[FLOAT, 128x128x3x3]
      %212[FLOAT, 128]
      %214[FLOAT, 128x64x1x1]
      %215[FLOAT, 128]
      %217[FLOAT, 128x128x3x3]
      %218[FLOAT, 128]
      %220[FLOAT, 128x128x3x3]
      %221[FLOAT, 128]
      %223[FLOAT, 256x128x3x3]
      %224[FLOAT, 256]
      %226[FLOAT, 256x256x3x3]
      %227[FLOAT, 256]
      %229[FLOAT, 256x128x1x1]
      %230[FLOAT, 256]
      %232[FLOAT, 256x256x3x3]
      %233[FLOAT, 256]
      %235[FLOAT, 256x256x3x3]
      %236[FLOAT, 256]
      %238[FLOAT, 512x256x3x3]
      %239[FLOAT, 512]
      %241[FLOAT, 512x512x3x3]
      %242[FLOAT, 512]
      %244[FLOAT, 512x256x1x1]
      %245[FLOAT, 512]
      %247[FLOAT, 512x512x3x3]
      %248[FLOAT, 512]
      %250[FLOAT, 512x512x3x3]
      %251[FLOAT, 512]
      %fc.bias[FLOAT, 1000]
      %fc.weight[FLOAT, 1000x512]
    ) {
      %192 = Conv[dilations = [1, 1], group = 1, kernel_shape = [7, 7], pads = [3, 3, 3, 3], strides = [2, 2]](%input.1, %193, %194)
      %125 = Relu(%192)
      %126 = MaxPool[kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%125)
      %195 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%126, %196, %197)
      %129 = Relu(%195)
      %198 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%129, %199, %200)
      %132 = Add(%198, %126)
      %133 = Relu(%132)
      %201 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%133, %202, %203)
      %136 = Relu(%201)
      %204 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%136, %205, %206)
      %139 = Add(%204, %133)
      %140 = Relu(%139)
      %207 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%140, %208, %209)
      %143 = Relu(%207)
      %210 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%143, %211, %212)
      %213 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%140, %214, %215)
      %148 = Add(%210, %213)
      %149 = Relu(%148)
      %216 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%149, %217, %218)
      %152 = Relu(%216)
      %219 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%152, %220, %221)
      %155 = Add(%219, %149)
      %156 = Relu(%155)
      %222 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%156, %223, %224)
      %159 = Relu(%222)
      %225 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%159, %226, %227)
      %228 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%156, %229, %230)
      %164 = Add(%225, %228)
      %165 = Relu(%164)
      %231 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%165, %232, %233)
      %168 = Relu(%231)
      %234 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%168, %235, %236)
      %171 = Add(%234, %165)
      %172 = Relu(%171)
      %237 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%172, %238, %239)
      %175 = Relu(%237)
      %240 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%175, %241, %242)
      %243 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%172, %244, %245)
      %180 = Add(%240, %243)
      %181 = Relu(%180)
      %246 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%181, %247, %248)
      %184 = Relu(%246)
      %249 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%184, %250, %251)
      %187 = Add(%249, %181)
      %188 = Relu(%187)
      %189 = GlobalAveragePool(%188)
      %190 = Flatten[axis = 1](%189)
      %191 = Gemm[alpha = 1, beta = 1, transB = 1](%190, %fc.weight, %fc.bias)
      return %191
    }
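
    Besides the checker, the onnx package can also infer the shapes of intermediate tensors, which is handy when debugging a converted model. A small sketch, reusing the model loaded in step 2 (the [0] index just picks the first annotated tensor):

    from onnx import shape_inference

    # Returns a copy of the model with value_info filled in for intermediate tensors
    inferred = shape_inference.infer_shapes(model)
    print(inferred.graph.value_info[0])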
    

    3. Visualize the generated ONNX model:

    Two visualization tools support ONNX: Netron, and VisualDL developed by Baidu PaddlePaddle.
    Here we introduce how to get Netron: https://github.com/lutzroeder/Netron. Mac users can simply install the app, open it, and pick the path of the .onnx file to display the graph.
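
    Netron can also be installed as a Python package and launched from a script; a small sketch, assuming the pip package's netron.start entry point:

    # pip install netron
    import netron

    # Serves the visualization locally and opens it in the browser
    netron.start('resnet18.onnx')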

    (Figure: Netron visualization of resnet18.onnx)

    4. ONNX Runtime

    A runtime that supports ONNX is analogous to the JVM: it takes the unified ONNX model package and runs it, which covers parsing the ONNX model, optimizing the graph (for example, fusing Conv-BN patterns), and executing it.
    (Figure: runtimes that support the ONNX format)
    Here we introduce ONNX Runtime, developed by Microsoft.

    4.1 Installing ONNX Runtime

    https://github.com/microsoft/onnxruntime
    For CPU-only inference on macOS, you can use:

    brew install libomp
    pip install onnxruntime
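
    The graph optimizations mentioned above (such as Conv-BN fusion) can be controlled through session options. A hedged sketch that turns on all graph-level optimizations (whether a given fusion actually fires depends on the model and the build):

    import onnxruntime as rt

    opts = rt.SessionOptions()
    # Enable all graph-level optimizations, including operator fusions
    opts.graph_optimization_level = rt.GraphOptimizationLevel.ORT_ENABLE_ALL
    sess = rt.InferenceSession('resnet18.onnx', opts)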
    

    Inference

    import onnxruntime as rt
    import numpy as np

    # Random input in the shape the model expects (NCHW)
    data = np.random.randn(1, 3, 224, 224)
    sess = rt.InferenceSession('resnet18.onnx')
    input_name = sess.get_inputs()[0].name
    label_name = sess.get_outputs()[0].name

    # Run the session; ONNX Runtime expects float32 inputs here
    pred_onx = sess.run([label_name], {input_name: data.astype(np.float32)})[0]
    print(pred_onx)
    print(np.argmax(pred_onx))
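
    As a sanity check, one can compare ONNX Runtime's output against the original PyTorch model on the same input; a minimal sketch reusing net and data from above (the tolerances are a judgment call):

    import torch

    # Compare ONNX Runtime output with the original PyTorch model
    net.eval()
    with torch.no_grad():
        pred_torch = net(torch.from_numpy(data.astype(np.float32))).numpy()
    np.testing.assert_allclose(pred_torch, pred_onx, rtol=1e-3, atol=1e-5)
    print("PyTorch and ONNX Runtime outputs match")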
    

    As you can see, inference this way no longer requires PyTorch or any of the other assorted framework dependencies, which makes deployment much easier.

    Two accessible video talks are recommended:
    Everything You Want to Know About ONNX
    Microsoft ONNX and ONNX Runtime

  • Source: https://www.cnblogs.com/qiulinzhang/p/14317346.html