• MMDeploy Installation Notes


    MMDeploy TensorRT Tutorial

    Step1: Create a Virtual Environment and Install MMDetection

    conda create -n openmmlab python=3.7 -y
    conda activate openmmlab
    
    conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y
    
    # install mmcv
    pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html
    
    # install mmdetection
    git clone https://github.com/open-mmlab/mmdetection.git
    cd mmdetection
    pip install -r requirements/build.txt
    pip install -v -e .
    

    Step2: Download a Pretrained MMDetection Checkpoint

    Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.
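
    For example, a minimal sketch using wget, assuming the checkpoint URL follows the usual OpenMMLab model-zoo pattern (double-check it against the MMDetection model zoo page):

    cd ${MMDET_DIR}
    mkdir -p checkpoints
    wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth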

    Step3: Download and Install MMDeploy

    • Run the following commands in the Anaconda environment to install MMDeploy
    conda activate openmmlab
    
    git clone https://github.com/open-mmlab/mmdeploy.git
    cd mmdeploy
    git submodule update --init --recursive
    pip install -e .  # install MMDeploy
    

    Step4: Install TensorRT

    1. Install TensorRT from the tar file.

    2. After installation, add the TensorRT environment variables to ~/.bashrc:

    cd /the/path/of/tensorrt/tar/gz/file
    tar -zxvf TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz
    
    # add the following lines to ~/.bashrc
    export TENSORRT_DIR=$(pwd)/TensorRT-8.2.3.0
    export LD_LIBRARY_PATH=$TENSORRT_DIR/lib:$LD_LIBRARY_PATH
    
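    The model converter also needs the TensorRT Python package. A minimal sketch, assuming you install the wheel shipped inside the tarball (pick the wheel matching your Python version; cp37 here for the conda environment above):

    cd ${TENSORRT_DIR}
    pip install python/tensorrt-8.2.3.0-cp37-none-linux_x86_64.whl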

    Step5: Install cuDNN

    1. Install cuDNN 8.2 from the tar file.

    2. Extract the compressed file and set the environment variables:

    cd /the/path/of/cudnn/tgz/file
    tar -zxvf cudnn-11.3-linux-x64-v8.2.1.32.tgz
    
    # add the following lines to ~/.bashrc
    export CUDNN_DIR=$(pwd)/cuda
    export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH
    

    Step6: Build Model Converter

    Step6-1: Build Custom Ops

    • TensorRT Custom Ops
    cd ${MMDEPLOY_DIR}
    mkdir -p build && cd build
    
    cmake -DCMAKE_CXX_COMPILER=g++-7 \
    	  -DMMDEPLOY_TARGET_BACKENDS=trt \
    	  -DTENSORRT_DIR=${TENSORRT_DIR} \
    	  -DCUDNN_DIR=${CUDNN_DIR} ..
    	  
    make -j$(nproc)
    

    Step6-2: install Model Converter

    cd ${MMDEPLOY_DIR}
    pip install -e .
    

    Step6-3: Verify that the model can be converted

    python ${MMDEPLOY_DIR}/tools/check_env.py
    
    # if everything is set up correctly, the output should include:
    # 2022-05-04 10:13:07,140 - mmdeploy - INFO - tensorrt: 8.2.3.0	ops_is_avaliable : True
    

    Step6-4: Convert Model

    • Once you have installed MMDeploy, you can convert the PyTorch model in the OpenMMLab model zoo to the backend model with one magic spell!
    # Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}
    # If you do not know where to find the path, just type `pip show mmdeploy` and `pip show mmdet` in your console.
    
    python ${MMDEPLOY_DIR}/tools/deploy.py \
    	   ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    	   ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
    	   ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    	   ${MMDET_DIR}/demo/demo.jpg \
    	   --work-dir work_dirs \  # directory where the converted model is saved
    	   --device cuda:0 \  # change cuda:0 to cuda???
    	   --show \  # show two images: inference with the backend and inference with the original PyTorch model
    	   --dump-info  # dump extra info that can be used by the SDK
    	   
    

    At the same time, an ONNX model file end2end.onnx, a TensorRT engine end2end.engine, and the SDK config files deploy.json, detail.json and pipeline.json will be generated in the work directory work_dirs.
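
    A quick way to confirm this (the filenames are the ones listed above):

    ls work_dirs
    # end2end.onnx  end2end.engine  deploy.json  detail.json  pipeline.json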

    Step6-5: Inference Model

    • Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
    from mmdeploy.apis import inference_model
    
    deploy_cfg = "/home/zranguai/Deploy/MMDeploy/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py"
    model_cfg = "/home/zranguai/Deploy/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
    backend_files = ["/home/zranguai/Deploy/MMDeploy/work_dirs/end2end.engine"]
    img = "/home/zranguai/Deploy/mmdetection/demo/demo.jpg" 
    device = 'cuda:0'
    
    result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
    print(result)
    

    Step6-6: Evaluate Model

    • You might wonder whether the backend model has the same precision as the original one, and how fast it can run. MMDeploy provides tools to test the model.
    python ${MMDEPLOY_DIR}/tools/test.py \
    	   ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    	   ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
    	   --model /home/zranguai/Deploy/MMDeploy/work_dirs/end2end.engine \
    	   --metrics "bbox" \
    	   --device cuda:0 
    

    Step7: Build SDK

    Step7-1: build MMDeploy SDK for TensorRT

    Note: RTX 30-series GPUs need ppl.cv upgraded to the latest version. See the related issue.

    cd ${MMDEPLOY_DIR}
    mkdir -p build && cd build
    
    cmake -DMMDEPLOY_BUILD_SDK=ON \
    	  -DCMAKE_CXX_COMPILER=g++-7 \
    	  -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
    	  -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
    	  -DMMDEPLOY_TARGET_BACKENDS=trt \
    	  -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \  # ppl.cv (latest version), a high-performance image processing library from OpenPPL. ref: https://mmdeploy.readthedocs.io/en/latest/build/linux.html#install-dependencies-for-sdk
    	  -DTENSORRT_DIR=${TENSORRT_DIR} \
    	  -DCUDNN_DIR=${CUDNN_DIR} \
    	  -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
    	  -Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
    	  -DMMDEPLOY_CODEBASES=mmdet ..
    	  
    make -j$(nproc) && make install
    

    Step7-2: build demo

    cd ${MMDEPLOY_DIR}/build/install/example
    mkdir -p build && cd build
    
    cmake -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
    	  -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
    make object_detection
    
    # suppress verbose logs
    export SPDLOG_LEVEL=warn
    
    # running the object detection example
    ./object_detection cuda ${work_dirs} ${path/to/an/image}
    # example: ./object_detection cuda ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg
    
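    Since the SDK was built with -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON, the converted model in work_dirs can also be called from Python. A minimal sketch using the mmdeploy_python Detector; the constructor arguments have changed between MMDeploy versions, so treat this as an illustration rather than the exact API:

    import cv2
    from mmdeploy_python import Detector

    # work_dirs must contain the SDK files produced by --dump-info
    detector = Detector('/home/zranguai/Deploy/MMDeploy/work_dirs', 'cuda', 0)
    img = cv2.imread('/home/zranguai/Deploy/mmdetection/demo/demo.jpg')
    bboxes, labels, masks = detector(img)  # bboxes are (x1, y1, x2, y2, score) rows
    print(bboxes[:3], labels[:3])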

    Debugging the code in CLion

    Set the following in Settings:
    CMake options:
    -DMMDeploy_DIR=/home/zranguai/Deploy/MMDeploy/build/install/lib/cmake/MMDeploy -DTENSORRT_DIR=/home/zranguai/Deploy/Backend/TensorRT/TensorRT-8.2.3.0 -DCUDNN_DIR=/home/zranguai/Deploy/Backend/TensorRT/cuda
    
    Build directory:
    /home/zranguai/Deploy/MMDeploy/build/install
    
    Build options:
    object_detection
    
    configuration:
    cuda /home/zranguai/Deploy/MMDeploy/work_dirs /home/zranguai/Deploy/MMDeploy/demo/demo.jpg
    

    ++++++++++++++++++++ divider ++++++++++++++++++++

    MMDeploy ONNX Runtime Tutorial

    • Reference: the official tutorial

    Here is an example of how to deploy and run inference on an MMDetection Faster R-CNN model from scratch.

    step1: Create a Virtual Environment and Install MMDetection

    Create Virtual Environment and Install MMDetection.

    Please run the following commands in the Anaconda environment to install MMDetection.

    conda create -n openmmlab python=3.7 -y
    conda activate openmmlab
    
    conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y
    
    # install mmcv
    pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html
    
    # install mmdetection
    git clone https://github.com/open-mmlab/mmdetection.git
    cd mmdetection
    pip install -r requirements/build.txt
    pip install -v -e .
    

    step2: Download a Pretrained MMDetection Checkpoint

    Download the Checkpoint of Faster R-CNN

    Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.

    step3: Install MMDeploy and ONNX Runtime

    Install MMDeploy and ONNX Runtime

    step3-1: Install MMDeploy

    Please run the following commands in the Anaconda environment to install MMDeploy.

    conda activate openmmlab
    
    git clone https://github.com/open-mmlab/mmdeploy.git
    cd mmdeploy
    git submodule update --init --recursive
    pip install -e .  # install MMDeploy
    
    step3-2a: Download ONNX Runtime

    Once we have installed MMDeploy, we need to select an inference engine for model inference. Here we take ONNX Runtime as an example. Run the following command to install ONNX Runtime:

    pip install onnxruntime==1.8.1
    
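    A quick sanity check that the Python package is importable (onnxruntime exposes __version__ and get_device()):

    python -c "import onnxruntime; print(onnxruntime.__version__, onnxruntime.get_device())"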

    Then download the ONNX Runtime library to build the mmdeploy plugin for ONNX Runtime:

    step3-2b: Build the ONNX Runtime custom ops (needed for model conversion)
    wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
    
    tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
    cd onnxruntime-linux-x64-1.8.1
    export ONNXRUNTIME_DIR=$(pwd)
    export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH  # these two exports can also be added to ~/.bashrc
    
    
    cd ${MMDEPLOY_DIR} # To MMDeploy root directory
    mkdir -p build && cd build
    
    # build ONNXRuntime custom ops
    cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
    make -j$(nproc)
    
    step3-2c: Build the MMDeploy SDK (needed when using the C API)
    # build MMDeploy SDK
    cmake -DMMDEPLOY_BUILD_SDK=ON \
          -DCMAKE_CXX_COMPILER=g++-7 \
          -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \  # for installing OpenCV see https://mmdeploy.readthedocs.io/en/latest/build/linux.html#install-dependencies-for-sdk
          -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \  # for installing spdlog see https://mmdeploy.readthedocs.io/en/latest/build/linux.html#install-dependencies-for-sdk
          -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
          -DMMDEPLOY_TARGET_BACKENDS=ort \
          -DMMDEPLOY_CODEBASES=mmdet ..
    make -j$(nproc) && make install
    
    # a concrete example of building the MMDeploy SDK
    cmake -DMMDEPLOY_BUILD_SDK=ON \
    -DCMAKE_CXX_COMPILER=g++-7 \
    -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \  # OpenCV installed via apt-get
    -Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
    -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
    -DMMDEPLOY_TARGET_BACKENDS=ort \
    -DMMDEPLOY_CODEBASES=mmdet ..
    
    # ${MMDEPLOY_DIR}, ${MMDET_DIR} and ${ONNXRUNTIME_DIR} can all be defined in ~/.bashrc; run `source ~/.bashrc` to apply them
    
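    For example, the ~/.bashrc entries could look like this (the paths are the ones used elsewhere in these notes; adjust them to your machine):

    export MMDEPLOY_DIR=/home/zranguai/Deploy/MMDeploy
    export MMDET_DIR=/home/zranguai/Deploy/mmdetection
    export ONNXRUNTIME_DIR=/home/zranguai/Deploy/Backend/ONNXRuntime/onnxruntime-linux-x64-1.8.1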
    Supplement: verify that the backend and custom ops are installed successfully
    python ${MMDEPLOY_DIR}/tools/check_env.py
    

    step4: Model Conversion

    Once we have installed MMDetection, MMDeploy and ONNX Runtime, and built the custom ops for ONNX Runtime, we can convert Faster R-CNN to a .onnx model file that ONNX Runtime can load. Run the following command to use the deploy tool:

    # Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}
    # If you do not know where to find the path, just type `pip show mmdeploy` and `pip show mmdet` in your console.
    
    python ${MMDEPLOY_DIR}/tools/deploy.py \
        ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
        ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
        ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
        ${MMDET_DIR}/demo/demo.jpg \
        --work-dir work_dirs \  # directory where the converted model is saved
        --device cpu \
        --show \  # show two images: inference with the backend and inference with the original PyTorch model
        --dump-info  # dump extra info that can be used by the SDK
        
    # Supplement
    # ${MMDEPLOY_DIR} and ${MMDET_DIR} are already defined in ~/.bashrc
    # Once converted, the model can be run for inference through the Python API
    For example: Inference Model (a complete sketch follows this block)
    Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
    
    from mmdeploy.apis import inference_model
    
    result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
    
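    A complete, runnable version of the snippet above, filling in the configs and the converted end2end.onnx (the absolute paths mirror the ones used elsewhere in these notes and are only illustrative):

    from mmdeploy.apis import inference_model

    deploy_cfg = "/home/zranguai/Deploy/MMDeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py"
    model_cfg = "/home/zranguai/Deploy/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
    backend_files = ["/home/zranguai/Deploy/MMDeploy/work_dirs/end2end.onnx"]
    img = "/home/zranguai/Deploy/mmdetection/demo/demo.jpg"
    device = 'cpu'

    result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
    print(result)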

    If the script runs successfully, two images will be displayed on the screen one by one. The first image is the inference result of ONNX Runtime and the second is the result of PyTorch. At the same time, an ONNX model file end2end.onnx and three JSON files (SDK config files) will be generated in the work directory work_dirs.

    step5: Run MMDeploy SDK demo

    After model conversion, the SDK model is saved in the directory ${work_dir}.
    Here is a recipe for building and running the object detection demo.

    cd build/install/example
    
    # path to onnxruntime ** libraries **
    export LD_LIBRARY_PATH=/path/to/onnxruntime/lib
    # example: export LD_LIBRARY_PATH=/home/zranguai/Deploy/Backend/ONNXRuntime/onnxruntime-linux-x64-1.8.1/lib
    
    mkdir -p build && cd build
    cmake -DOpenCV_DIR=path/to/OpenCV/lib/cmake/OpenCV \
          -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
    make object_detection
    
    # example:
    # cmake -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
    #       -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
    
    
    # suppress verbose logs
    export SPDLOG_LEVEL=warn
    
    # running the object detection example
    ./object_detection cpu ${work_dirs} ${path/to/an/image}
    # example: ./object_detection cpu ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg
    

    If the demo runs successfully, an image named "output_detection.png" showing the detected objects should be generated.

    ++++++++++++++++++++ divider ++++++++++++++++++++

    MMDeploy OpenVINO Tutorial

    Step1: Create a Virtual Environment and Install MMDetection

    conda create -n openmmlab python=3.7 -y
    conda activate openmmlab
    
    conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y
    
    # install mmcv
    pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html
    
    # install mmdetection
    git clone https://github.com/open-mmlab/mmdetection.git
    cd mmdetection
    pip install -r requirements/build.txt
    pip install -v -e .
    

    Step2: Download a Pretrained MMDetection Checkpoint

    Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.

    Step3: Download and Install MMDeploy

    • Run the following commands in the Anaconda environment to install MMDeploy
    conda activate openmmlab
    
    git clone https://github.com/open-mmlab/mmdeploy.git
    cd mmdeploy
    git submodule update --init --recursive
    pip install -e .  # install MMDeploy
    

    Step4: Install OpenVINO

    pip install openvino-dev
    

    step4-1: Per the official docs, OpenVINO does not require custom ops here

    step4-2: Optional: install the OpenVINO toolkit for use with the MMDeploy SDK

    • Optional. If you want to use OpenVINO in the MMDeploy SDK, please install and configure it by following the guide.

    • Reference: the installation tutorial

    1. Install OpenVINO
    tar -xvzf l_openvino_toolkit_p_2020.4.287.tgz
    cd l_openvino_toolkit_p_2020.4.287
    sudo ./install_GUI.sh  # click Next through the installer
    cd /opt/intel/openvino/install_dependencies
    sudo ./install_openvino_dependencies.sh
    vi ~/.bashrc
    
    2. Add the following lines to the end of ~/.bashrc
    # set env for openvino
    source /opt/intel/openvino_2021/bin/setupvars.sh  # note: adjust this to your own installation path
    export INTEL_OPENVINO_DIR=/opt/intel/openvino_2021
    export LD_LIBRARY_PATH=/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64
    
    3. Run source ~/.bashrc to apply the environment
    4. Model Optimizer configuration steps
    cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
    
    sudo ./install_prerequisites.sh  # you can install only the ONNX prerequisites
    

    step4-3: Build the MMDeploy SDK (OpenVINO)

    cd ${MMDEPLOY_DIR} # To MMDeploy root directory
    mkdir -p build && cd build
    
    cmake -DMMDEPLOY_BUILD_SDK=ON \
          -DCMAKE_CXX_COMPILER=g++-7 \
          -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
          -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
          -DInferenceEngine_DIR=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/share \
          -DMMDEPLOY_TARGET_BACKENDS=openvino \
          -DMMDEPLOY_CODEBASES=mmdet ..
    make -j$(nproc) && make install
    
    # a concrete example of building the MMDeploy SDK
    cmake -DMMDEPLOY_BUILD_SDK=ON \
          -DCMAKE_CXX_COMPILER=g++-7 \
          -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \  # OpenCV installed via apt-get
          -Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
          -DInferenceEngine_DIR=${INTEL_OPENVINO_DIR}/deployment_tools/inference_engine/share \  
          -DMMDEPLOY_TARGET_BACKENDS=openvino \
          -DMMDEPLOY_CODEBASES=mmdet ..
          
    # ${INTEL_OPENVINO_DIR} is defined in ~/.bashrc
    
    Supplement: verify that the backend is installed successfully (note that OpenVINO does not need custom ops)
    python ${MMDEPLOY_DIR}/tools/check_env.py
    
    # When export LD_LIBRARY_PATH=/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64 is written into ~/.bashrc, the error libopencv_ml.so.4.5: cannot open shared object file: No such file or directory can appear ?? (possibly because this overwrites LD_LIBRARY_PATH instead of appending :$LD_LIBRARY_PATH)
    

    Step5: Model Conversion (this step can also be done before step4-1)

    # Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}
    # If you do not know where to find the path, just type `pip show mmdeploy` and `pip show mmdet` in your console.
    
    python ${MMDEPLOY_DIR}/tools/deploy.py \
        ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_openvino_dynamic-300x300.py \
        ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
        ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
        ${MMDET_DIR}/demo/demo.jpg \
        --work-dir work_dirs \  # directory where the converted model is saved
        --device cpu \
        --show \  # show two images: inference with the backend and inference with the original PyTorch model
        --dump-info  # dump extra info that can be used by the SDK
      
    # Supplement
    # ${MMDEPLOY_DIR} and ${MMDET_DIR} are already defined in ~/.bashrc
    # Once converted, the model can be run for inference through the Python API
    For example: Inference Model
    Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
    
    from mmdeploy.apis import inference_model
    
    result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
    
    Example:
    deploy_cfg = "/home/zranguai/Deploy/MMDeploy/configs/mmdet/detection/detection_openvino_dynamic-300x300.py"
    model_cfg = "/home/zranguai/Deploy/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
    backend_files = [ "/home/zranguai/Deploy/MMDeploy/work_dirs/end2end.xml",]
    img = "/home/zranguai/Deploy/mmdetection/demo/demo.jpg"
    device = 'cpu'
    
    from mmdeploy.apis import visualize_model
    visualize_model(model_cfg, deploy_cfg, backend_files[0], img, device, show_result=True)
    

    If the script runs successfully, two images will be displayed on the screen one by one. The first image is the inference result of OpenVINO and the second is the result of PyTorch. At the same time, an ONNX model file end2end.onnx, end2end.bin (the binary weights and biases), end2end.xml (the network topology description), end2end.mapping, and the SDK config files deploy.json, detail.json and pipeline.json will be generated in the work directory work_dirs.

    Step6: Run MMDeploy SDK demo (for OpenVINO)

    After model conversion, the SDK model is saved in the directory ${work_dir}.
    Here is a recipe for building and running the object detection demo.

    cd build/install/example
    
    # path to openvino ** libraries **
    export LD_LIBRARY_PATH=/path/to/openvino/deployment_tools/inference_engine/lib/intel64
    # example: export LD_LIBRARY_PATH=/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64
    
    mkdir -p build && cd build
    cmake -DOpenCV_DIR=path/to/OpenCV/lib/cmake/OpenCV \   # path to the OpenCV bundled with OpenVINO
          -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
    make object_detection
    
    
    
    # example:
    # cmake -DOpenCV_DIR=/opt/intel/openvino_2021/opencv/cmake \
    #       -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
    
    # suppress verbose logs
    export SPDLOG_LEVEL=warn
    
    # running the object detection example
    ./object_detection cpu ${work_dirs} ${path/to/an/image}
    # example: ./object_detection cpu ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg
    # possible error: the exported xml has name: torch-jit-export version="11"; solution: reinstalling fixed it
    

    If the demo runs successfully, an image named "output_detection.png" showing the detected objects should be generated.
