tensorflow study notes 2: statically linking the tensorflow library into a c++ program to load a model file


    First, I needed the TensorFlow C++ library. A search turned up no ready-made package, so I downloaded the TensorFlow source and set about building it myself.

    TensorFlow's contrib directory contains a makefile project that greatly simplifies the work that follows.

    Following the documentation for the TensorFlow makefile project, I started building the C++ library:

    1. Download the dependencies

    From the top level of the TensorFlow source tree, run:

    tensorflow/contrib/makefile/download_dependencies.sh
    

    Everything is downloaded into the tensorflow/contrib/makefile/downloads/ directory.

    2. Build on Linux

    First make sure the build tools are installed:

    sudo apt-get install autoconf automake libtool curl make g++ unzip zlib1g-dev git python
    

    Then run the build script.

    Note: open the script and read it before running it. Surprisingly, its first step empties the tensorflow/contrib/makefile/downloads/ directory and downloads everything again... comment that part out.

    tensorflow/contrib/makefile/build_all_linux.sh
    

    The static library then appears at tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a.

    3. Prepare the C++ code that loads the model

    #include "tensorflow/core/public/session.h"
    #include "tensorflow/core/platform/env.h"
    
    using namespace tensorflow;
    
    int main(int argc, char* argv[]) {
      // Initialize a tensorflow session
      Session* session;
      Status status = NewSession(SessionOptions(), &session);
      if (!status.ok()) {
        std::cout << status.ToString() << "\n";
        return 1;
      }
    
      // Read in the protobuf graph we exported
      // (The path seems to be relative to the cwd. Keep this in mind
      // when using `bazel run` since the cwd isn't where you call
      // `bazel run` but from inside a temp folder.)
      GraphDef graph_def;
      status = ReadBinaryProto(Env::Default(), "models/test_graph.pb", &graph_def);
      if (!status.ok()) {
        std::cout << status.ToString() << "\n";
        return 1;
      }
    
      // Add the graph to the session
      status = session->Create(graph_def);
      if (!status.ok()) {
        std::cout << status.ToString() << "\n";
        return 1;
      }
    
      // Setup inputs and outputs:
    
      // Our graph doesn't require any inputs, since it specifies default values,
      // but we'll change an input to demonstrate.
      Tensor a(DT_FLOAT, TensorShape());
      a.scalar<float>()() = 3.0;
    
      Tensor b(DT_FLOAT, TensorShape());
      b.scalar<float>()() = 2.0;
    
      Tensor x(DT_FLOAT, TensorShape());
      x.scalar<float>()() = 10.0;
    
      std::vector<std::pair<string, tensorflow::Tensor>> inputs = {
        { "a", a },
        { "b", b },
        { "x", x },
      };
    
      // The session will initialize the outputs
      std::vector<tensorflow::Tensor> outputs;
    
      // Run the session, evaluating our "y" operation from the graph
      status = session->Run(inputs, {"y"}, {}, &outputs);
      if (!status.ok()) {
        std::cout << status.ToString() << "\n";
        return 1;
      }
    
      // Grab the first output (we only evaluated one graph node: "y")
      // and convert the node to a scalar representation.
      auto output_y = outputs[0].scalar<float>();
    
      // (There are similar methods for vectors and matrices here:
      // https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/public/tensor.h)
    
      // Print the results
      std::cout << outputs[0].DebugString() << "\n"; // Tensor<type: float shape: [] values: 32>
      std::cout << output_y() << "\n"; // 32
    
      // Free any resources used by the session
      session->Close();
      return 0;
    }
    

    Save this as load_graph.cc.
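
    A side note on the code above: the Session is held as a raw pointer and is leaked on the early-return error paths. A minimal variation (just a sketch, not required for the example to work; MakeSession is a made-up helper name) wraps it in std::unique_ptr so it is released on every path:

    #include "tensorflow/core/public/session.h"
    
    #include <memory>
    
    // Create a session and hand ownership to a std::unique_ptr, so the
    // session is destroyed even when a later step returns early on error.
    tensorflow::Status MakeSession(std::unique_ptr<tensorflow::Session>* out) {
      tensorflow::Session* raw = nullptr;
      tensorflow::Status s =
          tensorflow::NewSession(tensorflow::SessionOptions(), &raw);
      if (s.ok()) out->reset(raw);
      return s;
    }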

    Write a Makefile:

    TARGET_NAME := load_graph
    
    TENSORFLOW_MAKEFILE_DIR := /mnt/data/tensorflow/tensorflow/contrib/makefile
    
    INCLUDES := \
    -I /usr/local/lib/python3.6/dist-packages/tensorflow/include
    
    NSYNC_LIB := \
    $(TENSORFLOW_MAKEFILE_DIR)/downloads/nsync/builds/default.linux.c++11/nsync.a
    
    PROTOBUF_LIB := \
    $(TENSORFLOW_MAKEFILE_DIR)/gen/protobuf/lib/libprotobuf.a
    
    TENSORFLOW_CORE_LIB := \
    -Wl,--whole-archive $(TENSORFLOW_MAKEFILE_DIR)/gen/lib/libtensorflow-core.a -Wl,--no-whole-archive
    
    LIBS := \
    $(TENSORFLOW_CORE_LIB) \
    $(NSYNC_LIB) \
    $(PROTOBUF_LIB) \
    -lpthread \
    -ldl
    
    SOURCES := \
    load_graph.cc
    
    $(TARGET_NAME):
    	g++ -std=c++11 $(SOURCES) $(INCLUDES) -o $(TARGET_NAME) $(LIBS)
    
    clean:
    	rm $(TARGET_NAME)
    

    Here tensorflow-core, nsync and protobuf are all statically linked; I may later put a copy of each of these static libraries into a system directory.

    A few points to note:

    1) INCLUDES uses the TensorFlow headers bundled with the Python 3.6 package; since Python already ships the headers, there is no need to copy another set into a system directory.

    2) The nsync library is built per platform, so you may need to look carefully at where its build output actually lands, especially when cross-compiling.

    3) The link order must not be wrong: tensorflow-core has to come before the other two, because the linker resolves symbols from static archives from left to right.

    4) The tensorflow-core library has to be linked in whole, otherwise you get this error: tensorflow/core/common_runtime/session.cc:69] Not found: No session factory registered for the given session options: {target: "" config: } Registered factories are {}.

        Thinking about it, the reason is roughly clear: at the static-code level only the base class is referenced, and the concrete subclass is looked up by name at runtime, so there is no direct symbol dependency on the subclass; without forcing --whole-archive, none of the subclasses get pulled in. A minimal sketch of this pattern is shown below.
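
    The sketch below is self-contained, made-up code (the class and registry names are hypothetical, not TensorFlow's actual ones) illustrating the name-based registration pattern: main() never names the subclass, so when the subclass sits in its own object file inside a static archive, the linker is free to drop it unless --whole-archive forces it in.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    
    // Abstract base class: the only type the "application" refers to directly.
    class SessionFactory {
     public:
      virtual ~SessionFactory() = default;
      virtual std::string Name() const = 0;
    };
    
    // Global name -> constructor registry, consulted at runtime.
    std::map<std::string, std::function<std::unique_ptr<SessionFactory>()>>&
    Registry() {
      static std::map<std::string,
                      std::function<std::unique_ptr<SessionFactory>()>> r;
      return r;
    }
    
    // A static Registrar object registers a factory as a side effect of
    // program startup; this is the only thing that puts the subclass in play.
    struct Registrar {
      Registrar(const std::string& name,
                std::function<std::unique_ptr<SessionFactory>()> fn) {
        Registry()[name] = std::move(fn);
      }
    };
    
    // In the real library the subclass lives in its own object file inside
    // libtensorflow-core.a. Nothing in main() references it by symbol, so
    // without --whole-archive that object file can be dropped and the
    // registry stays empty ("No session factory registered").
    class ConcreteSessionFactory : public SessionFactory {
     public:
      std::string Name() const override { return "DIRECT_SESSION"; }
    };
    
    static Registrar registrar("DIRECT_SESSION", [] {
      return std::unique_ptr<SessionFactory>(new ConcreteSessionFactory());
    });
    
    int main() {
      auto it = Registry().find("DIRECT_SESSION");
      if (it == Registry().end()) {
        std::cout << "No session factory registered\n";
        return 1;
      }
      std::cout << "Found factory: " << it->second()->Name() << "\n";
      return 0;
    }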

    4. Run the program

    Before running, check that the previously exported graph is at the expected location; see the previous post for how to generate the graph.
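
    If you want the program itself to fail early with a clear message when the file is missing, here is a minimal sketch of such a check (assuming the TF 1.x Env API, where FileExists returns a Status; GraphFileExists is a made-up helper), using the same "models/test_graph.pb" path as load_graph.cc:

    #include "tensorflow/core/platform/env.h"
    
    #include <iostream>
    #include <string>
    
    // Returns true if the exported graph is where the program expects it,
    // printing the Status message otherwise.
    bool GraphFileExists(const std::string& path) {
      tensorflow::Status s = tensorflow::Env::Default()->FileExists(path);
      if (!s.ok()) {
        std::cout << "graph not found at " << path << ": " << s.ToString() << "\n";
        return false;
      }
      return true;
    }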

    Run it; there is not much else to say, the result is correct.

    References:

    http://blog.163.com/wujiaxing009@126/blog/static/7198839920174125748893/

    https://blog.csdn.net/xinchen1234/article/details/78750079
