- TensorFlow went in smoothly in the morning; installing Caffe in the afternoon and evening was genuinely a struggle.
- This assumes CUDA and cuDNN are already installed and working.
- Official Caffe installation tutorial
- I was not sure about parts of the installation order myself; it was largely trial and error.
1. Install the dependencies first
General dependencies
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
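As a quick sanity check (my own habit, not part of the official steps), you can confirm that the protobuf compiler and Boost packages from the commands above were actually installed:
protoc --version            # should print something like libprotoc 2.6.1 on Ubuntu 16.04
dpkg -l | grep libboost-all-dev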
MATLAB installation is covered later (see section 3):
Why install MATLAB?
Caffe has a MATLAB interface, so if you want to call Caffe from MATLAB you need MATLAB installed. If you find programming in C++ or Python harder, install MATLAB. If you do not need it, and you will not build Caffe's MATLAB interface later, you can skip MATLAB entirely; it is purely a matter of personal need.
Why install OpenCV?
Caffe is used for deep learning, and one of deep learning's main application areas is images and video. OpenCV is currently the most popular open-source computer vision library, used by a great many projects, and Caffe depends on it by default; so install OpenCV unless you plan to build with USE_OPENCV := 0 later.
The OpenCV version should match the CUDA version as closely as possible; this reduces the chance of errors during the build.
2. OpenCV installation
Download OpenCV from the official site (http://opencv.org/downloads.html) and extract it to wherever you want to install it, say /home/opencv. Before building, create a build directory:
cd ~/opencv
mkdir build
cd build
Configure:
cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/usr/local ..
Compile:
make -j8    # -j8 enables a parallel build; adjust the number to your machine, or on a low-spec machine just run make
The above only compiles OpenCV; it has not been installed yet. To install it, run:
sudo make install
Problem: CUDA 8.0 no longer supports the GraphCut code in OpenCV's cudalegacy module, so an error like the following may appear:
/home/dsp/opencv-3.1.0/modules/core/include/opencv2/core/private.cuda.hpp:165:52: note: in definition of macro 'nppSafeCall'
 #define nppSafeCall(expr) cv::cuda::checkNppError(expr, __FILE__, __LINE__, CV_Func)
                                                    ^
modules/cudalegacy/CMakeFiles/opencv_cudalegacy.dir/build.make:146: recipe for target 'modules/cudalegacy/CMakeFiles/opencv_cudalegacy.dir/src/graphcuts.cpp.o' failed
make[2]: *** [modules/cudalegacy/CMakeFiles/opencv_cudalegacy.dir/src/graphcuts.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:9226: recipe for target 'modules/cudalegacy/CMakeFiles/opencv_cudalegacy.dir/all' failed
make[1]: *** [modules/cudalegacy/CMakeFiles/opencv_cudalegacy.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 92%] Linking CXX shared library ../../lib/libopencv_photo.so
[ 92%] Built target opencv_photo
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2
Go into the opencv-3.1.0/modules/cudalegacy/src/ directory and edit graphcuts.cpp, changing:
#include "precomp.hpp"
#if !defined (HAVE_CUDA) || defined (CUDA_DISABLER)
to
#include "precomp.hpp"
#if !defined (HAVE_CUDA) || defined (CUDA_DISABLER) || (CUDART_VERSION >= 8000)
Then run make again and it compiles.
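If you prefer not to edit the file by hand, the same change can be made with a one-liner like the following (my own shortcut, run from the OpenCV source root; back up graphcuts.cpp first in case the pattern needs adjusting):
sed -i 's/|| defined (CUDA_DISABLER)$/|| defined (CUDA_DISABLER) || (CUDART_VERSION >= 8000)/' modules/cudalegacy/src/graphcuts.cpp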
- Compilation and installation complete.
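To double-check which OpenCV version ended up installed (assuming the default pkg-config file landed under /usr/local), you can run:
pkg-config --modversion opencv    # should report 3.1.0 for this build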
Install BLAS
You can choose among ATLAS, MKL, and OpenBLAS. I was not familiar with MKL and had trouble downloading it, so I did not pursue it, but Makefile.config has the corresponding settings.
For MKL, first download and install the Intel Math Kernel Library for Linux (Intel(R) Parallel Studio XE Cluster Edition for Linux 2016); the download link is https://software.intel.com/en-us/qualify-for-free-software/student. With a student account (email + school) you can download the Student edition directly after filling in the form, and the serial number is sent to you by email.
In the end I simply went with ATLAS:
sudo apt-get install libatlas-base-dev -y
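I stuck with ATLAS, but if you would rather try OpenBLAS instead (I did not test this), a minimal alternative would be:
sudo apt-get install libopenblas-dev
# and then set BLAS := open in Makefile.config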
3. MATLAB 2017a installation
- MATLAB 2017a installation on Ubuntu: http://blog.csdn.net/u011713358/article/details/69659265
- The guide is very detailed; following it step by step leads to a successful installation.
4. Install Caffe
(1) cd in a terminal to the location where you want Caffe installed.
(2) Get Caffe from GitHub:
git clone https://github.com/BVLC/caffe.git
Note: if Git is not installed yet, install it first:
sudo apt-get install git
(3) make reads Makefile.config, while Makefile.config.example is only the template Caffe ships, so first copy Makefile.config.example to Makefile.config:
sudo cp Makefile.config.example Makefile.config
(4) Open and edit the configuration file:
sudo gedit Makefile.config    # open Makefile.config
- I modified it as below; the points that need attention are noted inline.
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
	-gencode arch=compute_20,code=sm_21 \
	-gencode arch=compute_30,code=sm_30 \
	-gencode arch=compute_35,code=sm_35 \
	-gencode arch=compute_50,code=sm_50 \
	-gencode arch=compute_52,code=sm_52 \
	-gencode arch=compute_60,code=sm_60 \
	-gencode arch=compute_61,code=sm_61 \
	-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
MATLAB_DIR := /home/dsp
# MATLAB_DIR := /home/dsp/bin

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
#	/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
ANACONDA_HOME := /home/dsp/anaconda2
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
	$(ANACONDA_HOME)/include/python2.7 \
	$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#	/usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
# PYTHON_LIB := /usr/lib
PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
# LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include /usr/lib/x86_64-linux-gnu/hdf5/serial/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda2/lib
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
	-gencode arch=compute_20,code=sm_21 \
	-gencode arch=compute_30,code=sm_30 \
	-gencode arch=compute_35,code=sm_35 \
	-gencode arch=compute_50,code=sm_50 \
	-gencode arch=compute_52,code=sm_52 \
	-gencode arch=compute_60,code=sm_60 \
	-gencode arch=compute_61,code=sm_61 \
	-gencode arch=compute_61,code=compute_61

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
ANACONDA_HOME := /home/dsp/anaconda2
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
	$(ANACONDA_HOME)/include/python2.7 \
	$(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#	/usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
# LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include /usr/lib/x86_64-linux-gnu/hdf5/serial/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
Problems:
First compilation attempt: errors
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev

AR -o .build_release/lib/libcaffe.a
LD -o .build_release/lib/libcaffe.so.1.0.0
/usr/bin/ld: cannot find -lhdf5_hl
/usr/bin/ld: cannot find -lhdf5
/usr/bin/ld: cannot find -lcudnn
collect2: error: ld returned 1 exit status
Makefile:572: recipe for target '.build_release/lib/libcaffe.so.1.0.0' failed
make: *** [.build_release/lib/libcaffe.so.1.0.0] Error 1
- The hdf5 errors are fixed by editing Makefile.config.
Because the hdf5 library directories have changed, they need to be added explicitly in the file:
# INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
# LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include /usr/lib/x86_64-linux-gnu/hdf5/serial/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial
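Before rebuilding, it is worth confirming that the serial HDF5 libraries really are in that directory (path from my Ubuntu 16.04 machine):
ls /usr/lib/x86_64-linux-gnu/hdf5/serial/    # expect to see libhdf5.so and libhdf5_hl.so here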
- Recompiling then hits another problem:
dsp@dsp:~/caffe$ make all -j16
LD -o .build_release/lib/libcaffe.so.1.0.0
/usr/bin/ld: cannot find -lcudnn
collect2: error: ld returned 1 exit status
Makefile:572: recipe for target '.build_release/lib/libcaffe.so.1.0.0' failed
make: *** [.build_release/lib/libcaffe.so.1.0.0] Error 1
- I found that the directory /usr/local/cuda/lib64/ did not contain libcudnn.so.
- At first this was left unresolved.
- It actually turned out to be simple once I thought it through: it is a cuDNN linking problem. I re-copied the cuDNN files into the CUDA directory and recreated the links (a sketch of the usual commands follows), and this error stopped appearing.
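A minimal sketch of that re-copy, assuming the cuDNN tarball was extracted to ~/cuda (adjust the path to match your download):
sudo cp ~/cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp ~/cuda/lib64/libcudnn* /usr/local/cuda/lib64/    # copy the cuDNN libraries
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
sudo ldconfig    # refresh the linker cache so -lcudnn resolves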
- Continuing the build, another error:
//home/dsp/anaconda2/lib/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
Makefile:625: recipe for target '.build_release/tools/upgrade_net_proto_text.bin' failed
make: *** [.build_release/tools/upgrade_net_proto_text.bin] Error 1
make: *** Waiting for unfinished jobs....
//home/dsp/anaconda2/lib/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
Makefile:625: recipe for target '.build_release/tools/upgrade_net_proto_binary.bin' failed
make: *** [.build_release/tools/upgrade_net_proto_binary.bin] Error 1
//home/dsp/anaconda2/lib/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
Makefile:625: recipe for target '.build_release/tools/upgrade_solver_proto_text.bin' failed
make: *** [.build_release/tools/upgrade_solver_proto_text.bin] Error 1
//home/dsp/anaconda2/lib/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
- I then tried adding a link: sudo ln -s /home/username/anaconda2/lib/libpng16.so.16 libpng16.so.16 (this approach does not work) and got a different error:
/usr/local/cuda-8.0/lib64/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
Makefile:625: recipe for target '.build_release/tools/upgrade_net_proto_binary.bin' failed
make: *** [.build_release/tools/upgrade_net_proto_binary.bin] Error 1
make: *** Waiting for unfinished jobs....
/usr/local/cuda-8.0/lib64/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
/usr/local/cuda-8.0/lib64/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
Makefile:630: recipe for target '.build_release/examples/siamese/convert_mnist_siamese_data.bin' failed
make: *** [.build_release/examples/siamese/convert_mnist_siamese_data.bin] Error 1
/usr/local/cuda-8.0/lib64/libpng16.so.16: undefined reference to 'inflateValidate@ZLIB_1.2.9'
collect2: error: ld returned 1 exit status
- In the end, many thanks to this write-up of Caffe compilation errors: http://blog.csdn.net/ruotianxia/article/details/78437464
- Several of the errors it records are representative; after applying the fix below, the error no longer appeared.
Add the following line to Makefile.config:
LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda2/lib
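My understanding of why this helps: Anaconda's libpng16 is built against Anaconda's own newer zlib (which provides inflateValidate@ZLIB_1.2.9), and the extra rpath lets the linker pick up that zlib instead of the older system one. A quick way to see which libz Anaconda's libpng resolves to (path from my setup):
ldd $HOME/anaconda2/lib/libpng16.so.16 | grep libz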
- Then running make all reports:
dsp@dsp:~/caffe$ make all -j16
make: Nothing to be done for 'all'
- The fix is simple:
The message "make: Nothing to be done for `all'" just means the project is already built and no source file has changed. To force a rebuild, delete the previous build output first:
make clean
make
5. The first light of dawn
- Build in the following order:
make all -j16
make runtest -j16
make pycaffe -j16
make matcaffe -j16
- make all and make runtest take quite a while; make pycaffe goes smoothly.
[----------] Global test environment tear-down
[==========] 2123 tests from 281 test cases ran. (285688 ms total)
[  PASSED  ] 2123 tests.
dsp@dsp:~/caffe$ make pycaffe -j16
touch python/caffe/proto/__init__.py
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
PROTOC (python) src/caffe/proto/caffe.proto
- Actually using pycaffe, however, fails:
dsp@dsp:~/caffe$ python
Python 2.7.14 |Anaconda, Inc.| (default, Oct 16 2017, 17:29:19)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> caffe_root="/home/dsp/caffe/"
>>> sys.path.insert(0,caffe_root+'python')
>>> import caffe
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dsp/caffe/python/caffe/__init__.py", line 1, in <module>
    from .pycaffe import Net, SGDSolver, NesterovSolver, AdaGradSolver, RMSPropSolver, AdaDeltaSolver, AdamSolver, NCCL, Timer
  File "/home/dsp/caffe/python/caffe/pycaffe.py", line 15, in <module>
    import caffe.io
  File "/home/dsp/caffe/python/caffe/io.py", line 8, in <module>
    from caffe.proto import caffe_pb2
  File "/home/dsp/caffe/python/caffe/proto/caffe_pb2.py", line 6, in <module>
    from google.protobuf.internal import enum_type_wrapper
ImportError: No module named google.protobuf.internal
- Installing protobuf under conda fixes it.
pycaffe error "No module named google.protobuf.internal": since I am on Anaconda2, the solution is to install the latest protobuf into it:
conda install protobuf
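A quick sanity check afterwards, run with the same Anaconda python that caffe will use:
python -c "from google.protobuf.internal import enum_type_wrapper; print('protobuf OK')"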
6. Testing with the MNIST dataset
After Caffe is configured, we can test it with the MNIST dataset as follows:
1. Change to the Caffe root directory
cd ~/caffe
2. Download and unpack the MNIST dataset
./data/mnist/get_mnist.sh
3. Convert it to the LMDB database format
./examples/mnist/create_mnist.sh
4. Train the network
./examples/mnist/train_lenet.sh
During training you can watch the loss and accuracy values in the output.
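After training finishes, you can also evaluate the saved weights; this assumes the default solver settings, which snapshot to examples/mnist/lenet_iter_10000.caffemodel:
./build/tools/caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100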
- make matcaffe hits a gcc version warning:
dsp@dsp:~/caffe$ make matcaffe -j16
MEX matlab/+caffe/private/caffe_.cpp
Building with 'g++'.
Warning: you are using gcc version '5.4.0'. This version of gcc is not supported; MEX currently supports '4.9.x'. For a list of currently supported compilers, see: http://www.mathworks.com/support/compilers/current_release.
MEX completed successfully.
The fix: in the Makefile, around line 410, add CXXFLAGS += -std=c++11 directly below the line CXXFLAGS += -MMD -MP, so that it reads:
CXXFLAGS += -MMD -MP
CXXFLAGS += -std=c++11
Then run make clean and make all from the Caffe root directory.
- Running make mattest then reports an error:
....... b/+caffe/private/caffe_.mexa64' missing required symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEED1Ev'.
Missing symbol '_ZNSt7__cxx1112basic_stringIwSt11char_traitsIwESaIwEE12_M_constructEmw' required by '/usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.58.0->/home/dsp/caffe/matlab/+caffe/private/caffe_.mexa64'.
Missing symbol '_ZNSt7__cxx1112basic_stringIwSt11char_traitsIwESaIwEE9_M_createERmm' required by '/usr/lib/x86_64-linux-gnu/libboost_python-py27.so.1.58.0->/home/dsp/caffe/matlab/+caffe/private/caffe_.mexa64'.
Error in caffe.set_mode_cpu (line 5)
caffe_('set_mode_cpu');
Error in caffe.run_tests (line 6)
caffe.set_mode_cpu();
- Reference: "Using the MATLAB interface in Caffe"
- Finally, set the path for caffe/python so that caffe can be imported from a terminal opened in any directory (see the snippet below).
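One common way to do this (paths from my setup) is to append the pycaffe directory to PYTHONPATH in ~/.bashrc:
echo 'export PYTHONPATH=/home/dsp/caffe/python:$PYTHONPATH' >> ~/.bashrc
source ~/.bashrc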
- After roughly two days, quite a lot of things ended up installed, and thankfully the system never had to be reinstalled. The details:
cuda: /usr/local/
opencv_3.1: /usr/local/
anaconda2, caffe: /home/dsp/
python: system default 2.7; anaconda 2.7; tensorflow_py3.5 in a virtual environment
matlab2017a: /home/dsp/bin/matlab
caffe: /home/dsp/caffe

Usage:
------ matlab2017a: just type matlab in a terminal; the GUI has an issue that is still unresolved
------ default terminal python:
dsp@dsp:~$ python
Python 2.7.14 |Anaconda, Inc.| (default, Oct 16 2017, 17:29:19)
[GCC 7.2.0] on linux2
------ typing spyder in a terminal uses Anaconda's bundled python 2.7
------ using tensorflow1.4 + python3.5:
dsp@dsp:~$ source activate tensorflow_py3.5
(tensorflow_py3.5) dsp@dsp:~$ spyder
Notes:
1. If you need different python environments, create your own virtual environments
2. When installing dependencies, pay attention to where they are installed
3. You can also run (tensorflow_py3.5) dsp@dsp:~$ anaconda-navigator to install and launch spyder
------ using pycharm:
1. Unpack the archive and it can be used directly
2. Run: (tensorflow_py3.5) dsp@dsp:~$ sh ./pycharm/bin/pycharm.sh (any directory works as long as the path is correct)
3. Set the interpreter to python2.7 or tensorflow_py3.5
------ using caffe:
1. Anaconda's bundled python 2.7 is fine
2. Add the caffe path, then use it
3. On this machine, open a terminal in any directory, type python, then import caffe
Reference:
Installing Ubuntu 16.04 + CUDA 8.0 + Caffe + Python + MATLAB + OpenCV 3.0:
http://blog.csdn.net/shiorioxy/article/details/52652831
http://blog.csdn.net/u012841667/article/details/53572431 (explanation of each Makefile.config setting)