• SfM-Based Visual Positioning


The method is from this journal paper I co-authored:

    Vision-Based Positioning for Internet-of-Vehicles, IEEE Transactions on Intelligent Transportation Systems, 2016.

Pipeline: image-based 3D reconstruction --> point-cloud compression --> 3D-2D matching

Method: http://www.clarenceliang.com/positioning

Dataset: http://www.clarenceliang.com/dataset

     




Note: what follows are my personal notes; the source code has not been released yet. However, the VisualSFM and 2D-3D matching components are public and can be found on their respective homepages.

    Introduction


    images/:   put your training images in this folder

    testImages/:   put your testing images in this folder

    bundle/:   put the .out file generated by visualSFM in this folder

    file_gen/:   files generated in the compression step

    result/:   results generated in the localization step

    work_flow_2.m:   do the model compression

    BatchLocalizer.sh:   script to do the localization of the test images

    simple_test.m:   generate the test result without ground truth 

    bash_test.m:    generate the test result with ground truth
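The folder layout above can be created up front with a short shell snippet; the root folder name Positioning/ is taken from the paths used in the steps below:

```shell
#!/bin/sh
# Create the folder layout the pipeline expects.
# The root name "Positioning/" follows the paths used in the steps below.
mkdir -p Positioning/images      # training images
mkdir -p Positioning/testImages  # testing images
mkdir -p Positioning/bundle      # bundle.out saved from VisualSFM
mkdir -p Positioning/file_gen    # files generated by the compression step
mkdir -p Positioning/result      # results of the localization step
```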



    Steps to use the code


    • Training Phase

    1. Install visualSFM

    http://ccwu.me/vsfm/

    2. Open VisualSFM

    3. Load Images

    File->Open + Multi Images->select the training images at ‘Positioning/images/’

    4. Feature Matching

    Click on ‘Compute Missing Matches’ 

    5. 3D Reconstruction

    Click on ‘Compute 3D Reconstruction’

    6. Re-order the cameras

    Hit ENTER -> type ‘sort’ -> hit ENTER

    7. Save Results  

    Sfm->Extra Functions->Save Current Model->‘bundle.out’ at ‘Positioning/bundle/’

      Save Current Cameras-> ‘list.txt’ at ‘Positioning/’

    8. Close VisualSFM

    9. Model Compression

    Command line -> ./bin/siftb2a list.txt

    Execute ‘work_flow_2.m’ in MATLAB, and record the value of the variable pwk.



    • Testing Phase

    10. Localization

    Command line -> ./BatchLocalizer.sh bundle/bundle.out list.txt file_gen/cluster_k_185.txt 100 (10^pwk) testImages/ (path of the testing images) result/ (path of the results) 100 (testing times) 0.4 100
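Since the argument list is long, here is the same call sketched with each positional argument named. The variable names are mine, added only for readability, and the final two values (0.4 and 100) are passed through as given, since their meaning is not documented here:

```shell
#!/bin/sh
# Sketch of the step-10 invocation; variable names are illustrative only.
MODEL=bundle/bundle.out              # 3D model saved from VisualSFM (step 7)
CAMERAS=list.txt                     # camera list saved in step 7
CLUSTERS=file_gen/cluster_k_185.txt  # compressed model from work_flow_2.m
SCALE=100                            # 10^pwk, recorded in step 9
TEST_DIR=testImages/                 # path of the testing images
OUT_DIR=result/                      # path for the results
RUNS=100                             # number of testing runs
echo ./BatchLocalizer.sh "$MODEL" "$CAMERAS" "$CLUSTERS" "$SCALE" \
     "$TEST_DIR" "$OUT_DIR" "$RUNS" 0.4 100
```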

    11. Show the Testing Results

    Execute ‘simple_test.m’ in MATLAB to obtain ‘trainM.mat’ (positions of the training images), ‘testM.mat’ (positions of the testing images), ‘point_position.mat’ (positions of the model points), and ‘point_color.mat’ (colors of the model points).

    12. Compute the Error Against the Ground Truth (optional)

    Write the positions of the ground-truth images into a new matrix ‘trainR.mat’. Then keep only the corresponding rows in ‘trainM’ and delete the others. Write the positions of the testing images into a new matrix ‘testR.mat’.

    Execute ‘bash_test.m’ in MATLAB to get ref and dev. The variable ref stores the position of each test image in the ground-truth coordinate frame; dev is the error between each test image and the ground truth.
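As a rough sketch of what dev represents (my reading of the text, not the actual bash_test.m code): it is the per-image Euclidean distance between the estimated position and the ground-truth position. In shell, with one made-up 3D position per line:

```shell
#!/bin/sh
# Illustrative only: made-up estimated vs ground-truth positions ("x y z").
printf '1 2 2\n' > est.txt   # hypothetical estimated test-image position
printf '1 2 0\n' > gt.txt    # hypothetical ground-truth position
# Pair the two files column-wise and compute the Euclidean distance.
paste est.txt gt.txt | awk '{
  dx = $1 - $4; dy = $2 - $5; dz = $3 - $6
  printf "dev = %.1f\n", sqrt(dx*dx + dy*dy + dz*dz)
}' > dev.txt
cat dev.txt   # prints "dev = 2.0" for these made-up values
```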

     

  • Original post: https://www.cnblogs.com/clarenceliang/p/6544902.html