[Learning OpenCV] Computing the Overlapping Region of Two Images


    Problem statement: Given two images, Image1 and Image2, compute their overlapping region and mark that region on both Image1 and Image2.

    Algorithm idea:

    If the two images share an overlapping region, then matching them yields a complete panorama, so the problem can be reduced to an image-matching problem.

    Image matching alone can blend the two images into a panorama, but it cannot by itself mark the overlapping region on the original images.

    Treating each image as a polygon, computing the overlapping region amounts to computing the intersection of the two polygons.

    The polygon intersection gives the vertex set of the overlapping region; the homography matrix is then used to map those vertices back to the original images, so the overlapping region can be marked on each.

    Algorithm steps:

    1. Match the two images and compute the homography matrix.

    2. Using the homography, compute the transformed positions of image 2's corner points (see the sketch after this list).

    3. Compute the polygon intersection of image 1's corner set and image 2's transformed corner set.

    4. Using the inverse of the homography, map the intersection polygon back to image 2's original coordinates.
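
    As an aside, the per-point matrix multiplication in steps 2 and 4 can also be done with cv::perspectiveTransform, which applies the homography and divides by the third homogeneous coordinate internally. A minimal sketch, assuming H is the 3x3 matrix later returned by findHomography (the helper name WarpPoints is a placeholder):

#include <opencv2/opencv.hpp>
#include <vector>

// Map a point set through a 3x3 homography H (e.g. the four corners of image 2
// into image 1's frame for step 2); pass H.inv() to go the other way (step 4).
std::vector<cv::Point2f> WarpPoints(const std::vector<cv::Point2f> &pts, const cv::Mat &H)
{
    std::vector<cv::Point2f> warped;
    cv::perspectiveTransform(pts, warped, H);   // applies H and divides by the homogeneous coordinate
    return warped;
}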

    The implementation is shown below:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // cv::SIFT (OpenCV 2.x nonfree module)
#include <vector>

using namespace cv;
using namespace std;

// Polygon intersection; see the post linked below for an implementation.
bool PolygonClip(const std::vector<cv::Point> &poly1,
                 const std::vector<cv::Point> &poly2,
                 std::vector<cv::Point> &interPoly);

bool ImageOverlap(cv::Mat &img1, cv::Mat &img2,
                  std::vector<cv::Point> &vPtsImg1, std::vector<cv::Point> &vPtsImg2)
{
    cv::Mat g1(img1, Rect(0, 0, img1.cols, img1.rows));
    cv::Mat g2(img2, Rect(0, 0, img2.cols, img2.rows));

    cv::cvtColor(g1, g1, CV_BGR2GRAY);
    cv::cvtColor(g2, g2, CV_BGR2GRAY);

    std::vector<cv::KeyPoint> keypoints_roi, keypoints_img;  /* keypoints found using SIFT */
    cv::Mat descriptor_roi, descriptor_img;                  /* descriptors for SIFT */
    cv::FlannBasedMatcher matcher;                           /* FLANN-based matcher for the keypoints */
    std::vector<cv::DMatch> matches, good_matches;
    cv::SIFT sift;
    int i;

    sift(g1, Mat(), keypoints_roi, descriptor_roi);          /* keypoints and descriptors of image 1 */
    sift(g2, Mat(), keypoints_img, descriptor_img);          /* keypoints and descriptors of image 2 */
    matcher.match(descriptor_roi, descriptor_img, matches);

    double max_dist = 0;
    double min_dist = 1000;

    //-- Quick calculation of max and min distances between matched keypoints
    for (int i = 0; i < descriptor_roi.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    //-- Keep only matches whose distance is less than 3 * min_dist
    for (i = 0; i < descriptor_roi.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }

    //printf("%ld no. of matched keypoints in right image\n", good_matches.size());
    /* Draw matched keypoints */
    //Mat img_matches;
    //drawMatches(img1, keypoints_roi, img2, keypoints_img,
    //    good_matches, img_matches, Scalar::all(-1),
    //    Scalar::all(-1), vector<char>(),
    //    DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
    //imshow("matches", img_matches);

    vector<Point2f> keypoints1, keypoints2;
    for (i = 0; i < good_matches.size(); i++)
    {
        keypoints1.push_back(keypoints_img[good_matches[i].trainIdx].pt);
        keypoints2.push_back(keypoints_roi[good_matches[i].queryIdx].pt);
    }

    // Compute the homography (maps image 2 coordinates into image 1's frame)
    Mat H = findHomography(keypoints1, keypoints2, CV_RANSAC);

    // Show the stitched image
    // cv::Mat stitchedImage;
    // int mRows = img2.rows;
    // if (img1.rows > img2.rows)
    // {
    //     mRows = img1.rows;
    // }
    // stitchedImage = Mat::zeros(img2.cols + img1.cols, mRows, CV_8UC3);
    // warpPerspective(img2, stitchedImage, H, Size(img2.cols + img1.cols, mRows));
    // Mat half(stitchedImage, Rect(0, 0, img1.cols, img1.rows));
    // img1.copyTo(half);
    // imshow("stitchedImage", stitchedImage);

    std::vector<cv::Point> vSrcPtsImg1;
    std::vector<cv::Point> vSrcPtsImg2;

    vSrcPtsImg1.push_back(cv::Point(0, 0));
    vSrcPtsImg1.push_back(cv::Point(0, img1.rows));
    vSrcPtsImg1.push_back(cv::Point(img1.cols, img1.rows));
    vSrcPtsImg1.push_back(cv::Point(img1.cols, 0));

    vSrcPtsImg2.push_back(cv::Point(0, 0));
    vSrcPtsImg2.push_back(cv::Point(0, img2.rows));
    vSrcPtsImg2.push_back(cv::Point(img2.cols, img2.rows));
    vSrcPtsImg2.push_back(cv::Point(img2.cols, 0));

    // Map image 2's corners into image 1's coordinate frame
    std::vector<cv::Point> vWarpPtsImg2;
    for (int i = 0; i < vSrcPtsImg2.size(); i++)
    {
        cv::Mat srcMat = Mat::zeros(3, 1, CV_64FC1);
        srcMat.at<double>(0, 0) = vSrcPtsImg2[i].x;
        srcMat.at<double>(1, 0) = vSrcPtsImg2[i].y;
        srcMat.at<double>(2, 0) = 1.0;

        cv::Mat warpMat = H * srcMat;
        cv::Point warpPt;
        warpPt.x = cvRound(warpMat.at<double>(0, 0) / warpMat.at<double>(2, 0));
        warpPt.y = cvRound(warpMat.at<double>(1, 0) / warpMat.at<double>(2, 0));

        vWarpPtsImg2.push_back(warpPt);
    }

    // Intersect image 1's polygon with the warped polygon of image 2
    if (!PolygonClip(vSrcPtsImg1, vWarpPtsImg2, vPtsImg1))
        return false;

    // Map the intersection polygon back into image 2's coordinate frame
    for (int i = 0; i < vPtsImg1.size(); i++)
    {
        cv::Mat srcMat = Mat::zeros(3, 1, CV_64FC1);
        srcMat.at<double>(0, 0) = vPtsImg1[i].x;
        srcMat.at<double>(1, 0) = vPtsImg1[i].y;
        srcMat.at<double>(2, 0) = 1.0;

        cv::Mat warpMat = H.inv() * srcMat;
        cv::Point warpPt;
        warpPt.x = cvRound(warpMat.at<double>(0, 0) / warpMat.at<double>(2, 0));
        warpPt.y = cvRound(warpMat.at<double>(1, 0) / warpMat.at<double>(2, 0));
        vPtsImg2.push_back(warpPt);
    }
    return true;
}
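    A minimal usage sketch, assuming the ImageOverlap function above is compiled in and the two file names (placeholders) point to overlapping photographs; the returned point sets are drawn with cv::polylines:

#include <opencv2/opencv.hpp>
#include <vector>

// ImageOverlap is the function defined above; PolygonClip comes from the post linked below.
bool ImageOverlap(cv::Mat &img1, cv::Mat &img2,
                  std::vector<cv::Point> &vPtsImg1, std::vector<cv::Point> &vPtsImg2);

int main()
{
    cv::Mat img1 = cv::imread("image1.jpg");   // placeholder file names
    cv::Mat img2 = cv::imread("image2.jpg");
    if (img1.empty() || img2.empty())
        return -1;

    std::vector<cv::Point> vPtsImg1, vPtsImg2;
    if (!ImageOverlap(img1, img2, vPtsImg1, vPtsImg2))
        return -1;

    // Draw the overlap polygon on each image.
    std::vector<std::vector<cv::Point> > contour1(1, vPtsImg1);
    std::vector<std::vector<cv::Point> > contour2(1, vPtsImg2);
    cv::polylines(img1, contour1, true, cv::Scalar(0, 0, 255), 2);
    cv::polylines(img2, contour2, true, cv::Scalar(0, 0, 255), 2);

    cv::imshow("Image1 overlap", img1);
    cv::imshow("Image2 overlap", img2);
    cv::waitKey(0);
    return 0;
}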

    For the polygon intersection routine (PolygonClip), see: http://www.cnblogs.com/dwdxdy/p/3232110.html
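
    If both polygons are convex, which holds for the image rectangle and, for typical homographies, for the warped quadrilateral as well, OpenCV's cv::intersectConvexConvex can stand in for PolygonClip. A hedged sketch (ConvexOverlap is a hypothetical wrapper; note that it works on Point2f rather than Point):

#include <opencv2/opencv.hpp>
#include <vector>

// Convex-only replacement for PolygonClip: both input polygons must be convex;
// the intersection polygon is written to interPoly, and the return value says
// whether a non-empty overlap was found.
bool ConvexOverlap(const std::vector<cv::Point2f> &poly1,
                   const std::vector<cv::Point2f> &poly2,
                   std::vector<cv::Point2f> &interPoly)
{
    float area = cv::intersectConvexConvex(poly1, poly2, interPoly);
    return area > 0.0f && !interPoly.empty();
}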

    Finally, a screenshot of the program's result is shown below:
