C++ - Improve matching of feature points with OpenCV

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/17967950/

Improve matching of feature points with OpenCV

Tags: c++, opencv, matching, feature-detection

Asked by filla2003

I want to match feature points in stereo images. I've already found and extracted the feature points with different algorithms and now I need good matching. In this case I'm using the FAST algorithm for detection and extraction and the BruteForceMatcher for matching the feature points.

The matching code:

vector< vector<DMatch> > matches;
//using either FLANN or BruteForce
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 1 );

//just some temporarily code to have the right data structure
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());  
for (size_t i = 0; i < matches.size(); ++i)
{ 
    good_matches2.push_back(matches[i][0]);     
}

Because there are a lot of false matches, I calculated the min and max distance and removed all matches that are too bad:

//calculation of max and min distances between keypoints
double max_dist = 0; double min_dist = 100;
for( int i = 0; i < descriptors_1.rows; i++ )
{
    double dist = good_matches2[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

//find the "good" matches
vector< DMatch > good_matches;
for( int i = 0; i < descriptors_1.rows; i++ )
{
    if( good_matches2[i].distance <= 5*min_dist )
    {
        good_matches.push_back( good_matches2[i]); 
    }
}

The problem is that I either get a lot of false matches or only a few right ones (see the images below).

many matches with bad results
(source: codemax.de)

only a few good matches
(source: codemax.de)

I think it's not a problem of programming but more a matching thing. As far as I understand, the BruteForceMatcher only considers the descriptor distance of the feature points (which comes from the FeatureExtractor), not the spatial distance (x & y position), which is important in my case too. Does anybody have experience with this problem or a good idea to improve the matching results?

EDIT

I changed the code so that it gives me the 50 best matches. After this I check the first match to see whether it lies inside a specified area. If it doesn't, I take the next match, and so on, until I have found a match inside the given area.

vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 50 );

//look if the match is inside a defined area of the image
double tresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height*leftImageGrey.size().height + leftImageGrey.size().width*leftImageGrey.size().width));

vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());  
for (size_t i = 0; i < matches.size(); ++i)
{
    for (int j = 0; j < matches[i].size(); j++)
    {
        //calculate local distance for each possible match
        Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
        Point2f to = keypoints_2[matches[i][j].trainIdx].pt;
        double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
        //save as best match if local distance is in specified area
        if (dist < tresholdDist)
        {
            good_matches2.push_back(matches[i][j]);
            j = matches[i].size();
        }
    }
}

I don't think I get more matches, but this way I'm able to remove more of the false matches:

less but better features
(source: codemax.de)

Accepted answer by filla2003

By comparing all feature detection algorithms I found a good combination which gives me a lot more matches. Now I am using FAST for feature detection, SIFT for feature extraction and BruteForce for the matching. Combined with the check whether the match is inside a defined region, I get a lot of matches; see the image:

a lot of good matches with FAST and SIFT
(source: codemax.de)

The relevant code:

Ptr<FeatureDetector> detector;
detector = new DynamicAdaptedFeatureDetector ( new FastAdjuster(10,true), 5000, 10000, 10);
detector->detect(leftImageGrey, keypoints_1);
detector->detect(rightImageGrey, keypoints_2);

Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");
extractor->compute( leftImageGrey, keypoints_1, descriptors_1 );
extractor->compute( rightImageGrey, keypoints_2, descriptors_2 );

vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
matcher->knnMatch( descriptors_1, descriptors_2, matches, 500 );

//look whether the match is inside a defined area of the image
//only 25% of maximum of possible distance
double tresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height*leftImageGrey.size().height + leftImageGrey.size().width*leftImageGrey.size().width));

vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());  
for (size_t i = 0; i < matches.size(); ++i)
{ 
    for (int j = 0; j < matches[i].size(); j++)
    {
        Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
        Point2f to = keypoints_2[matches[i][j].trainIdx].pt;

        //calculate local distance for each possible match
        double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));

        //save as best match if local distance is in specified area and on same height
        if (dist < tresholdDist && fabs(from.y - to.y) < 5)
        {
            good_matches2.push_back(matches[i][j]);
            j = matches[i].size();
        }
    }
}

Answer by Aurelius

An alternative method of determining high-quality feature matches is the ratio test proposed by David Lowe in his SIFT paper (see page 20 for an explanation). This test rejects poor matches by computing the ratio between the best and second-best match; if the ratio is above some threshold, the match is discarded as low-quality.

std::vector<std::vector<cv::DMatch>> matches;
cv::BFMatcher matcher;
matcher.knnMatch(descriptors_1, descriptors_2, matches, 2);  // Find two nearest matches
vector<cv::DMatch> good_matches;
for (size_t i = 0; i < matches.size(); ++i)
{
    const float ratio = 0.8f; // As in Lowe's paper; can be tuned
    if (matches[i][0].distance < ratio * matches[i][1].distance)
    {
        good_matches.push_back(matches[i][0]);
    }
}

Answer by András Kovács

Besides the ratio test, you can:

Only use symmetric matches:

void symmetryTest(const std::vector<cv::DMatch>& matches1, const std::vector<cv::DMatch>& matches2, std::vector<cv::DMatch>& symMatches)
{
    symMatches.clear();
    for (vector<DMatch>::const_iterator matchIterator1 = matches1.begin(); matchIterator1 != matches1.end(); ++matchIterator1)
    {
        for (vector<DMatch>::const_iterator matchIterator2 = matches2.begin(); matchIterator2 != matches2.end(); ++matchIterator2)
        {
            // keep a match only if it is confirmed in both matching directions
            if ((*matchIterator1).queryIdx == (*matchIterator2).trainIdx && (*matchIterator2).queryIdx == (*matchIterator1).trainIdx)
            {
                symMatches.push_back(DMatch((*matchIterator1).queryIdx, (*matchIterator1).trainIdx, (*matchIterator1).distance));
                break;
            }
        }
    }
}
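
A minimal usage sketch for the function above (the matcher setup and the names matches12, matches21 and symMatches are illustrative assumptions, not part of the original answer): match in both directions and keep only the pairs that agree.

// Match in both directions (image 1 -> image 2 and image 2 -> image 1),
// then keep only the matches confirmed in both directions.
std::vector<cv::DMatch> matches12, matches21, symMatches;
cv::BFMatcher matcher(cv::NORM_L2);   // L2 norm suits float descriptors such as SIFT
matcher.match(descriptors_1, descriptors_2, matches12);
matcher.match(descriptors_2, descriptors_1, matches21);
symmetryTest(matches12, matches21, symMatches);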

And since it's a stereo image, use a RANSAC test:

void ransacTest(const std::vector<cv::DMatch> matches,const std::vector<cv::KeyPoint>&keypoints1,const std::vector<cv::KeyPoint>& keypoints2,std::vector<cv::DMatch>& goodMatches,double distance,double confidence,double minInlierRatio)
{
    goodMatches.clear();
    // Convert keypoints into Point2f
    std::vector<cv::Point2f> points1, points2;
    for (std::vector<cv::DMatch>::const_iterator it= matches.begin();it!= matches.end(); ++it)
    {
        // Get the position of left keypoints
        float x= keypoints1[it->queryIdx].pt.x;
        float y= keypoints1[it->queryIdx].pt.y;
        points1.push_back(cv::Point2f(x,y));
        // Get the position of right keypoints
        x= keypoints2[it->trainIdx].pt.x;
        y= keypoints2[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x,y));
    }
    // Compute F matrix using RANSAC
    std::vector<uchar> inliers(points1.size(),0);
    cv::Mat fundemental= cv::findFundamentalMat(cv::Mat(points1),cv::Mat(points2),inliers,CV_FM_RANSAC,distance,confidence); // confidence probability
    // extract the surviving (inlier) matches
    std::vector<uchar>::const_iterator itIn = inliers.begin();
    std::vector<cv::DMatch>::const_iterator itM = matches.begin();
    // for all matches
    for ( ;itIn!= inliers.end(); ++itIn, ++itM)
    {
        if (*itIn)
        { // it is a valid match
            goodMatches.push_back(*itM);
        }
    }
}
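
A hedged usage sketch (the threshold and confidence values below are plausible starting points, not values from the answer; symMatches is assumed to come from the symmetry test above): distance is the maximum pixel distance to the epipolar line and confidence is passed straight through to cv::findFundamentalMat. Note that minInlierRatio is accepted by the function but not used in the snippet as written.

// Example call: up to 3.0 px from the epipolar line, 99% confidence.
// The last argument (minInlierRatio) is unused by ransacTest as written.
std::vector<cv::DMatch> ransacMatches;
ransacTest(symMatches, keypoints_1, keypoints_2, ransacMatches, 3.0, 0.99, 0.0);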