C++: How to correctly use cv::triangulatePoints()

Disclaimer: this page reproduces a popular StackOverflow question and its answers under the CC BY-SA 4.0 license. You are free to use/share it, but you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/16295551/

Date: 2020-08-27 20:14:01  Source: igfitidea

How to correctly use cv::triangulatePoints()

Tags: c++, opencv, triangulation

Asked by Ander Biguri

I am trying to triangulate some points with OpenCV and I found the cv::triangulatePoints() function. The problem is that there is almost no documentation for it, and no examples.

I have some doubts about it.

  1. What method does it use? I've been doing a little research on triangulation, and there are several methods (linear, linear LS, eigen, iterative LS, iterative eigen, ...), but I can't find which one OpenCV uses.

  2. How should I use it? It seems that as input it needs a projection matrix and 3xN homogeneous 2D points. I have them defined as std::vector<cv::Point3d> pnts, but as output it needs 4xN arrays, and obviously I can't create a std::vector<cv::Point4d> because it doesn't exist, so how should I define the output vector?

For the second question I tried cv::Mat pnts3D(4,N,CV_64F); and cv::Mat pnts3d;, but neither seems to work (both throw an exception).

Answered by Ander Biguri

1.- The method used is Least Squares. There are more complex algorithms than this one, but it is still the most common one, as the other methods may fail in some cases (e.g. some of them fail if the points are on a plane or at infinity).

The method can be found in Multiple View Geometry in Computer Vision by Richard Hartley and Andrew Zisserman (p. 312).

2.- The usage:

cv::Mat pnts3D(1,N,CV_64FC4);   // output: N homogeneous 3D points
cv::Mat cam0pnts(1,N,CV_64FC2); // N 2D points seen by camera 0
cv::Mat cam1pnts(1,N,CV_64FC2); // N 2D points seen by camera 1

Fill the two 2-channel point matrices with the points from the images.
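
One possible way to fill them (a sketch; pts0 and pts1 stand for hypothetical std::vector<cv::Point2d> containers holding your matched image points):

for (int i = 0; i < N; ++i)
{
    cam0pnts.at<cv::Vec2d>(0,i) = cv::Vec2d(pts0[i].x, pts0[i].y); // camera 0
    cam1pnts.at<cv::Vec2d>(0,i) = cv::Vec2d(pts1[i].x, pts1[i].y); // camera 1
}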

cam0 and cam1 are 3x4 camera matrices of type cv::Mat (intrinsic and extrinsic parameters). You can construct them by multiplying A*RT, where A is the intrinsic parameter matrix and RT is the 3x4 rotation-translation pose matrix.
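
For illustration, a minimal sketch of that multiplication; the intrinsics and the pose below are made-up example values, not anything prescribed by OpenCV:

// Hypothetical intrinsics A: focal lengths and principal point are example values
cv::Mat A = (cv::Mat_<double>(3,3) << 800,   0, 320,
                                        0, 800, 240,
                                        0,   0,   1);
// Hypothetical pose RT = [R|t]: identity rotation, small baseline along x
cv::Mat RT = (cv::Mat_<double>(3,4) << 1, 0, 0, -0.1,
                                       0, 1, 0,  0.0,
                                       0, 0, 1,  0.0);
cv::Mat cam1 = A * RT; // 3x4 projection matrix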

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);


NOTE: pnts3D NEEDS to be a 4-channel 1xN cv::Mat when defined; it throws an exception otherwise. However, the result is a cv::Mat(4,N,CV_64FC1) matrix. Really confusing, but it is the only way I didn't get an exception.

UPDATE: As of version 3.0 or possibly earlier, this is no longer true: pnts3D can also be of type Mat(4,N,CV_64FC1), or it may be left completely empty (as usual, it is created inside the function).
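
Because the result comes back as a 4xN matrix of homogeneous coordinates, a small conversion helper can be handy. A minimal sketch, assuming the CV_64FC1 output layout described in the note above:

std::vector<cv::Point3d> toEuclidean(const cv::Mat& pnts4D)
{
    std::vector<cv::Point3d> out;
    out.reserve(pnts4D.cols);
    for (int i = 0; i < pnts4D.cols; ++i)
    {
        const double w = pnts4D.at<double>(3,i); // homogeneous scale factor
        out.emplace_back(pnts4D.at<double>(0,i) / w,
                         pnts4D.at<double>(1,i) / w,
                         pnts4D.at<double>(2,i) / w);
    }
    return out;
}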

Answered by Bálint Kriván

A small addition to @Ander Biguri's answer. You should get your image points on the original (not yet undistorted) image, and invoke undistortPoints() on cam0pnts and cam1pnts, because cv::triangulatePoints then expects the 2D points in normalized coordinates (independent of the camera); in that case cam0 and cam1 should be plain [R|t] matrices, and you do not need to multiply them by A.
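
A minimal sketch of that variant; A0/A1 and dist0/dist1 stand for hypothetical intrinsics and distortion coefficients from your calibration, and R/t (3x3 and 3x1 CV_64F Mats) for the relative pose of the second camera, e.g. from cv::recoverPose:

std::vector<cv::Point2d> cam0und, cam1und;
cv::undistortPoints(cam0pnts, cam0und, A0, dist0); // to normalized coordinates
cv::undistortPoints(cam1pnts, cam1und, A1, dist1);

cv::Mat P0 = cv::Mat::eye(3, 4, CV_64F);           // first camera: [I|0]
cv::Mat P1(3, 4, CV_64F);                          // second camera: [R|t]
R.copyTo(P1(cv::Rect(0, 0, 3, 3)));
t.copyTo(P1(cv::Rect(3, 0, 1, 3)));

cv::Mat pnts3D;
cv::triangulatePoints(P0, P1, cam0und, cam1und, pnts3D);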

Answered by Gines Hidalgo

Thanks to Ander Biguri! His answer helped me a lot. But I always prefer the alternative with std::vector, so I edited his solution to this:

std::vector<cv::Point2d> cam0pnts;
std::vector<cv::Point2d> cam1pnts;
// You fill them, both with the same size...

// You can pick any of the following 2 (your choice)
// cv::Mat pnts3D(1,cam0pnts.size(),CV_64FC4);
cv::Mat pnts3D(4,cam0pnts.size(),CV_64F);

cv::triangulatePoints(cam0,cam1,cam0pnts,cam1pnts,pnts3D);

So you just need to emplace_back the points into those vectors. Main advantage: you do not need to know the size N before you start filling them. Unfortunately, there is no cv::Point4f, so pnts3D must be a cv::Mat...
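
For example (a sketch; p0 and p1 stand for hypothetical matched cv::Point2f locations):

cam0pnts.emplace_back(p0.x, p0.y);
cam1pnts.emplace_back(p1.x, p1.y);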

Answered by YuZ

I tried cv::triangulatePoints, but somehow it calculates garbage. I was forced to implement a linear triangulation method manually, which returns a 4x1 matrix for the triangulated 3D point:

Mat triangulate_Linear_LS(Mat mat_P_l, Mat mat_P_r, Mat warped_back_l, Mat warped_back_r)
{
    // Build the 4x3 linear system A*X = b from x = P*X for both views:
    // each image point contributes the rows (u*P.row(2) - P.row(0)) and
    // (v*P.row(2) - P.row(1)), where u = x/w and v = y/w.
    Mat A(4,3,CV_64FC1), b(4,1,CV_64FC1), X(3,1,CV_64FC1), X_homogeneous(4,1,CV_64FC1), W(1,1,CV_64FC1);
    W.at<double>(0,0) = 1.0; // homogeneous w-coordinate appended at the end
    A.at<double>(0,0) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(0,0);
    A.at<double>(0,1) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(0,1);
    A.at<double>(0,2) = (warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(0,2);
    A.at<double>(1,0) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,0) - mat_P_l.at<double>(1,0);
    A.at<double>(1,1) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,1) - mat_P_l.at<double>(1,1);
    A.at<double>(1,2) = (warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,2) - mat_P_l.at<double>(1,2);
    A.at<double>(2,0) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(0,0);
    A.at<double>(2,1) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(0,1);
    A.at<double>(2,2) = (warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(0,2);
    A.at<double>(3,0) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,0) - mat_P_r.at<double>(1,0);
    A.at<double>(3,1) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,1) - mat_P_r.at<double>(1,1);
    A.at<double>(3,2) = (warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,2) - mat_P_r.at<double>(1,2);
    b.at<double>(0,0) = -((warped_back_l.at<double>(0,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(0,3));
    b.at<double>(1,0) = -((warped_back_l.at<double>(1,0)/warped_back_l.at<double>(2,0))*mat_P_l.at<double>(2,3) - mat_P_l.at<double>(1,3));
    b.at<double>(2,0) = -((warped_back_r.at<double>(0,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(0,3));
    b.at<double>(3,0) = -((warped_back_r.at<double>(1,0)/warped_back_r.at<double>(2,0))*mat_P_r.at<double>(2,3) - mat_P_r.at<double>(1,3));
    // Solve the over-determined 4x3 system in a least-squares sense via SVD,
    // then append w = 1 to return the point in homogeneous coordinates.
    solve(A,b,X,DECOMP_SVD);
    vconcat(X,W,X_homogeneous);
    return X_homogeneous;
}

The input parameters are two 3x4 camera projection matrices and a corresponding left/right pixel pair (x,y,w).
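
A hypothetical call, assuming P_l and P_r are your two 3x4 projection matrices and the pixel pair is given in homogeneous (x,y,w) form:

Mat xl = (Mat_<double>(3,1) << 100.0, 120.0, 1.0); // left pixel
Mat xr = (Mat_<double>(3,1) <<  90.0, 118.0, 1.0); // right pixel
Mat Xh = triangulate_Linear_LS(P_l, P_r, xl, xr);  // 4x1 homogeneous point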

Answered by Chris

Alternatively you could use the method from Hartley & Zisserman implemented here: http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/
