C++: Extract Translation and Rotation from the Fundamental Matrix
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/14150152/
Extract Translation and Rotation from Fundamental Matrix
Asked by Teris
I am trying to retrieve translation and rotation vectors from a computed fundamental Matrix. I do use OpenCV and the general approach is from wikipedia. My Code is like this:
//Compute Essential Matrix
Mat A = cameraMatrix(); //Computed using chessboard
Mat F = fundamentalMatrix(); //Computed using matching keypoints
Mat E = A.t() * F * A;
//Perform SVD on E
SVD decomp = SVD(E);
//U
Mat U = decomp.u;
//S
Mat S(3, 3, CV_64F, Scalar(0));
S.at<double>(0, 0) = decomp.w.at<double>(0, 0);
S.at<double>(1, 1) = decomp.w.at<double>(0, 1);
S.at<double>(2, 2) = decomp.w.at<double>(0, 2);
//V
Mat V = decomp.vt; //Bug: decomp.vt is already V^T; needs to be decomp.vt.t() (transpose once more)
//W
Mat W(3, 3, CV_64F, Scalar(0));
W.at<double>(0, 1) = -1;
W.at<double>(1, 0) = 1;
W.at<double>(2, 2) = 1;
cout << "computed rotation: " << endl;
cout << U * W.t() * V.t() << endl;
cout << "real rotation:" << endl;
Mat rot;
Rodrigues(images[1].rvec - images[0].rvec, rot); //Difference between known rotations
cout << rot << endl;
At the end I try to compare the estimated rotation to the one I computed using the chessboard that is in every image (I plan to get the extrinsic parameters without the chessboard). For example I get this:
computed rotation:
[0.8543027125286542, -0.382437675069228, 0.352006107978011;
0.3969758209413922, 0.9172325022900715, 0.03308676972148356;
0.3355250705298953, -0.1114717965690797, -0.9354127247453767]
real rotation:
[0.9998572365450219, 0.01122579241510944, 0.01262886032882241;
-0.0114034800333517, 0.9998357441946927, 0.01408706050863871;
-0.01246864754818991, -0.01422906234781374, 0.9998210172891051]
So clearly there seems to be a problem; I just can't figure out what it could be.
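A quick sanity check explains why the computed matrix above cannot be a valid rotation: its determinant is -1, so it is a reflection rather than a proper rotation, which is typical of a sign/transpose mix-up in the SVD factors. A small NumPy sketch (not OpenCV; it only re-checks the values printed above):

```python
import numpy as np

# The "computed rotation" printed above
R = np.array([[0.8543027125286542, -0.382437675069228,   0.352006107978011],
              [0.3969758209413922,  0.9172325022900715,  0.03308676972148356],
              [0.3355250705298953, -0.1114717965690797, -0.9354127247453767]])

det = np.linalg.det(R)
print(det)  # ~ -1.0: a reflection, not a proper rotation (det should be +1)
```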
EDIT: Here are the results I got with the untransposed vt (obviously from another scene):
computed rotation:
[0.8720599858028177, -0.1867080200550876, 0.4523842353671251;
0.141182538980452, 0.9810442195058469, 0.1327393312518831;
-0.4685924368239661, -0.05188790438313154, 0.8818893204535954]
real rotation:
[0.8670861432556456, -0.427294988334106, 0.2560871201732064;
0.4024551137989086, 0.9038194629873437, 0.1453969040329854;
-0.2935838918455123, -0.02300806966752995, 0.9556563855167906]
Here is my computed camera matrix; the calibration error was pretty low (about 0.17):
[1699.001342509651, 0, 834.2587265398068;
0, 1696.645251354618, 607.1292618175946;
0, 0, 1]
Here are the results I get when trying to reproject a cube... Camera 0, the cube is axis-aligned, rotation and translation are (0, 0, 0). image http://imageshack.us/a/img802/5292/bildschirmfoto20130110u.png
and the other one, with the epilines of the points in the first image. image http://imageshack.us/a/img546/189/bildschirmfoto20130110uy.png
Accepted answer by user1993497
Please take a look at this link:
http://isit.u-clermont1.fr/~ab/Classes/DIKU-3DCV2/Handouts/Lecture16.pdf
Refer to page 2. There are two possibilities for R: the first is UWV^T and the second is UW^T V^T. You used the second; try the first.
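To illustrate the suggestion, here is a hedged NumPy sketch (not the asker's OpenCV code; the rotation and translation values are made up). It builds a synthetic essential matrix E = [t]x R, decomposes it with an SVD, and shows that the true rotation appears among the two candidates UWV^T and UW^T V^T:

```python
import numpy as np

def decompose_essential(E):
    """Return the two candidate rotations and the translation direction
    from an essential matrix via SVD."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1); E is only defined up to sign
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt     # first candidate:  U W  V^T
    R2 = U @ W.T @ Vt   # second candidate: U W^T V^T
    t = U[:, 2]         # translation direction, up to sign and scale
    return R1, R2, t

# Synthetic ground truth: a rotation about z and a unit translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0.,             0.,            1.]])
t_true = np.array([1., 2., 0.5])
t_true /= np.linalg.norm(t_true)
# Cross-product (skew-symmetric) matrix [t]x
tx = np.array([[0.,         -t_true[2],  t_true[1]],
               [t_true[2],   0.,        -t_true[0]],
               [-t_true[1],  t_true[0],  0.]])
E = tx @ R_true

R1, R2, t = decompose_essential(E)
ok = np.allclose(R1, R_true) or np.allclose(R2, R_true)
print(ok)  # True: the true rotation is one of the two candidates
```

In practice one resolves the fourfold ambiguity ({R1, R2} x {+t, -t}) by triangulating a point and keeping the configuration that places it in front of both cameras.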
Answered by MichalSzczep
The 8-point algorithm is the simplest method of computing the fundamental matrix, but if care is taken it can perform well. The key to good results is proper, careful normalization of the input data before constructing the equations to solve; many implementations do this. Pixel coordinates must be changed to camera coordinates, which you do in this line:
Mat E = A.t() * F * A;
However, this assumption is not accurate. If the camera calibration matrix K is known, then you may apply its inverse to the point x to obtain the point expressed in normalized coordinates:
X_norm = K.inv() * X_pix
where X_pix is a homogeneous pixel coordinate whose third component X_pix(2) = z is equal to 1.
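As a concrete illustration, a small NumPy sketch using the camera matrix from the question (the pixel coordinate itself is made up):

```python
import numpy as np

# Camera matrix K from the question's chessboard calibration
K = np.array([[1699.001342509651, 0.,                834.2587265398068],
              [0.,                1696.645251354618, 607.1292618175946],
              [0.,                0.,                1.]])

# A pixel coordinate in homogeneous form (illustrative value)
x_pix = np.array([900., 650., 1.])

# Normalized camera coordinates: x_norm = K^-1 * x_pix
x_norm = np.linalg.inv(K) @ x_pix
print(x_norm)  # third component stays 1
```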
In the case of the 8PA, a simple transformation of the points improves the stability of the results. The suggested normalization is a translation and scaling of each image so that the centroid of the reference points is at the origin of the coordinates and the RMS distance of the points from the origin is equal to sqrt(2). Note that it is recommended to enforce the singularity (rank-2) condition before denormalization.
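The normalization described above (often called Hartley normalization) can be sketched in NumPy as follows; the point coordinates are made up for illustration:

```python
import numpy as np

def normalize_points(pts):
    """Translate the centroid of the points to the origin and scale so
    the RMS distance from the origin is sqrt(2).
    Returns the normalized points and the 3x3 similarity transform T."""
    centroid = pts.mean(axis=0)
    rms = np.sqrt(np.mean(np.sum((pts - centroid) ** 2, axis=1)))
    s = np.sqrt(2) / rms
    T = np.array([[s,  0., -s * centroid[0]],
                  [0., s,  -s * centroid[1]],
                  [0., 0.,  1.]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (T @ pts_h.T).T[:, :2], T

# Illustrative pixel points
pts = np.array([[100., 200.], [300., 250.], [220., 400.], [500., 120.]])
norm_pts, T = normalize_points(pts)
print(norm_pts.mean(axis=0))                        # ~ (0, 0)
print(np.sqrt((norm_pts ** 2).sum(axis=1).mean()))  # ~ sqrt(2)
```

The same transform T is applied to the points of each image before building the 8-point equations, and the resulting F is denormalized afterwards as T2.T @ F_norm @ T1.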
Reference: check it if you are still interested.