Python/OpenCV: Computing a depth map from stereo images
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must follow the same license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/27726306/
Asked by jwdink
I have two stereo images that I'd like to use to compute a depth map. While I unfortunately do not know C/C++, I do know python-- so when I found this tutorial, I was optimistic.
Unfortunately, the tutorial appears to be somewhat out of date. It not only needs to be tweaked to run at all (renaming 'createStereoBM' to 'StereoBM') but when it does run, it doesn't give a good result, even on the example stereo-images that were used in the tutorial itself.
Here's an example:




import numpy as np
import cv2
from matplotlib import pyplot as plt

# Load the stereo pair as grayscale
imgL = cv2.imread('Yeuna9x.png', 0)
imgR = cv2.imread('SuXT483.png', 0)

# OpenCV 2.4-style block matcher: (preset, ndisparities, SADWindowSize)
stereo = cv2.StereoBM(1, 16, 15)
disparity = stereo.compute(imgL, imgR)

plt.imshow(disparity, 'gray')
plt.show()
The result:


This looks very different from what the author of the tutorial achieves:

(source: opencv.org)

Tweaking the parameters does not improve matters. All documentation I've been able to find is for the original C-version of openCV code, not the python-library-equivalent. I unfortunately haven't been able to use this to improve things.
Any help would be appreciated!
Answered by haruka
The camera is translated vertically instead of horizontally. Rotate the images 90 degrees, then try. (Prove it to yourself by rotating the screen. I just picked up my laptop and turned it on its edge.)
You mention different software; perhaps there's a row-major/column-major mix-up between the original and pyOpenCV.
Answered by samkhan13
It is possible that you need to keep adjusting the parameters of the block matching algorithm.
Have a look at this blog article: https://erget.wordpress.com/2014/03/13/building-an-interactive-gui-with-opencv/
The article's author has composed a set of classes to make the process of calibrating the cameras more streamlined than the OpenCV tutorial. These classes are available as a PyPI package: https://github.com/erget/StereoVision
Hope this helps :)
Answered by will
You have the images the wrong way around.
Look at the images: the tin behind the lamp lets you work out the camera locations of the two images.
Just change this:
# v  (note the swapped L/R filenames)
imgR = cv2.imread('Yeuna9x.png', 0)
imgL = cv2.imread('SuXT483.png', 0)
# ^
If you look at the image in the tutorial which they say is the left frame, it is the same as your right one.
Here's my result after the change.
这是我更改后的结果。



