'module' object has no attribute 'drawMatches' opencv python

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must follow the same CC BY-SA license and attribute it to the original authors (not me), citing the original: http://stackoverflow.com/questions/20259025/


'module' object has no attribute 'drawMatches' opencv python

Tags: python, image, opencv, image-processing, computer-vision

Asked by Javed

I am just doing an example of feature detection in OpenCV. The example is shown below, and it is giving me the following error:

'module' object has no attribute 'drawMatches'

I have checked the OpenCV Docs and am not sure why I'm getting this error. Does anyone know why?

import numpy as np
import cv2
import matplotlib.pyplot as plt

img1 = cv2.imread('box.png',0)          # queryImage
img2 = cv2.imread('box_in_scene.png',0) # trainImage

# Initiate ORB detector
orb = cv2.ORB()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Draw first 10 matches.
img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)

plt.imshow(img3),plt.show()

Error:

Traceback (most recent call last):
  File "match.py", line 22, in <module>
    img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
AttributeError: 'module' object has no attribute 'drawMatches'

Accepted answer by Mailerdaimon

The drawMatches function is not part of the Python interface.
As you can see in the docs, it is only defined for C++ at the moment.

Excerpt from the docs:

 C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<DMatch>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<char>& matchesMask=vector<char>(), int flags=DrawMatchesFlags::DEFAULT )
 C++: void drawMatches(const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch>>& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char>>& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )

If the function had a Python interface, you would find something like this:

 Python: cv2.drawMatches(img1, keypoints1, [...]) 
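
You can quickly check whether your own build exposes the binding (a minimal sketch, not part of the original answer):

import cv2

# Inspect the installed version and look the attribute up at runtime;
# hasattr returns False on builds that lack the Python binding.
print(cv2.__version__)
print(hasattr(cv2, 'drawMatches'))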

EDIT

There actually was a commit that introduced this function 5 months ago. However, it is not (yet) in the official documentation.
Make sure you are using the newest OpenCV version (2.4.7). For the sake of completeness, the function's interface for OpenCV 3.0.0 will look like this:

cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
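
On a build that does expose the binding, note that in practice you may need to supply the outImg argument explicitly; passing None lets OpenCV allocate it. A usage sketch, assuming such a build:

# Assuming a build where cv2.drawMatches is exposed (2.4.7+ / 3.0.0):
# pass None for outImg so OpenCV allocates the output image itself.
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)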

Answer by PhilWilliammee

I know this question has an accepted answer that is correct, but if you are using OpenCV 2.4.8 and not 3.0(-dev), a workaround could be to use some functions from the included samples, found in opencv\sources\samples\python2\find_obj:

import cv2
from find_obj import filter_matches,explore_match

img1 = cv2.imread('../c/box.png',0)          # queryImage
img2 = cv2.imread('../c/box_in_scene.png',0) # trainImage

# Initiate ORB detector
orb = cv2.ORB()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

# create BFMatcher object
# (crossCheck is left off here because it does not combine with knnMatch)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

matches = bf.knnMatch(des1, trainDescriptors=des2, k=2)
p1, p2, kp_pairs = filter_matches(kp1, kp2, matches)
explore_match('find_obj', img1, img2, kp_pairs)  # cv2 shows the image

cv2.waitKey()
cv2.destroyAllWindows()

This is the output image:

[output image: matched features shown by explore_match]

Answer by rayryeng

I am late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X, and the drawMatches function doesn't exist in my distribution. I've also tried the second approach with find_obj and that didn't work for me either. With that, I decided to write my own implementation that mimics drawMatches to the best of my ability, and this is what I've produced.

I've provided my own images, where one is of a cameraman and the other is the same image rotated by 55 degrees counterclockwise.

The basics of what I wrote: I allocate an output RGB image where the number of rows is the maximum of the two images' row counts, to accommodate placing both images in the output, and the number of columns is simply the sum of the two images' columns. Be advised that I assume both images are grayscale.

I place each image in its corresponding spot, then run through a loop over all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw circles at each of the detected locations, then draw a line connecting these circles together.

Bear in mind that the detected keypoint in the second image is with respect to its own coordinate system. If you want to place this in the final output image, you need to offset the column coordinate by the number of columns from the first image so that the column coordinate is with respect to the coordinate system of the output image.

Without further ado:

import numpy as np
import cv2

def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated 
    keypoints, as well as a list of DMatch data structure (matches) 
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint 
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    # Create the output image
    # The rows of the output are the largest between the two images
    # and the columns are simply the sum of the two together
    # The intent is to make this a colour image, so make this 3 channels
    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)   
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255,0,0), 1)


    # Show the image
    cv2.imshow('Matched Features', out)
    cv2.waitKey(0)
    cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out


To illustrate that this works, here are the two images that I used:

Cameraman Image

Rotated Cameraman Image

I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity as this is a binary descriptor. As such:

import numpy as np
import cv2

img1 = cv2.imread('cameraman.png', 0) # Original image - ensure grayscale
img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale

# Create ORB detector with 1000 keypoints with a scaling pyramid factor
# of 1.2
orb = cv2.ORB(1000, 1.2)

# Detect keypoints of original image
(kp1,des1) = orb.detectAndCompute(img1, None)

# Detect keypoints of rotated image
(kp2,des2) = orb.detectAndCompute(img2, None)

# Create matcher
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Do matching
matches = bf.match(des1,des2)

# Sort the matches based on distance.  Least distance
# is better
matches = sorted(matches, key=lambda val: val.distance)

# Show only the top 10 matches - also save a copy for use later
out = drawMatches(img1, kp1, img2, kp2, matches[:10])


This is the image I get:

Matched Features



To use with knnMatch from cv2.BFMatcher

I'd like to note that the above code only works if you assume that the matches appear in a 1D list. However, if you decide to use the knnMatch method from cv2.BFMatcher, for example, what is returned is a list of lists. Specifically, given the descriptors in img1 called des1 and the descriptors in img2 called des2, each element in the list returned from knnMatch is another list of the k matches from des2 which are the closest to each descriptor in des1. Therefore, the first element from the output of knnMatch is a list of the k matches from des2 which were the closest to the first descriptor found in des1. The second element from the output of knnMatch is a list of the k matches from des2 which were the closest to the second descriptor found in des1, and so on.

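To illustrate the shape of what knnMatch returns, here is a small sketch (not from the original answer; it assumes des1, des2 and a BFMatcher created without crossCheck, as in the code further below):

# Each element of `matches` is itself a list of k DMatch objects,
# ordered from smallest to largest distance.
matches = bf.knnMatch(des1, des2, k=2)
m, n = matches[0]              # the two best candidates for the first descriptor in des1
print(m.trainIdx, m.distance)  # best match: index into des2 and its distance
print(n.trainIdx, n.distance)  # second-best match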
To make the most sense of knnMatch, you must limit the total number of neighbours to match to k=2. The reason is that you want to use at least two matched points to verify the quality of the match, and if the quality is good enough, you'll want to use these to draw your matches and show them on the screen. You can use a very simple ratio test (credit goes to David Lowe) to ensure that the distance from the first matched point from des2 to the descriptor in des1 is well below the distance from the second matched point from des2. Therefore, to turn what is returned from knnMatch into what is required by the code I wrote above, iterate through the matches, apply the ratio test and check if it passes. If it does, add the first matched keypoint to a new list.

Assuming that you created all of the variables like you did before declaring the BFMatcher instance, you'd now do this to adapt the knnMatch method for use with drawMatches:

# Create matcher
# (crossCheck is disabled: combined with knnMatch and k=2 it can return
# fewer than two matches per descriptor, which would break the loop below)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

# Perform KNN matching
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
       # Add first matched keypoint to list
       # if ratio test passes
       good.append(m)

# Or do a list comprehension
#good = [m for (m,n) in matches if m.distance < 0.75*n.distance]

# Now perform drawMatches
out = drawMatches(img1, kp1, img2, kp2, good)

I want to attribute the above modifications to user @ryanmeasel; the answer where these modifications were found is in his post: OpenCV Python : No drawMatchesKnn function.

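For completeness: OpenCV 3.0.0 and later also expose a built-in cv2.drawMatchesKnn that consumes the list-of-lists output of knnMatch directly, so the custom function above is not needed there. A sketch, assuming an OpenCV 3+ build and the variables from the code above:

# Assuming OpenCV >= 3.0.0: drawMatchesKnn expects a list of lists of DMatch,
# so wrap each match that survived the ratio test in its own list.
good_knn = [[m] for m in good]
out = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good_knn, None, flags=2)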