Python OpenCV - Grayscale mode vs gray color conversion

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me) and link the original: http://stackoverflow.com/questions/37203970/

OpenCV - Grayscale mode vs gray color conversion

python, python-2.7, opencv

Asked by Gagandeep Singh

I am working with OpenCV (2.4.11) in Python (2.7) and was playing around with gray images. I found unusual behavior when loading an image in grayscale mode versus converting an image from BGR to GRAY. Here is my experimental code:

import cv2

path = 'some/path/to/color/image.jpg'

# Load color image (BGR) and convert to gray
img = cv2.imread(path)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Load in grayscale mode
img_gray_mode = cv2.imread(path, 0)

# diff = img_gray_mode - img_gray  (plain uint8 subtraction would wrap around)
diff = cv2.bitwise_xor(img_gray, img_gray_mode)

cv2.imshow('diff', diff)
cv2.waitKey()

When I viewed the difference image, I could see leftover pixels instead of a jet-black image. Can you suggest any reason? What is the correct way of working with gray images?

P.S. When I use both images with SIFT, the keypoints are different, which may lead to different outcomes, especially when working with poor-quality images.

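For concreteness, a minimal sketch of that keypoint comparison (assuming OpenCV 2.4.x, where SIFT is exposed as cv2.SIFT(); newer builds use cv2.SIFT_create() or cv2.xfeatures2d.SIFT_create(), and the path is a placeholder):

import cv2

path = 'some/path/to/color/image.jpg'

img_gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
img_gray_mode = cv2.imread(path, 0)

sift = cv2.SIFT()  # OpenCV 2.4.x API; cv2.SIFT_create() on newer versions

# Detect keypoints on both versions of the "same" grayscale image
kp_converted, _ = sift.detectAndCompute(img_gray, None)
kp_gray_mode, _ = sift.detectAndCompute(img_gray_mode, None)

print('keypoints (BGR2GRAY conversion): %d' % len(kp_converted))
print('keypoints (grayscale mode):      %d' % len(kp_gray_mode))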

Answered by bakkal

Note: This is not a duplicate, because the OP is aware that the image from cv2.imread is in BGR format (unlike the suggested duplicate question, which assumed it was RGB, so the answers there only address that issue).

To illustrate, I've opened up this same color JPEG image:

[image: sample color JPEG]

once using the conversion

img = cv2.imread(path)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

and another time by loading it in grayscale mode

img_gray_mode = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

As you've documented, the diff between the two images is not perfectly 0; I can see diff pixels towards the left and the bottom:

[image: diff image with scattered non-zero pixels towards the left and bottom]

I've also summed up the diff to see its magnitude:

import numpy as np
np.sum(diff)
# I got 6143, on a 494 x 750 image

I tried all cv2.imread() modes

Among all the IMREAD_ modes for cv2.imread(), only IMREAD_COLOR and IMREAD_ANYCOLOR can be converted using COLOR_BGR2GRAY, and both of them gave me the same diff against the image opened in IMREAD_GRAYSCALE.

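A rough sketch of that experiment (only the flags that decode to a 3-channel BGR array get converted; the exact sums depend on your image, so treat the numbers as illustrative):

import cv2

path = 'some/path/to/color/image.jpg'
reference = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

for name in ('IMREAD_COLOR', 'IMREAD_ANYCOLOR'):
    img = cv2.imread(path, getattr(cv2, name))
    if img is not None and img.ndim == 3:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # absolute per-pixel difference against the grayscale-mode load
        print(name, cv2.absdiff(gray, reference).sum())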

The difference doesn't seem that big: 6143 spread over a 494 x 750 image (roughly 370,000 pixels) is well under one intensity level per pixel on average. My guess is it comes from differences in the numeric calculations of the two methods (loading as grayscale vs. converting to grayscale).

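To see where such small discrepancies can creep in, here is a sketch that applies the BT.601 luma weights (Y = 0.299 R + 0.587 G + 0.114 B) by hand in floating point and compares the result with cv2.cvtColor, which may round differently internally; off-by-one differences of this kind are the likely scale of the effect (the path is a placeholder):

import cv2
import numpy as np

path = 'some/path/to/color/image.jpg'
img = cv2.imread(path)  # uint8 BGR

# Apply the BT.601 luma weights by hand (channel order is B, G, R in OpenCV)
b = img[..., 0].astype(np.float64)
g = img[..., 1].astype(np.float64)
r = img[..., 2].astype(np.float64)
manual = np.rint(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

opencv = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Even these two typically disagree by at most a level or so per pixel
print(np.max(cv2.absdiff(manual, opencv)))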

Naturally, what you want to avoid is fine-tuning your code on a particular version of the image only to find out it is suboptimal for images coming from a different source.

In brief, let's not mix the versions and types in the processing pipeline.

So I'd keep the image sources homogeneous: e.g. if you're capturing images from a video camera in BGR, then I'd use BGR as the source and do the BGR-to-grayscale conversion with cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).

Vice versa, if my ultimate source is grayscale, then I'd open the files and the video capture in grayscale: cv2.imread(path, cv2.IMREAD_GRAYSCALE).

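A small sketch of the first approach (everything treated as a BGR source with one shared conversion call; the helper names load_gray and frame_gray are made up here for illustration, not from the original answer):

import cv2

def load_gray(path):
    # Decode as BGR, then convert, so file input goes through
    # exactly the same conversion as camera frames below
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img is not None else None

def frame_gray(capture):
    # Grab one BGR frame from a cv2.VideoCapture and convert it the same way
    ok, frame = capture.read()
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None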