iOS: How to turn a CVPixelBuffer into a UIImage?
Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must likewise follow the CC BY-SA license, include the original URL and author information, and attribute it to the original authors (not me): StackOverFlow
原文地址: http://stackoverflow.com/questions/8072208/
How to turn a CVPixelBuffer into a UIImage?
Asked by mahboudz
I'm having some problems getting a UIImage from a CVPixelBuffer. This is what I am trying:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
if (attachments)
    CFRelease(attachments);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];
    self.lockedView.image = image;
    [image release];
}
[ciImage release];
height and width are both correctly set to the resolution of the camera. image is created, but it seems to be black (or maybe transparent?). I can't quite understand where the problem is. Any ideas would be appreciated.
Answered by Tommy
First of all, the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view, if that's where the data is coming from and you've no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.
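As a sketch of that approach (modern Swift; `session` is assumed to be an already-configured AVCaptureSession, which is not shown here):

```swift
import AVFoundation
import UIKit

// Attach a preview layer to a view. The layer pulls frames from the
// session by itself; no manual frame pushing is required.
func attachPreview(to view: UIView, session: AVCaptureSession) {
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = view.bounds
    previewLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(previewLayer)
}
```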
I have to admit to lacking confidence about the central question. There's a semantic difference between a CIImage and the other two types of image: a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.
UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.
I've had success just dodging around the issue with:
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
                 createCGImage:ciImage
                 fromRect:CGRectMake(0, 0,
                          CVPixelBufferGetWidth(pixelBuffer),
                          CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);
That gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
Answered by Andrey M.
Try this one in Swift.
Swift 4.2:
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
Swift 5:
import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
Note: This only works for RGB pixel buffers, not for grayscale.
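Usage is then a one-liner; in this hypothetical call site, `pixelBuffer` comes from your capture callback and `imageView` is an assumed UIImageView:

```swift
// Assumes the UIImage(pixelBuffer:) initializer from the extension above.
if let image = UIImage(pixelBuffer: pixelBuffer) {
    imageView.image = image
}
```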
Answered by Jonathan Cichon
Another way to get a UIImage. Performs ~10 times faster, at least in my case:
size_t w = CVPixelBufferGetWidth(pixelBuffer);
size_t h = CVPixelBufferGetHeight(pixelBuffer);
size_t r = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t bytesPerPixel = r / w;

// Lock the base address before reading the pixel data.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));
CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char *data = CGBitmapContextGetData(c);
if (data != NULL) {
    // Note: the offset math assumes the buffer has no row padding (r == w * bytesPerPixel).
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            size_t offset = bytesPerPixel * ((w * y) + x);
            data[offset]     = buffer[offset];     // R
            data[offset + 1] = buffer[offset + 1]; // G
            data[offset + 2] = buffer[offset + 2]; // B
            data[offset + 3] = buffer[offset + 3]; // A
        }
    }
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
Answered by joe
Unless your image data is in some different format that requires swizzling or conversion, I would recommend not incrementing anything... just smack the data into your context memory area with memcpy, as in:
//not here... unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));
CGContextRef c = UIGraphicsGetCurrentContext();
void *ctxData = CGBitmapContextGetData(c);

// MUST READ-WRITE LOCK THE PIXEL BUFFER!!!!
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer);
memcpy(ctxData, pxData, 4 * w * h); // assumes both buffers are tightly packed (bytesPerRow == 4 * w)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
... and so on...
Answered by Vlad
The previous methods led to a CG Raster Data leak for me. This method of conversion did not leak:
@autoreleasepool {
    CGImageRef cgImage = NULL;
    OSStatus res = CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    if (res == noErr) {
        UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
    }
    CGImageRelease(cgImage);
}
static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
    if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
    width = CVPixelBufferGetWidth( pixelBuffer );
    height = CVPixelBufferGetHeight( pixelBuffer );

    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );

    colorspace = CGColorSpaceCreateDeviceRGB();

    CVPixelBufferRetain( pixelBuffer );
    provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer );
    image = CGImageCreate( width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault );

    if ( err && image ) {
        CGImageRelease( image );
        image = NULL;
    }
    if ( provider ) CGDataProviderRelease( provider );
    if ( colorspace ) CGColorSpaceRelease( colorspace );
    *imageOut = image;
    return err;
}

static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
    CVPixelBufferRelease( pixelBuffer );
}
Answered by ethamine
A modern solution would be:
let image = UIImage(ciImage: CIImage(cvPixelBuffer: YOUR_BUFFER))