iOS - Scale and crop CMSampleBufferRef/CVImageBufferRef
Original question: http://stackoverflow.com/questions/8493583/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow
Asked by vodkhang
I am using AVFoundation and getting the sample buffer from AVCaptureVideoDataOutput. I can write it directly to the videoWriter with:
- (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer {
    CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.videoWriter.status != AVAssetWriterStatusWriting) {
        [self.videoWriter startWriting];
        [self.videoWriter startSessionAtSourceTime:lastSampleTime];
    }
    [self.videoWriterInput appendSampleBuffer:sampleBuffer];
}
What I want to do now is crop and scale the image inside the CMSampleBufferRef without converting it into a UIImage or CGImageRef, because that hurts performance.
Answered by Sten
If you use vImage you can work directly on the buffer data without converting it to any image format.
outImg contains the cropped and scaled image data. The ratio of outWidth to cropWidth sets the scaling.
#import <Accelerate/Accelerate.h>

int cropX0, cropY0, cropHeight, cropWidth, outWidth, outHeight;

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

// Describe the crop rectangle inside the source buffer (4 bytes per pixel).
vImage_Buffer inBuff;
inBuff.height = cropHeight;
inBuff.width = cropWidth;
inBuff.rowBytes = bytesPerRow; // the full source row length, not the cropped length
size_t startpos = cropY0 * bytesPerRow + 4 * cropX0;
inBuff.data = (unsigned char *)baseAddress + startpos;

// Destination buffer at the output size; outWidth/cropWidth is the scale factor.
unsigned char *outImg = (unsigned char *)malloc(4 * outWidth * outHeight);
vImage_Buffer outBuff = {outImg, outHeight, outWidth, 4 * outWidth};

vImage_Error err = vImageScale_ARGB8888(&inBuff, &outBuff, NULL, 0);
if (err != kvImageNoError) NSLog(@"error %ld", err);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
So setting cropX0 = 0 and cropY0 = 0, with cropWidth and cropHeight equal to the original size, means no cropping (the whole original image is used). Setting outWidth = cropWidth and outHeight = cropHeight results in no scaling. Note that inBuff.rowBytes should always be the length of a full source row, not the cropped length.
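As a concrete illustration, here is a minimal, self-contained Swift sketch of that parameter arithmetic (the 1280x720 BGRA source size is an assumption, and a dummy allocation stands in for the locked pixel buffer): center-crop a 720x720 square and let vImageScale_ARGB8888 scale it to 320x320.

import Accelerate

// Assumed source dimensions; in practice these come from CVPixelBufferGetWidth etc.
let srcWidth = 1280, srcHeight = 720
let bytesPerRow = srcWidth * 4
let src = UnsafeMutableRawPointer.allocate(byteCount: srcHeight * bytesPerRow,
                                           alignment: 16) // stands in for baseAddress

let cropWidth = 720, cropHeight = 720
let cropX0 = (srcWidth - cropWidth) / 2    // 280: centers the crop horizontally
let cropY0 = (srcHeight - cropHeight) / 2  // 0: the source is already 720 tall
let outWidth = 320, outHeight = 320        // scale factor = 320/720

var inBuff = vImage_Buffer(data: src + cropY0 * bytesPerRow + 4 * cropX0,
                           height: vImagePixelCount(cropHeight),
                           width: vImagePixelCount(cropWidth),
                           rowBytes: bytesPerRow) // full source row length

let outBytesPerRow = outWidth * 4
var outBuff = vImage_Buffer(data: UnsafeMutableRawPointer.allocate(byteCount: outHeight * outBytesPerRow,
                                                                   alignment: 16),
                            height: vImagePixelCount(outHeight),
                            width: vImagePixelCount(outWidth),
                            rowBytes: outBytesPerRow)

let err = vImageScale_ARGB8888(&inBuff, &outBuff, nil, vImage_Flags(kvImageNoFlags))
assert(err == kvImageNoError)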
Answered by Cliff
You might consider using Core Image (iOS 5.0+).
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)
                                           options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNull null], kCIImageColorSpace, nil]];
ciImage = [[ciImage imageByApplyingTransform:myScaleTransform] imageByCroppingToRect:myRect];
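To get the result back into a pixel buffer for a writer, a minimal Swift sketch might look like this (the names ciContext, scale, rect, and outputBuffer are assumptions, not part of the answer above, and it uses the modern Swift names transformed(by:)/cropped(to:)):

import CoreImage
import CoreMedia

// Create the context once and reuse it; creating a CIContext is expensive.
let ciContext = CIContext(options: nil)

func scaleAndCrop(_ sampleBuffer: CMSampleBuffer,
                  into outputBuffer: CVPixelBuffer,
                  scale: CGAffineTransform,
                  rect: CGRect) {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        .transformed(by: scale)
        .cropped(to: rect)
    // Render into a preallocated buffer (e.g. from a CVPixelBufferPool),
    // avoiding any detour through UIImage or CGImage.
    ciContext.render(ciImage, to: outputBuffer)
}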
Answered by yuji
Note: I didn't notice that the original question also requested scaling. But anyway, for those who simply need to crop a CMSampleBuffer, here's the solution.
The buffer is simply an array of pixels, so you can actually process it directly without using vImage. The code is written in Swift, but it's easy to find the Objective-C equivalent.
First, make sure your CMSampleBuffer is in BGRA format. If not, the preset you are using is probably YUV, which will break the bytes-per-row value used later.
dataOutput = AVCaptureVideoDataOutput()
dataOutput.videoSettings = [
    String(kCVPixelBufferPixelFormatTypeKey): NSNumber(value: kCVPixelFormatType_32BGRA)
]
Then, when you get the sample buffer:
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)

let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)

let cropWidth = 640
let cropHeight = 640
let colorSpace = CGColorSpaceCreateDeviceRGB()

// Creating a context smaller than the source, while keeping the source's
// bytesPerRow, effectively crops the top-left cropWidth x cropHeight region:
// each context row still starts bytesPerRow bytes after the previous one.
let context = CGContext(data: baseAddress,
                        width: cropWidth,
                        height: cropHeight,
                        bitsPerComponent: 8,
                        bytesPerRow: bytesPerRow,
                        space: colorSpace,
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)

// Now the cropped image is inside the context. You can convert it back to a
// CVPixelBuffer using CVPixelBufferCreateWithBytes if you want.
CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)

// Create an image from the context.
let cgImage: CGImage = context!.makeImage()!
let image = UIImage(cgImage: cgImage)
If you want to crop from a specific position, add the following code:
// Calculate the start position (the byte offset of the crop origin; BGRA = 4 bytes per pixel).
let bytesPerPixel = 4
let startPoint = [ "x": 10, "y": 10 ]
let startAddress = baseAddress! + startPoint["y"]! * bytesPerRow + startPoint["x"]! * bytesPerPixel
and change baseAddress in CGContext() to startAddress. Make sure not to exceed the original image's width and height.
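If you then want the cropped contents back as a CVPixelBuffer, as mentioned in the comments above, a minimal sketch of that conversion might be (assuming the BGRA context created earlier):

// CVPixelBufferCreateWithBytes does not copy the pixels, so the underlying
// sample buffer must stay alive (and its base address locked) for as long
// as this pixel buffer is in use.
var croppedBuffer: CVPixelBuffer?
let status = CVPixelBufferCreateWithBytes(
    kCFAllocatorDefault,
    cropWidth,
    cropHeight,
    kCVPixelFormatType_32BGRA,
    context!.data!,    // first byte of the cropped region
    bytesPerRow,       // still the full source row length
    nil, nil,          // no release callback
    nil,
    &croppedBuffer)
assert(status == kCVReturnSuccess)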
Answered by Steve McFarlin
For scaling you can have AVFoundation do this for you. See my recent post here. Setting the value for the AVVideoWidth/AVVideoHeight keys will scale the images if they are not the same dimensions. Take a look at the properties here. As for cropping, I am not sure if you can have AVFoundation do this for you. You may have to resort to using OpenGL or CoreImage. There are a couple of good links in the top post of this SO question.
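For illustration, a minimal Swift sketch of that approach (the H.264 codec choice and the 640x480 target are assumptions): frames appended to the writer input are scaled to the dimensions given in its output settings.

import AVFoundation

// AVAssetWriter scales appended frames to the dimensions given here
// (assumed codec and target size, not from the answer above).
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 640,
    AVVideoHeightKey: 480,
]
let videoWriterInput = AVAssetWriterInput(mediaType: .video,
                                          outputSettings: outputSettings)
videoWriterInput.expectsMediaDataInRealTime = true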
Answered by wu qiuhao
Try this in Swift 3:
// Presumably declared in an extension on CMSampleBuffer, since it uses `self`.
extension CMSampleBuffer {
    // Center-crops the buffer to destSize. Note that this crops only; it does
    // not scale, despite the name.
    func resize(_ destSize: CGSize) -> CVPixelBuffer? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(self) else { return nil }

        // Lock the image buffer and get information about it.
        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CGFloat(CVPixelBufferGetBytesPerRow(imageBuffer))
        let height = CGFloat(CVPixelBufferGetHeight(imageBuffer))
        let width = CGFloat(CVPixelBufferGetWidth(imageBuffer))

        var pixelBuffer: CVPixelBuffer?
        let options = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                       kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]

        // Byte offset of the top-left corner of a centered crop rectangle.
        // The left margin is ((width - destSize.width) / 2) pixels * 4 bytes
        // per pixel, which simplifies to (width - destSize.width) * 2 bytes.
        let topMargin = (height - destSize.height) / CGFloat(2)
        let leftMargin = (width - destSize.width) * CGFloat(2)
        let baseAddressStart = Int(bytesPerRow * topMargin + leftMargin)
        let addressPoint = baseAddress!.assumingMemoryBound(to: UInt8.self)

        // Wrap the cropped region in a new CVPixelBuffer. This does not copy
        // the pixels; it still points into the original buffer's memory.
        let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                  Int(destSize.width),
                                                  Int(destSize.height),
                                                  kCVPixelFormatType_32BGRA,
                                                  addressPoint + baseAddressStart,
                                                  Int(bytesPerRow),
                                                  nil, nil,
                                                  options as CFDictionary,
                                                  &pixelBuffer)

        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        if status != kCVReturnSuccess {
            print(status)
            return nil
        }
        return pixelBuffer
    }
}
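Usage might look like this in a capture delegate (a sketch using the modern Swift delegate signature; the 640x640 size is an assumption):

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Center-crop each incoming frame to 640x640 before further processing.
    guard let cropped = sampleBuffer.resize(CGSize(width: 640, height: 640)) else { return }
    // ... append `cropped` via an AVAssetWriterInputPixelBufferAdaptor, etc.
}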