iOS: Make a UIImage from a CMSampleBuffer
Note: this page is a translation of a popular StackOverflow question. It is provided under the CC BY-SA 4.0 license; if you reuse or share it, you must follow the same license and attribute it to the original authors (not me) at StackOverflow.
Original question: http://stackoverflow.com/questions/15726761/
Make a UIImage from a CMSampleBuffer
Asked by mrplants
This is not the same as the countless questions about converting a CMSampleBuffer to a UIImage. I'm simply wondering why I can't convert it like this:
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage * imageFromCoreImageLibrary = [CIImage imageWithCVPixelBuffer: pixelBuffer];
UIImage * imageForUI = [UIImage imageWithCIImage: imageFromCoreImageLibrary];
It seems a lot simpler because it works for YCbCr color spaces, as well as RGBA and others. Is there something wrong with that code?
Answered by Alexander Volkov
For JPEG images:
Swift 4:
let buff: CMSampleBuffer ... // here you have a CMSampleBuffer
if let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buff, previewPhotoSampleBuffer: nil) {
let image = UIImage(data: imageData) // Here you have UIImage
}
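Note that jpegPhotoDataRepresentation(forJPEGSampleBuffer:previewPhotoSampleBuffer:) only succeeds when the sample buffer actually contains JPEG data (and it was deprecated in iOS 11). A minimal sketch, my own addition rather than part of the original answer, that tries the JPEG path first and falls back to the CIImage route used in other answers on this page (the helper name is hypothetical):

import AVFoundation
import CoreImage
import UIKit

// Hypothetical helper: JPEG path first, CIImage fallback for uncompressed buffers.
func uiImage(fromPhotoSampleBuffer buff: CMSampleBuffer) -> UIImage? {
    if let imageData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: buff,
                                                                        previewPhotoSampleBuffer: nil),
       let image = UIImage(data: imageData) {
        return image // the buffer contained JPEG data
    }
    // Fallback: wrap the raw pixel buffer (e.g. BGRA) in a CIImage-backed UIImage.
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(buff) else { return nil }
    return UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer))
}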
Answered by Popigny
With Swift 3 and iOS 10 AVCapturePhotoOutput. Imports:
import UIKit
import CoreData
import CoreMotion
import AVFoundation
Create a UIView for the preview and link it to the main class:
@IBOutlet var preview: UIView!
Create this to set up the camera session (kCVPixelFormatType_32BGRA is important!):
lazy var cameraSession: AVCaptureSession = {
    let s = AVCaptureSession()
    s.sessionPreset = AVCaptureSessionPresetHigh
    return s
}()

lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let previewl: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.cameraSession)
    previewl.frame = self.preview.bounds
    return previewl
}()
func setupCameraSession() {
    let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) as AVCaptureDevice

    do {
        let deviceInput = try AVCaptureDeviceInput(device: captureDevice)

        cameraSession.beginConfiguration()

        if (cameraSession.canAddInput(deviceInput) == true) {
            cameraSession.addInput(deviceInput)
        }

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
        dataOutput.alwaysDiscardsLateVideoFrames = true

        if (cameraSession.canAddOutput(dataOutput) == true) {
            cameraSession.addOutput(dataOutput)
        }

        cameraSession.commitConfiguration()

        let queue = DispatchQueue(label: "fr.popigny.videoQueue", attributes: [])
        dataOutput.setSampleBufferDelegate(self, queue: queue)
    }
    catch let error as NSError {
        NSLog("\(error), \(error.localizedDescription)")
    }
}
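For the setSampleBufferDelegate(self, queue:) call above to compile, the containing class must also declare conformance to the delegate protocol. A sketch of the assumed class declaration (the class name is an assumption; the original answer only calls it the "main class"):

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    // @IBOutlet var preview: UIView!, the lazy session/layer properties,
    // and the methods from this answer go here
}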
In viewWillAppear:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    setupCameraSession()
}
In viewDidAppear:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    preview.layer.addSublayer(previewLayer)
    cameraSession.startRunning()
}
Create a function to capture the output:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Here you collect each frame and process it
    let ts: CMTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    self.mycapturedimage = imageFromSampleBuffer(sampleBuffer: sampleBuffer)
}
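A usage sketch (my addition, relying on the imageFromSampleBuffer function defined below): the sample-buffer delegate runs on the background queue created above, so hand the converted frame back to the main queue before it touches UIKit:

func display(_ sampleBuffer: CMSampleBuffer, in imageView: UIImageView) {
    let image = imageFromSampleBuffer(sampleBuffer: sampleBuffer)
    DispatchQueue.main.async {
        imageView.image = image // UIKit must only be used on the main queue
    }
}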
Here is the code that converts a kCVPixelFormatType_32BGRA CMSampleBuffer to a UIImage. The key thing is the bitmapInfo, which must correspond to 32BGRA: byte order 32 little with premultiplied-first alpha info:
func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

    // Get the base address of the pixel buffer
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!)

    // Get the number of bytes per row for the pixel buffer
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)

    // Get the pixel buffer width and height
    let width = CVPixelBufferGetWidth(imageBuffer!)
    let height = CVPixelBufferGetHeight(imageBuffer!)

    // Create a device-dependent RGB color space
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Create a bitmap graphics context with the sample buffer data
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
    //let bitmapInfo: UInt32 = CGBitmapInfo.alphaInfoMask.rawValue
    let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)

    // Create a Quartz image from the pixel data in the bitmap graphics context
    let quartzImage = context?.makeImage()

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

    // Create an image object from the Quartz image
    let image = UIImage(cgImage: quartzImage!)

    return image
}
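The force-unwraps above will crash if a sample buffer ever arrives without an image buffer. An optional-returning variant (my own adaptation of the same code, not part of the original answer):

func imageFromSampleBufferSafe(sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

    CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly) }

    // Same bitmapInfo as above: 32-bit little-endian, premultiplied-first alpha.
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(imageBuffer),
                                  width: CVPixelBufferGetWidth(imageBuffer),
                                  height: CVPixelBufferGetHeight(imageBuffer),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(imageBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo),
          let quartzImage = context.makeImage() else { return nil }

    return UIImage(cgImage: quartzImage)
}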
Answered by Dipen Panchasara
Use the following code to convert an image from the pixel buffer. Option 1:
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *context = [CIContext contextWithOptions:nil];

CGImageRef myImage = [context createCGImage:ciImage
                                   fromRect:CGRectMake(0, 0,
                                                       CVPixelBufferGetWidth(pixelBuffer),
                                                       CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:myImage];
CGImageRelease(myImage); // the CGImageRef returned by createCGImage:fromRect: must be released by the caller
Option 2:
int w = CVPixelBufferGetWidth(pixelBuffer);
int h = CVPixelBufferGetHeight(pixelBuffer);
int r = CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r / w;

// Lock the pixel buffer before reading its base address
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();
unsigned char *data = CGBitmapContextGetData(c);
if (data != NULL) {
    int maxY = h;
    for (int y = 0; y < maxY; y++) {
        for (int x = 0; x < w; x++) {
            int offset = bytesPerPixel * ((w * y) + x);
            data[offset]     = buffer[offset];     // R
            data[offset + 1] = buffer[offset + 1]; // G
            data[offset + 2] = buffer[offset + 2]; // B
            data[offset + 3] = buffer[offset + 3]; // A
        }
    }
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
Answered by CodeBender
I wrote a simple extension for use with Swift 4.x/3.x to produce a UIImage from a CMSampleBuffer.
This also handles scaling and orientation, though you can just accept default values if they work for you.
import UIKit
import AVFoundation

extension CMSampleBuffer {
    func image(orientation: UIImageOrientation = .up,
               scale: CGFloat = 1.0) -> UIImage? {
        if let buffer = CMSampleBufferGetImageBuffer(self) {
            let ciImage = CIImage(cvPixelBuffer: buffer)

            return UIImage(ciImage: ciImage,
                           scale: scale,
                           orientation: orientation)
        }

        return nil
    }
}
- If it can obtain buffer data from the image, it proceeds; otherwise nil is returned
- Using the buffer, it initializes a CIImage
- It returns a UIImage initialized with the ciImage value, along with the scale & orientation values. If none are provided, the defaults of .up and 1.0 are used respectively
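A usage sketch, my own addition: inside an AVCaptureVideoDataOutputSampleBufferDelegate callback (Swift 4 signature) you could convert and display each frame with this extension; the imageView property and the .right orientation are assumptions on my part:

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // .right is a common choice for the back camera in portrait; adjust as needed.
    guard let frame = sampleBuffer.image(orientation: .right, scale: UIScreen.main.scale) else { return }
    DispatchQueue.main.async { self.imageView.image = frame }
}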
Answered by matt
This is going to come up a lot in connection with the iOS 10 AVCapturePhotoOutput class. Suppose the user wants to snap a photo and you call capturePhoto(with:delegate:), and your settings include a request for a preview image. That is a splendidly efficient way to get a preview image, but how are you going to display it in your interface? The preview image arrives as a CMSampleBuffer in your implementation of the delegate method:
func capture(_ output: AVCapturePhotoOutput,
             didFinishProcessingPhotoSampleBuffer buff: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
You need to transform the CMSampleBuffer, previewPhotoSampleBuffer, into a UIImage. How are you going to do that? Like this:
if let prev = previewPhotoSampleBuffer {
    if let buff = CMSampleBufferGetImageBuffer(prev) {
        let cim = CIImage(cvPixelBuffer: buff)
        let im = UIImage(ciImage: cim)
        // and now you have a UIImage! do something with it ...
    }
}
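A UIImage created directly from a CIImage like this is fine for display, but if you need a CGImage-backed image (for example to encode or persist it), a hedged variant of the same code that renders through a CIContext, in the spirit of the Swift 5 answer further down, might look like this:

if let prev = previewPhotoSampleBuffer,
   let buff = CMSampleBufferGetImageBuffer(prev) {
    let cim = CIImage(cvPixelBuffer: buff)
    let context = CIContext() // consider caching this if it is called repeatedly
    if let cg = context.createCGImage(cim, from: cim.extent) {
        let im = UIImage(cgImage: cg)
        // im is now backed by a CGImage
    }
}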
Answered by user924
TO ALL: don't use methods like:
private let context = CIContext()

private func imageFromSampleBuffer2(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
They use much more CPU and take more time to convert.
Use the solution from https://stackoverflow.com/a/40193359/7767664 instead.
Don't forget to set the following settings on the AVCaptureVideoDataOutput:
videoOutput = AVCaptureVideoDataOutput()
videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
//videoOutput.alwaysDiscardsLateVideoFrames = true
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "MyQueue"))
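The CGContext-based conversion below only works when frames are really delivered as 32BGRA. A tiny runtime check, my own addition, that can be dropped into the sample-buffer delegate to confirm the setting took effect:

func logPixelFormat(of sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let format = CVPixelBufferGetPixelFormatType(pixelBuffer)
    // kCVPixelFormatType_32BGRA prints as 0x42475241 ('BGRA')
    print(String(format: "pixel format: 0x%08x", format))
}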
The conversion method:
func imageFromSampleBuffer(_ sampleBuffer: CMSampleBuffer) -> UIImage {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

    // Get the base address of the pixel buffer
    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer!)

    // Get the number of bytes per row for the pixel buffer
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)

    // Get the pixel buffer width and height
    let width = CVPixelBufferGetWidth(imageBuffer!)
    let height = CVPixelBufferGetHeight(imageBuffer!)

    // Create a device-dependent RGB color space
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Create a bitmap graphics context with the sample buffer data
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
    //let bitmapInfo: UInt32 = CGBitmapInfo.alphaInfoMask.rawValue
    let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)

    // Create a Quartz image from the pixel data in the bitmap graphics context
    let quartzImage = context?.makeImage()

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags.readOnly)

    // Create an image object from the Quartz image
    let image = UIImage(cgImage: quartzImage!)

    return image
}
Answered by xiang gao
Swift 5.0
if let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
    let ciimage = CIImage(cvImageBuffer: cvImageBuffer)
    let context = CIContext()
    if let cgImage = context.createCGImage(ciimage, from: ciimage.extent) {
        let uiImage = UIImage(cgImage: cgImage)
    }
}
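Creating a CIContext on every frame is expensive (the CPU concern raised in the previous answer applies here too). A sketch that reuses a single context; the property and function names are my own, not from the original answer:

private let sharedCIContext = CIContext()

func uiImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvImageBuffer: cvImageBuffer)
    // Render once through the shared context instead of allocating a new one per frame.
    guard let cgImage = sharedCIContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}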
Answered by Cruinh
A Swift 4 / iOS 11 version of Popigny's answer:
import Foundation
import AVFoundation
import UIKit

class ViewController: UIViewController {

    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    let cameraPreview = UIView(frame: .zero)
    let progressIndicator = ProgressIndicator()

    override func viewDidLoad() {
        super.viewDidLoad()

        setupVideoPreview()

        do {
            try setupCaptureSession()
        } catch {
            let errorMessage = String(describing: error)
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
        }
    }

    private func setupCaptureSession() throws {
        let deviceDiscovery = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.back)
        let devices = deviceDiscovery.devices

        guard let captureDevice = devices.first else {
            let errorMessage = "No camera available"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }

        let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
        captureSession.addInput(captureDeviceInput)
        captureSession.sessionPreset = AVCaptureSession.Preset.photo
        captureSession.startRunning()

        if captureSession.canAddOutput(photoOutput) {
            captureSession.addOutput(photoOutput)
        }
    }

    private func setupVideoPreview() {
        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.bounds = view.bounds
        previewLayer.position = CGPoint(x: view.bounds.midX, y: view.bounds.midY)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

        cameraPreview.layer.addSublayer(previewLayer)
        cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(capturePhoto)))
        cameraPreview.translatesAutoresizingMaskIntoConstraints = false

        view.addSubview(cameraPreview)

        let viewsDict = ["cameraPreview": cameraPreview]
        view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))
        view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "H:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))
    }

    @objc func capturePhoto(_ sender: UITapGestureRecognizer) {
        progressIndicator.add(toView: view)

        let photoOutputSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        photoOutput.capturePhoto(with: photoOutputSettings, delegate: self)
    }

    func saveToPhotosAlbum(_ image: UIImage) {
        UIImageWriteToSavedPhotosAlbum(image, self, #selector(photoWasSavedToAlbum), nil)
    }

    @objc func photoWasSavedToAlbum(_ image: UIImage, _ error: Error?, _ context: Any?) {
        alert(message: "Photo saved to device photo album")
    }

    func alert(title: String? = nil, message: String? = nil) {
        let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        present(alert, animated: true)
    }
}

extension ViewController: AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {

        guard let photoData = photo.fileDataRepresentation() else {
            let errorMessage = "Photo capture did not provide output data"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }

        guard let image = UIImage(data: photoData) else {
            let errorMessage = "could not create image to save"
            print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
            alert(title: "Error", message: errorMessage)
            return
        }

        saveToPhotosAlbum(image)

        progressIndicator.hide()
    }
}
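One practical note that is not part of the original answer: the capture session delivers nothing if camera access has not been granted (the app also needs an NSCameraUsageDescription entry in its Info.plist), so it is worth requesting permission before configuring the session. A minimal sketch:

AVCaptureDevice.requestAccess(for: .video) { granted in
    guard granted else { return } // user declined camera access
    DispatchQueue.main.async {
        // safe to set up and start the capture session here
    }
}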
A full example project to see this in context: https://github.com/cruinh/CameraCapture