ios capturing image using AVFramework

Note: this page reproduces a popular StackOverflow question and its answers under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/8924299/



Tags: iphone, objective-c, ios, avfoundation

Asked by Oleg

I'm capturing images using this code


#pragma mark - image capture

// Create and configure a capture session and start it running
- (void)setupCaptureSession 
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your 
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice
                           defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device 
                                                                    error:&error];
    if (!input)
    {
        // Handle the error appropriately; adding a nil input would raise an exception.
        NSLog(@"PANIC: no media input");
        return;
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue); // Not needed (and won't compile) when targeting iOS 6+ with ARC, which manages GCD objects.

    // Specify the pixel format
    output.videoSettings = 
    [NSDictionary dictionaryWithObject:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
                            forKey:(id)kCVPixelBufferPixelFormatTypeKey];
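    // Note: 32BGRA must match the bitmap flags used when the buffer is
    // converted to a CGImage in imageFromSampleBuffer below.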


    // If you wish to cap the frame rate to a known value, such as 15 fps, set 
    // minFrameDuration.

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}




// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
   fromConnection:(AVCaptureConnection *)connection
{ 
    NSLog(@"captureOutput: didOutputSampleBufferFromConnection");

    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    //< Add your code here that uses the image >
    [self.imageView setImage:image];
    [self.view setNeedsDisplay];
}


// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    NSLog(@"imageFromSampleBuffer: called");
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
                                             bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);


    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

-(void)setSession:(AVCaptureSession *)session
{
    NSLog(@"setting session...");
    self.captureSession=session;
}

The capturing code works, but I need two changes: show the live video stream from the camera in my view, and grab an image from it at some interval (for example, every 5 seconds). How can this be done?


Accepted answer by Ilanchezhian

Add the following line


output.minFrameDuration = CMTimeMake(5, 1);

below the comment


 // If you wish to cap the frame rate to a known value, such as 15 fps, set
 // minFrameDuration.

but above the


[session startRunning];
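Putting it together, that section of setupCaptureSession then reads as follows. CMTimeMake(5, 1) requests a frame duration of 5 seconds, i.e. one frame every 5 seconds:

    // If you wish to cap the frame rate to a known value, such as 15 fps, set 
    // minFrameDuration.
    output.minFrameDuration = CMTimeMake(5, 1);

    // Start the session running to start the flow of data
    [session startRunning];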

Edit


Use the following code to preview the camera output.


AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
UIView *aView = self.view;
CGRect videoRect = CGRectMake(0.0, 0.0, 320.0, 150.0);
previewLayer.frame = videoRect; // Position the preview layer where you want the video to appear.
[aView.layer addSublayer:previewLayer];
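Depending on your layout, you may also want to set the layer's video gravity so the video fills the rect without distortion (a small optional addition, using the standard AVFoundation constant):

previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;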

Edit 2: OK, fine.


Apple has provided a way to set the minFrameDuration via the AVCaptureConnection (see Apple's documentation).


So now, use the following code to set the frame duration


AVCaptureConnection *conn = [output connectionWithMediaType:AVMediaTypeVideo];

if (conn.supportsVideoMinFrameDuration)
    conn.videoMinFrameDuration = CMTimeMake(5,1);
if (conn.supportsVideoMaxFrameDuration)
    conn.videoMaxFrameDuration = CMTimeMake(5,1);
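Note that on iOS 7 and later these connection-level properties are deprecated in favor of configuring the AVCaptureDevice itself. A sketch of the equivalent, assuming the same device from the setup code; the requested duration must fall within the device's supported frame rate ranges, or the setter throws:

NSError *error = nil;
if ([device lockForConfiguration:&error])
{
    device.activeVideoMinFrameDuration = CMTimeMake(5, 1);
    device.activeVideoMaxFrameDuration = CMTimeMake(5, 1);
    [device unlockForConfiguration];
}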

Answered by Eugene Dudnyk

Be careful: the callback from AVCaptureOutput is delivered on the dispatch queue you specified. You are performing UI updates from this callback, and that is wrong; you should perform them only on the main queue. E.g.


- (void)captureOutput:(AVCaptureOutput *)captureOutput 
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection
{ 
    NSLog(@"captureOutput: didOutputSampleBufferFromConnection");
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
    //< Add your code here that uses the image >
        [self.imageView setImage:image];
        [self.view setNeedsDisplay];
    });
} 
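As an alternative to lowering the camera's frame rate, you can leave the session running at its normal rate and simply skip frames in the delegate until the desired interval has elapsed. A minimal sketch, assuming a hypothetical lastCaptureTime ivar added to the class:

// Hypothetical ivar in the class: CFAbsoluteTime lastCaptureTime;
- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection
{
    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
    if (now - lastCaptureTime < 5.0)
        return; // less than 5 seconds since the last capture: ignore this frame
    lastCaptureTime = now;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.imageView setImage:image];
    });
}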

Answered by Timuçin

And here is a Swift version of the imageFromSampleBuffer function:


func imageFromSampleBuffer(sampleBuffer:CMSampleBuffer!) -> UIImage {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)

    let colorSpace = CGColorSpaceCreateDeviceRGB()

    let bitmapInfo:CGBitmapInfo = [.ByteOrder32Little, CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)]
    let context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo.rawValue)

    let quartzImage = CGBitmapContextCreateImage(context)
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    let image = UIImage(CGImage: quartzImage!)
    return image
}

The above works for me (note the Swift 2 syntax) with the following video settings:


videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey: Int(kCVPixelFormatType_32BGRA)]
videoDataOutput?.setSampleBufferDelegate(self, queue: queue)