iOS: How to get bytes from CMSampleBufferRef to send over the network

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/6189409/

How to get Bytes from CMSampleBufferRef, to Send Over Network

Tags: ios, video-capture, avfoundation, video-processing, core-video

Asked by Asta ni enohpi

I am capturing video using the AVFoundation framework, with the help of the Apple documentation: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html%23//apple_ref/doc/uid/TP40010188-CH5-SW2

Here is what I did:

1. Created a videoCaptureDevice
2. Created an AVCaptureDeviceInput and set videoCaptureDevice as its device
3. Created an AVCaptureVideoDataOutput and implemented its delegate
4. Created an AVCaptureSession - set the input to the AVCaptureDeviceInput and the output to the AVCaptureVideoDataOutput

5. In the AVCaptureVideoDataOutput delegate method

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

I got the CMSampleBuffer, converted it into a UIImage, and tested displaying it in a UIImageView using

[self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
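
The question does not show the CMSampleBuffer-to-UIImage conversion itself; one common way to do it for BGRA pixel buffers is a Core Graphics bitmap context, roughly like this (a sketch, not the asker's actual code; ARC assumed):

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

// Sketch: convert a BGRA CVPixelBuffer into a UIImage via Core Graphics.
UIImage* imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Wrap the locked pixel bytes in a bitmap context matching BGRA layout.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(imageBuffer),
        CVPixelBufferGetWidth(imageBuffer),
        CVPixelBufferGetHeight(imageBuffer),
        8,                                          // bits per component
        CVPixelBufferGetBytesPerRow(imageBuffer),
        colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}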

Everything went well up to this point.

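For reference, a minimal sketch of steps 1-5 might look like the following. This is an illustration, not the asker's actual code: the BGRA pixel format, the queue name, and the session property are assumptions, error handling is omitted, and ARC is assumed.

#import <AVFoundation/AVFoundation.h>

// Sketch of the capture setup described in steps 1-5 above.
// `self` is assumed to adopt AVCaptureVideoDataOutputSampleBufferDelegate.
- (void)setupCaptureSession {
    // 1. The capture device (default camera).
    AVCaptureDevice *videoCaptureDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // 2. Wrap it in a device input.
    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];

    // 3. A data output delivering BGRA frames to a background queue.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = [NSDictionary
        dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                      forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL); // name is illustrative
    [output setSampleBufferDelegate:self queue:queue];

    // 4. Wire input and output into a session and start it.
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    if ([session canAddInput:input])   [session addInput:input];
    if ([session canAddOutput:output]) [session addOutput:output];
    [session startRunning];
    self.session = session; // assumed strong property keeping the session alive
}

// 5. Frames then arrive in captureOutput:didOutputSampleBuffer:fromConnection:.
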
My problem is that I need to send the video frames through a UDP socket. Even though I knew it was a bad idea, I tried converting each UIImage to NSData and sending it via the UDP socket, but I got a serious delay in video processing, mostly because of the UIImage-to-NSData conversion.

So please give me a solution to my problem:

1) Is there any way to convert a CMSampleBuffer or CVImageBuffer to NSData?
2) Is there something like Audio Queue Services, but for video, to queue UIImages, convert each UIImage to NSData, and send it?

If I am following the wrong approach, please point me in the right direction.

Thanks in advance.

Answered by Steve McFarlin

Here is code to get at the buffer. This code assumes a flat (non-planar) image, e.g. BGRA.

NSData* imageToBuffer(CMSampleBufferRef source) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Note: bytesPerRow can include row padding, so bytesPerRow * height
    // may be slightly larger than width * height * 4 for a BGRA buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    // Copy the pixel bytes out while the buffer is locked.
    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // dataWithBytes:length: already returns an autoreleased object,
    // so no extra autorelease is needed.
    return data;
}

A more efficient approach would be to use an NSMutableData or a buffer pool.

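For instance, a possible sketch of the NSMutableData variant (the function name and the reusable-buffer parameter are illustrative, and it is not thread-safe as written; same imports as above):

// Reuses one NSMutableData across frames instead of allocating a new
// NSData per frame; the buffer grows once and is then recycled.
NSMutableData* imageToReusableBuffer(CMSampleBufferRef source, NSMutableData *reusable) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t length = CVPixelBufferGetBytesPerRow(imageBuffer) *
                    CVPixelBufferGetHeight(imageBuffer);
    [reusable setLength:length]; // cheap once the buffer has reached this size
    memcpy([reusable mutableBytes], CVPixelBufferGetBaseAddress(imageBuffer), length);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return reusable;
}
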
Sending a 480x360 image every second will require a 4.1Mbps connection assuming 3 color channels.

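Checking the arithmetic: 480 × 360 pixels × 3 bytes = 518,400 bytes per frame, and 518,400 × 8 = 4,147,200 bits ≈ 4.1 Mbit, so a single uncompressed frame per second already fills that link; at 30 fps the raw stream would need roughly 124 Mbps, which is why raw frames are normally compressed before transmission.
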
Answered by Rhythmic Fistman

Use CMSampleBufferGetImageBuffer to get a CVImageBufferRef from the sample buffer, then get the bitmap data from it with CVPixelBufferGetBaseAddress. This avoids needlessly copying the image.

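A sketch of that copy-free approach inside the delegate callback from the question (sendBytesOverUDP is a hypothetical stand-in for whatever socket send function is used):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    const uint8_t *base = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t length = CVPixelBufferGetBytesPerRow(imageBuffer) *
                    CVPixelBufferGetHeight(imageBuffer);

    // The base address is only valid while the buffer is locked, so the
    // send (or a copy into your own queue) must happen inside this window.
    sendBytesOverUDP(base, length); // hypothetical send function

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}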