Processing camera preview image data with Android L and the Camera2 API

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/25462277/

Camera preview image data processing with Android L and Camera2 API

Tags: android, image-processing, camera, preview, android-5.0-lollipop

Asked by bubo

I'm working on an Android app that processes the input image from the camera and displays it to the user. This is fairly simple: I register a PreviewCallback on the camera object with setPreviewCallbackWithBuffer. This is easy and works smoothly with the old camera API:

public void onPreviewFrame(byte[] data, Camera cam) {
    // custom image data processing
}
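
For reference, a minimal sketch of how this registration typically looks with the old android.hardware.Camera API (a generic example, not the app's actual code; the buffer size assumes the default NV21 preview format, and a preview surface must also be set before startPreview()):

    // Old android.hardware.Camera setup (sketch); uses android.graphics.ImageFormat.
    Camera camera = Camera.open();
    Camera.Size previewSize = camera.getParameters().getPreviewSize();
    int bufferSize = previewSize.width * previewSize.height
            * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
    camera.addCallbackBuffer(new byte[bufferSize]);
    camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            // custom image data processing
            cam.addCallbackBuffer(data); // re-queue the buffer for the next frame
        }
    });
    camera.startPreview();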

I'm trying to port my app to take advantage of the new Camera2 API and I'm not sure how exactly I should do that. I followed the Camera2Video sample in the L Preview samples, which allows recording a video. However, there is no direct image data transfer in the sample, so I don't understand where exactly I should get the image pixel data and how to process it.

Could anybody help me or suggest how one can get the functionality of PreviewCallback in Android L, or how it's possible to process preview data from the camera before displaying it on the screen? (There is no preview callback on the camera object.)

Thank you!

Accepted answer by VP.

Since the Camera2 API is very different from the current Camera API, it might help to go through the documentation.

A good starting point is the Camera2Basic example. It demonstrates how to use the Camera2 API, configure an ImageReader to get JPEG images, and register an ImageReader.OnImageAvailableListener to receive those images.

To receive preview frames, you need to add your ImageReader's surface as an output target of the CaptureRequest.Builder that is passed to setRepeatingRequest.

Also, you should set the ImageReader's format to YUV_420_888, which will give you 30fps at 8MP (the documentation guarantees 30fps at 8MP for the Nexus 5).

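Put together, a minimal sketch of the setup described above (previewSize, previewSurface, cameraDevice, session and backgroundHandler are placeholders, not names from the sample):

    // ImageReader that will receive YUV preview frames alongside the display surface.
    ImageReader imageReader = ImageReader.newInstance(
            previewSize.getWidth(), previewSize.getHeight(),
            ImageFormat.YUV_420_888, /* maxImages */ 2);

    try {
        CaptureRequest.Builder builder =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(previewSurface);           // the on-screen preview
        builder.addTarget(imageReader.getSurface()); // our processing target
        session.setRepeatingRequest(builder.build(), null, backgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }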

Answer by AngeloS

Combining a few answers into a more digestible one because @VP's answer, while technically clear, is difficult to understand if it's your first time moving from Camera to Camera2:

Using https://github.com/googlesamples/android-Camera2Basic as a starting point, modify the following:

In createCameraPreviewSession(), init a new Surface from mImageReader:

Surface mImageSurface = mImageReader.getSurface();

Add that new surface as an output target of your CaptureRequest.Builder variable. Using the Camera2Basic sample, the variable will be mPreviewRequestBuilder:

mPreviewRequestBuilder.addTarget(mImageSurface);

Here's the snippet with the new lines (see my @AngeloS comments):

private void createCameraPreviewSession() {

    try {

        SurfaceTexture texture = mTextureView.getSurfaceTexture();
        assert texture != null;

        // We configure the size of default buffer to be the size of camera preview we want.
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

        // This is the output Surface we need to start preview.
        Surface surface = new Surface(texture);

        //@AngeloS - Our new output surface for preview frame data
        Surface mImageSurface = mImageReader.getSurface();

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);

        //@AngeloS - Add the new target to our CaptureRequest.Builder
        mPreviewRequestBuilder.addTarget(mImageSurface);

        mPreviewRequestBuilder.addTarget(surface);

        ...

Next, in setUpCameraOutputs(), change the format from ImageFormat.JPEG to ImageFormat.YUV_420_888 when you init your ImageReader. (PS: I also recommend dropping your preview size for smoother operation - one nice feature of Camera2.)

mImageReader = ImageReader.newInstance(largest.getWidth() / 16, largest.getHeight() / 16, ImageFormat.YUV_420_888, 2);

Finally, in your onImageAvailable() method of ImageReader.OnImageAvailableListener, be sure to use @Kamala's suggestion, because the preview will stop after a few frames if you don't close the image:

    @Override
    public void onImageAvailable(ImageReader reader) {

        Log.d(TAG, "I'm an image frame!");

        Image image =  reader.acquireNextImage();

        ...

        if (image != null)
            image.close();
    }

Answer by Kamala

In the ImageReader.OnImageAvailableListener class, close the image after reading, as shown below (this will release the buffer for the next capture). You will have to handle exceptions on close.

      Image image =  imageReader.acquireNextImage();
      ByteBuffer buffer = image.getPlanes()[0].getBuffer();
      byte[] bytes = new byte[buffer.remaining()];
      buffer.get(bytes);
      image.close();
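
Note that with YUV_420_888 the snippet above only reads the Y plane. Below is a sketch of pulling all three planes (row and pixel strides are ignored here, but a real NV21 conversion has to account for them), with the close moved into a finally block so it also runs when processing throws:

    Image image = imageReader.acquireNextImage();
    try {
        Image.Plane[] planes = image.getPlanes(); // Y, U, V for YUV_420_888
        byte[][] planeBytes = new byte[planes.length][];
        for (int i = 0; i < planes.length; i++) {
            ByteBuffer buffer = planes[i].getBuffer();
            planeBytes[i] = new byte[buffer.remaining()];
            buffer.get(planeBytes[i]);
            // planes[i].getRowStride() / getPixelStride() matter when repacking the data
        }
        // ... process planeBytes ...
    } finally {
        image.close(); // releases the buffer for the next capture
    }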

Answer by panonski

I needed the same thing, so I used their example and added a call to a new function when the camera is in preview state.

private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    private void process(CaptureResult result) {
        switch (mState) {
            case STATE_PREVIEW: {
                if (buttonPressed) {
                    savePreviewShot();
                }
                break;
            }
            ...

The savePreviewShot() is simply a recycled version of the original captureStillPicture(), adapted to use the preview template.

   private void savePreviewShot(){
        try {
            final Activity activity = getActivity();
            if (null == activity || null == mCameraDevice) {
                return;
            }
            // This is the CaptureRequest.Builder that we use to take a picture.
            final CaptureRequest.Builder captureBuilder =
                    mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            captureBuilder.addTarget(mImageReader.getSurface());

            // Orientation
            int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));

            CameraCaptureSession.CaptureCallback CaptureCallback
                    = new CameraCaptureSession.CaptureCallback() {

                @Override
                public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                               TotalCaptureResult result) {
                    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd_HH:mm:ss:SSS");
                    Date resultdate = new Date(System.currentTimeMillis());
                    String mFileName = sdf.format(resultdate);
                    mFile = new File(getActivity().getExternalFilesDir(null), "pic "+mFileName+" preview.jpg");

                    Log.i("Saved file", ""+mFile.toString());
                    unlockFocus();
                }
            };

            mCaptureSession.stopRepeating();
            mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
        } catch (Exception e) {
            e.printStackTrace();
        }
    };

Answer by nhoxbypass

It's better to init the ImageReader with a max image buffer of 2, then use reader.acquireLatestImage() inside onImageAvailable().

That's because acquireLatestImage() will acquire the latest Image from the ImageReader's queue, dropping older ones. This function is recommended over acquireNextImage() for most use cases, as it's better suited for real-time processing. Note that the max image buffer should be at least 2.

And remember to close() your image after processing.

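A short sketch of that pattern (width, height and backgroundHandler are placeholders):

    // maxImages of 2 plus acquireLatestImage(), as suggested above.
    ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, /* maxImages */ 2);
    reader.setOnImageAvailableListener(r -> {
        Image image = r.acquireLatestImage(); // may return null if no new frame is queued
        if (image == null) {
            return;
        }
        try {
            // ... real-time processing of the latest frame ...
        } finally {
            image.close(); // frees the buffer slot so capture can continue
        }
    }, backgroundHandler);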