java - Android Camera2 API YUV_420_888 to JPEG
Original URL: http://stackoverflow.com/questions/40090681/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow
Android Camera2 API YUV_420_888 to JPEG
Asked by Volodymyr Kulyk
I'm getting preview frames using OnImageAvailableListener:
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = null;
    try {
        image = reader.acquireLatestImage();
        Image.Plane[] planes = image.getPlanes();
        ByteBuffer buffer = planes[0].getBuffer();
        byte[] data = new byte[buffer.capacity()];
        buffer.get(data);
        //data.length=332803; width=3264; height=2448
        Log.e(TAG, "data.length=" + data.length + "; width=" + image.getWidth() + "; height=" + image.getHeight());
        //TODO data processing
    } catch (Exception e) {
        e.printStackTrace();
    }
    if (image != null) {
        image.close();
    }
}
Each time the length of data is different, but the image width and height are the same.
Main problem: data.length is too small for a resolution like 3264x2448.
The size of the data array should be 3264*2448 = 7,990,272, not 300,000 - 600,000.
What is wrong?
imageReader = ImageReader.newInstance(3264, 2448, ImageFormat.JPEG, 5);
Answered by Volodymyr Kulyk
I solved this problem by using the YUV_420_888 image format and converting it to the JPEG image format manually.
imageReader = ImageReader.newInstance(MAX_PREVIEW_WIDTH, MAX_PREVIEW_HEIGHT,
        ImageFormat.YUV_420_888, 5);
imageReader.setOnImageAvailableListener(this, null);

Surface imageSurface = imageReader.getSurface();
List<Surface> surfaceList = new ArrayList<>();
//...add other surfaces
previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(imageSurface);
surfaceList.add(imageSurface);

cameraDevice.createCaptureSession(surfaceList,
        new CameraCaptureSession.StateCallback() {
            //...implement onConfigured, onConfigureFailed for StateCallback
        }, null);
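The StateCallback body is elided in the answer; as a rough sketch, a typical implementation starts the repeating preview request once the session is configured. The captureSession field name below is hypothetical, not from the original answer:

cameraDevice.createCaptureSession(surfaceList,
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession session) {
                captureSession = session; // hypothetical field holding the session
                try {
                    // Start streaming preview frames into the ImageReader surface
                    captureSession.setRepeatingRequest(previewRequestBuilder.build(), null, null);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                Log.e(TAG, "Capture session configuration failed");
            }
        }, null);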
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    if (image != null) {
        //converting to JPEG
        byte[] jpegData = ImageUtil.imageToByteArray(image);
        //write to file (for example ..some_path/frame.jpg)
        FileManager.writeFrame(FILE_NAME, jpegData);
        image.close();
    }
}
public final class ImageUtil {

    public static byte[] imageToByteArray(Image image) {
        byte[] data = null;
        if (image.getFormat() == ImageFormat.JPEG) {
            Image.Plane[] planes = image.getPlanes();
            ByteBuffer buffer = planes[0].getBuffer();
            data = new byte[buffer.capacity()];
            buffer.get(data);
            return data;
        } else if (image.getFormat() == ImageFormat.YUV_420_888) {
            data = NV21toJPEG(
                    YUV_420_888toNV21(image),
                    image.getWidth(), image.getHeight());
        }
        return data;
    }

    private static byte[] YUV_420_888toNV21(Image image) {
        byte[] nv21;
        ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
        ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
        ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

        int ySize = yBuffer.remaining();
        int uSize = uBuffer.remaining();
        int vSize = vBuffer.remaining();

        nv21 = new byte[ySize + uSize + vSize];

        //U and V are swapped
        yBuffer.get(nv21, 0, ySize);
        vBuffer.get(nv21, ySize, vSize);
        uBuffer.get(nv21, ySize + vSize, uSize);

        return nv21;
    }

    private static byte[] NV21toJPEG(byte[] nv21, int width, int height) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
        return out.toByteArray();
    }
}
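One caveat that is not spelled out in the answer: the plain buffer copy in YUV_420_888toNV21 only happens to produce valid NV21 when the U and V planes are interleaved with a pixel stride of 2 and the row strides equal the image width. That layout is common on many devices but is not guaranteed by the YUV_420_888 format. A stride-aware variant, as a sketch (not part of the original answer):

private static byte[] yuv420ToNv21Strided(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // Copy the Y plane row by row, honoring its row stride.
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuffer = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    for (int row = 0; row < height; row++) {
        yBuffer.position(row * yRowStride);
        yBuffer.get(nv21, row * width, width);
    }

    // Interleave V then U (NV21 order), honoring row and pixel strides.
    // For YUV_420_888 the U and V planes share the same strides.
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer uBuffer = uPlane.getBuffer();
    ByteBuffer vBuffer = vPlane.getBuffer();
    int chromaRowStride = uPlane.getRowStride();
    int chromaPixelStride = uPlane.getPixelStride();
    int offset = width * height;
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int bufferIndex = row * chromaRowStride + col * chromaPixelStride;
            nv21[offset++] = vBuffer.get(bufferIndex);
            nv21[offset++] = uBuffer.get(bufferIndex);
        }
    }
    return nv21;
}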
public final class FileManager {
    public static void writeFrame(String fileName, byte[] data) {
        //try-with-resources ensures the stream is closed even if the write fails
        try (BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(fileName))) {
            bos.write(data);
            bos.flush();
            // Log.e(TAG, "" + data.length + " bytes have been written to " + filesDir + fileName + ".jpg");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Answered by uelordi
I am not sure, but I think you are taking only one of the planes of the YUV_420_888 format (the luminance plane).
In my case, I usually transform my image to a byte[] in this way.
Image m_img;
Log.v(LOG_TAG, "Format -> " + m_img.getFormat());

Image.Plane Y = m_img.getPlanes()[0];
Image.Plane U = m_img.getPlanes()[1];
Image.Plane V = m_img.getPlanes()[2];

int Yb = Y.getBuffer().remaining();
int Ub = U.getBuffer().remaining();
int Vb = V.getBuffer().remaining();

data = new byte[Yb + Ub + Vb];
//your data length should be this byte array length.

Y.getBuffer().get(data, 0, Yb);
U.getBuffer().get(data, Yb, Ub);
V.getBuffer().get(data, Yb + Ub, Vb);

final int width = m_img.getWidth();
final int height = m_img.getHeight();
And I use this byte array to convert to RGB.
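The answer doesn't show the RGB step itself. A minimal sketch of one common approach, assuming the planar byte[] built above has no padding (row stride equals width, pixel stride 1) and using the standard BT.601 conversion; the helper names are hypothetical:

// Hypothetical helper: converts the planar YUV byte[] above to ARGB_8888 pixels.
// Assumes rowStride == width and pixelStride == 1; real devices may need stride handling.
static int[] yuvPlanarToArgb(byte[] data, int width, int height) {
    int[] argb = new int[width * height];
    int ySize = width * height;
    int uvSize = ySize / 4;
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = data[row * width + col] & 0xFF;
            int uvIndex = (row / 2) * (width / 2) + (col / 2);
            int u = (data[ySize + uvIndex] & 0xFF) - 128;
            int v = (data[ySize + uvSize + uvIndex] & 0xFF) - 128;
            // BT.601 full-range YUV -> RGB
            int r = clamp(y + (int) (1.402f * v));
            int g = clamp(y - (int) (0.344f * u + 0.714f * v));
            int b = clamp(y + (int) (1.772f * u));
            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

static int clamp(int x) {
    return x < 0 ? 0 : (x > 255 ? 255 : x);
}

An int[] produced this way can be handed directly to Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888).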
Hope this helps.
Cheers. Unai.
Answered by Eddy Talvala
Your code is requesting JPEG-format images, which are compressed. They'll change in size for every frame, and they'll be much smaller than the uncompressed image. If you want to do nothing besides save JPEG images, you can just save what you have in the byte[] data to disk and you're done.
If you want to actually do something with the JPEG, you can use BitmapFactory.decodeByteArray() to convert it to a Bitmap, for example, though that's pretty inefficient.
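For instance, a minimal sketch of that decode step, assuming the jpegData byte[] from the first answer:

// Decodes the compressed JPEG bytes into a Bitmap; too slow to do for every preview frame.
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);
if (bitmap != null) {
    Log.d(TAG, "Decoded bitmap: " + bitmap.getWidth() + "x" + bitmap.getHeight());
}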
Or you can switch to YUV, which is more efficient, but you need to do more work to get a Bitmap out of it.
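As a rough illustration of that extra work, one route reuses the ImageUtil helper from the first answer, going YUV to NV21 to JPEG to Bitmap; a direct YUV-to-RGB conversion, as sketched under the second answer, avoids the wasteful compression round-trip:

// One way to get a Bitmap from a YUV_420_888 Image: compress to JPEG, then decode.
byte[] jpeg = ImageUtil.imageToByteArray(image); // helper from the first answer
Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);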