How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/22456884/
Asked by Ayberk Özgür
Unlike Android, I'm relatively new to GL/libgdx. The task I need to solve, namely rendering the Android camera's YUV-NV21 preview image to the screen background inside libgdx in real time, is multi-faceted. Here are the main concerns:
Android camera's preview image is only guaranteed to be in the YUV-NV21 space (and in the similar YV12 space, where the U and V channels are not interleaved but grouped). Assuming that most modern devices will provide implicit RGB conversion is VERY wrong; e.g. the newest Samsung Note 10.1 2014 version only provides the YUV formats. Since nothing can be drawn to the screen in OpenGL unless it is in RGB, the color space must somehow be converted.
The example in the libgdx documentation (Integrating libgdx and the device camera) uses an Android surface view that sits below everything to draw the image on with GLES 1.1. As of the beginning of March 2014, OpenGL ES 1.x support has been removed from libgdx because it is obsolete and nearly all devices now support GLES 2.0. If you try the same sample with GLES 2.0, the 3D objects you draw on the image will be half-transparent. Since the surface behind has nothing to do with GL, this cannot really be controlled. Disabling BLENDING/TRANSLUCENCY does not work. Therefore, rendering this image must be done purely in GL.
This has to be done in real-time, so the color space conversion must be VERY fast. Software conversion using Android bitmaps will probably be too slow.
As a side feature, the camera image must be accessible from the Android code in order to perform tasks other than drawing it on the screen, e.g. sending it to a native image processor through JNI.
The question is, how is this task done properly and as fast as possible?
Answered by Ayberk Özgür
The short answer is to load the camera image channels (Y, UV) into textures and to draw these textures onto a Mesh using a custom fragment shader that does the color space conversion for us. Since this shader runs on the GPU, it will be much faster than the CPU and certainly much faster than Java code. Since this mesh is part of GL, any other 3D shapes or sprites can safely be drawn over or under it.
I solved the problem starting from this answer: https://stackoverflow.com/a/17615696/1525238. I understood the general method using the following link: How to use camera view with OpenGL ES. It is written for Bada, but the principles are the same. The conversion formulas there were a bit weird, so I replaced them with the ones in the Wikipedia article YUV Conversion to/from RGB.
The following are the steps leading to the solution:
YUV-NV21 explanation
Live images from the Android camera are preview images. The default color space (and one of the two guaranteed color spaces) is YUV-NV21 for camera preview. The explanation of this format is very scattered, so I'll explain it here briefly:
The image data is made of (width x height) x 3/2 bytes. The first width x height bytes are the Y channel, 1 brightness byte for each pixel. The following (width / 2) x (height / 2) x 2 = width x height / 2 bytes are the UV plane. Each two consecutive bytes are the V,U (in that order according to the NV21 specification) chroma bytes for the 2 x 2 = 4 original pixels. In other words, the UV plane is (width / 2) x (height / 2) pixels in size and is downsampled by a factor of 2 in each dimension. In addition, the U,V chroma bytes are interleaved.
The original answer includes a very nice image that explains YUV-NV12; NV21 is simply the same layout with the U and V bytes swapped.
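To make the layout concrete, here is a small sketch (not part of the original answer) that extracts the Y, U and V values of a single pixel from an NV21 byte array. It only illustrates the indexing; the actual conversion below happens on the GPU:

static int[] yuvAt(byte[] nv21, int width, int height, int px, int py) {
    int ySize = width * height;
    int y = nv21[py * width + px] & 0xFF;           //luma plane: 1 byte per pixel
    int uvRow = (py / 2) * width;                   //each chroma row holds width bytes (width/2 V,U pairs)
    int uvCol = (px / 2) * 2;                       //2 bytes (V then U) per 2x2 block of pixels
    int v = nv21[ySize + uvRow + uvCol] & 0xFF;     //V comes first in NV21
    int u = nv21[ySize + uvRow + uvCol + 1] & 0xFF; //U follows
    return new int[]{y, u, v};
}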
How to convert this format to RGB?
As stated in the question, this conversion would take too much time to run in real time if done inside the Android code. Luckily, it can be done inside a GL shader, which runs on the GPU. This will allow it to run VERY fast.
The general idea is to pass our image's channels as textures to the shader and render them in a way that does the RGB conversion. For this, we have to first copy the channels in our image to buffers that can be passed to textures:
byte[] image;
ByteBuffer yBuffer, uvBuffer;
...
yBuffer.put(image, 0, width*height);
yBuffer.position(0);
uvBuffer.put(image, width*height, width*height/2);
uvBuffer.position(0);
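Here yBuffer and uvBuffer are assumed to have been allocated beforehand as direct, native-order ByteBuffers sized width*height and width*height/2 bytes respectively, exactly as done in the full source further below:

yBuffer = ByteBuffer.allocateDirect(width*height);    //one byte per pixel for the Y plane
uvBuffer = ByteBuffer.allocateDirect(width*height/2); //two bytes (V,U) per 2x2 block of pixels
yBuffer.order(ByteOrder.nativeOrder());
uvBuffer.order(ByteOrder.nativeOrder());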
Then, we pass these buffers to actual GL textures:
/*
* Prepare the Y channel texture
*/
//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();
//Y texture is (width*height) in size and each pixel is one byte;
//by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B
//components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE,
width, height, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Prepare the UV channel texture
*/
//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();
//UV texture is (width/2*height/2) in size (downsampled by 2 in
//both dimensions, each pixel corresponds to 4 pixels of the Y channel)
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL
//puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's
//why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also found V at G or B as well.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA,
width/2, height/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE,
uvBuffer);
//Use linear interpolation when magnifying/minifying the texture to
//areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D,
GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
Next, we render the mesh we prepared earlier (covers the entire screen). The shader will take care of rendering the bound textures on the mesh:
shader.begin();
//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);
//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
Finally, the shader takes over the task of rendering our textures to the mesh. The fragment shader that achieves the actual conversion looks like the following:
String fragmentShader =
"#ifdef GL_ES\n" +
"precision highp float;\n" +
"#endif\n" +
"varying vec2 v_texCoord;\n" +
"uniform sampler2D y_texture;\n" +
"uniform sampler2D uv_texture;\n" +
"void main (void){\n" +
" float r, g, b, y, u, v;\n" +
//We had put the Y values of each pixel to the R,G,B components by
//GL_LUMINANCE, that's why we're pulling it from the R component,
//we could also use G or B
" y = texture2D(y_texture, v_texCoord).r;\n" +
//We had put the U and V values of each pixel to the A and R,G,B
//components of the texture respectively using GL_LUMINANCE_ALPHA.
//Since U,V bytes are interleaved in the texture, this is probably
//the fastest way to use them in the shader
" u = texture2D(uv_texture, v_texCoord).a - 0.5;\n" +
" v = texture2D(uv_texture, v_texCoord).r - 0.5;\n" +
//The numbers are just YUV to RGB conversion constants
" r = y + 1.13983*v;\n" +
" g = y - 0.39465*u - 0.58060*v;\n" +
" b = y + 2.03211*u;\n" +
//We finally set the RGB color of our pixel
" gl_FragColor = vec4(r, g, b, 1.0);\n" +
"}\n";
Please note that we are accessing the Y and UV textures using the same coordinate variable v_texCoord. This is because v_texCoord ranges between 0.0 and 1.0 and scales from one end of the texture to the other, as opposed to actual texture pixel coordinates, so the same coordinates sample the matching location in both the full-size Y texture and the downsampled UV texture. This is one of the nicest features of shaders.
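As a side note (not part of the original answer), the same mapping can be written on the CPU in Java, which is handy for sanity-checking the shader output on a few pixels; the constants are the same as in the shader above:

//CPU reference of the shader's YUV -> RGB mapping; y, u, v are raw byte values in 0..255
static int toArgb(int y, int u, int v) {
    float yf = y / 255.0f;
    float uf = u / 255.0f - 0.5f;
    float vf = v / 255.0f - 0.5f;
    int r = clamp(Math.round((yf + 1.13983f * vf) * 255.0f));
    int g = clamp(Math.round((yf - 0.39465f * uf - 0.58060f * vf) * 255.0f));
    int b = clamp(Math.round((yf + 2.03211f * uf) * 255.0f));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}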
The full source code
Since libgdx is cross-platform, we need an object that handles the device camera and rendering and that can be implemented differently on each platform. For example, you might want to bypass the YUV-to-RGB shader conversion altogether if you can get the hardware to provide you with RGB images. For this reason, we need a device camera controller interface that will be implemented by each platform:
public interface PlatformDependentCameraController {
    void init();
    void renderBackground();
    void destroy();
}
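For platforms without a camera (e.g. the desktop backend), a do-nothing implementation is enough so that the project still compiles and runs everywhere. This stub is not part of the original answer and the class name is just an example:

public class DesktopDummyCameraController implements PlatformDependentCameraController {
    @Override
    public void init() {
        //No camera on this platform; nothing to set up
    }

    @Override
    public void renderBackground() {
        //Nothing to draw; the background stays as whatever was cleared
    }

    @Override
    public void destroy() {
        //Nothing to release
    }
}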
The Android version of this interface is as follows (the live camera image is assumed to be 1280x720 pixels):
public class AndroidDependentCameraController implements PlatformDependentCameraController, Camera.PreviewCallback {
private static byte[] image; //The image buffer that will hold the camera image when preview callback arrives
private Camera camera; //The camera object
//The Y and UV buffers that will pass our image channel data to the textures
private ByteBuffer yBuffer;
private ByteBuffer uvBuffer;
ShaderProgram shader; //Our shader
Texture yTexture; //Our Y texture
Texture uvTexture; //Our UV texture
Mesh mesh; //Our mesh that we will draw the texture on
public AndroidDependentCameraController(){
//Our YUV image is 12 bits per pixel
image = new byte[1280*720/8*12];
}
@Override
public void init(){
/*
* Initialize the OpenGL/libgdx stuff
*/
//Do not enforce power of two texture sizes
Texture.setEnforcePotImages(false);
//Allocate textures
yTexture = new Texture(1280,720,Format.Intensity); //A 8-bit per pixel format
uvTexture = new Texture(1280/2,720/2,Format.LuminanceAlpha); //A 16-bit per pixel format
//Allocate buffers on the native memory space, not inside the JVM heap
yBuffer = ByteBuffer.allocateDirect(1280*720);
uvBuffer = ByteBuffer.allocateDirect(1280*720/2); //We have (width/2*height/2) pixels, each pixel is 2 bytes
yBuffer.order(ByteOrder.nativeOrder());
uvBuffer.order(ByteOrder.nativeOrder());
//Our vertex shader code; nothing special
String vertexShader =
"attribute vec4 a_position; \n" +
"attribute vec2 a_texCoord; \n" +
"varying vec2 v_texCoord; \n" +
"void main(){ \n" +
" gl_Position = a_position; \n" +
" v_texCoord = a_texCoord; \n" +
"} \n";
//Our fragment shader code; takes Y,U,V values for each pixel and calculates R,G,B colors,
//Effectively making YUV to RGB conversion
String fragmentShader =
"#ifdef GL_ES \n" +
"precision highp float; \n" +
"#endif \n" +
"varying vec2 v_texCoord; \n" +
"uniform sampler2D y_texture; \n" +
"uniform sampler2D uv_texture; \n" +
"void main (void){ \n" +
" float r, g, b, y, u, v; \n" +
//We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
//that's why we're pulling it from the R component, we could also use G or B
" y = texture2D(y_texture, v_texCoord).r; \n" +
//We had put the U and V values of each pixel to the A and R,G,B components of the
//texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interleaved
//in the texture, this is probably the fastest way to use them in the shader
" u = texture2D(uv_texture, v_texCoord).a - 0.5; \n" +
" v = texture2D(uv_texture, v_texCoord).r - 0.5; \n" +
//The numbers are just YUV to RGB conversion constants
" r = y + 1.13983*v; \n" +
" g = y - 0.39465*u - 0.58060*v; \n" +
" b = y + 2.03211*u; \n" +
//We finally set the RGB color of our pixel
" gl_FragColor = vec4(r, g, b, 1.0); \n" +
"} \n";
//Create and compile our shader
shader = new ShaderProgram(vertexShader, fragmentShader);
//Create our mesh that we will draw on, it has 4 vertices corresponding to the 4 corners of the screen
mesh = new Mesh(true, 4, 6,
new VertexAttribute(Usage.Position, 2, "a_position"),
new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord"));
//The vertices include the screen coordinates (between -1.0 and 1.0) and texture coordinates (between 0.0 and 1.0)
float[] vertices = {
-1.0f, 1.0f, // Position 0
0.0f, 0.0f, // TexCoord 0
-1.0f, -1.0f, // Position 1
0.0f, 1.0f, // TexCoord 1
1.0f, -1.0f, // Position 2
1.0f, 1.0f, // TexCoord 2
1.0f, 1.0f, // Position 3
1.0f, 0.0f // TexCoord 3
};
//The indices come in trios of vertex indices that describe the triangles of our mesh
short[] indices = {0, 1, 2, 0, 2, 3};
//Set vertices and indices to our mesh
mesh.setVertices(vertices);
mesh.setIndices(indices);
/*
* Initialize the Android camera
*/
camera = Camera.open(0);
//We set the buffer ourselves that will be used to hold the preview image
camera.setPreviewCallbackWithBuffer(this);
//Set the camera parameters
Camera.Parameters params = camera.getParameters();
params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
params.setPreviewSize(1280,720);
camera.setParameters(params);
//Start the preview
camera.startPreview();
//Set the first buffer, the preview doesn't start unless we set the buffers
camera.addCallbackBuffer(image);
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
//Send the buffer reference to the next preview so that a new buffer is not allocated and we use the same space
camera.addCallbackBuffer(image);
}
@Override
public void renderBackground() {
/*
* Because of Java's limitations, we can't reference the middle of an array and
* we must copy the channels in our byte array into buffers before setting them to textures
*/
//Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
yBuffer.put(image, 0, 1280*720);
yBuffer.position(0);
//Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interleaved
uvBuffer.put(image, 1280*720, 1280*720/2);
uvBuffer.position(0);
/*
* Prepare the Y channel texture
*/
//Set texture slot 0 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
yTexture.bind();
//Y texture is (width*height) in size and each pixel is one byte; by setting GL_LUMINANCE, OpenGL puts this byte into R,G and B components of the texture
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE, 1280, 720, 0, GL20.GL_LUMINANCE, GL20.GL_UNSIGNED_BYTE, yBuffer);
//Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Prepare the UV channel texture
*/
//Set texture slot 1 as active and bind our texture object to it
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE1);
uvTexture.bind();
//UV texture is (width/2*height/2) in size (downsampled by 2 in both dimensions, each pixel corresponds to 4 pixels of the Y channel)
//and each pixel is two bytes. By setting GL_LUMINANCE_ALPHA, OpenGL puts the first byte (V) into the R,G and B components of the texture
//and the second byte (U) into the A component of the texture. That's why we find U and V at A and R respectively in the fragment shader code.
//Note that we could have also found V at G or B as well.
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_LUMINANCE_ALPHA, 1280/2, 720/2, 0, GL20.GL_LUMINANCE_ALPHA, GL20.GL_UNSIGNED_BYTE, uvBuffer);
//Use linear interpolation when magnifying/minifying the texture to areas larger/smaller than the texture size
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, GL20.GL_CLAMP_TO_EDGE);
Gdx.gl.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, GL20.GL_CLAMP_TO_EDGE);
/*
* Draw the textures onto a mesh using our shader
*/
shader.begin();
//Set the uniform y_texture object to the texture at slot 0
shader.setUniformi("y_texture", 0);
//Set the uniform uv_texture object to the texture at slot 1
shader.setUniformi("uv_texture", 1);
//Render our mesh using the shader, which in turn will use our textures to render their content on the mesh
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
}
@Override
public void destroy() {
camera.stopPreview();
camera.setPreviewCallbackWithBuffer(null);
camera.release();
}
}
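One caveat about the code above: the 1280x720 preview size (and therefore the buffer and texture sizes) is hard-coded. On a real device you would normally query the sizes the camera actually supports and size everything from that. A rough sketch of such a query (not part of the original answer) could look like this:

//Pick the largest preview size the camera reports instead of assuming 1280x720
Camera.Parameters params = camera.getParameters();
Camera.Size best = params.getSupportedPreviewSizes().get(0);
for (Camera.Size s : params.getSupportedPreviewSizes()) {
    if (s.width * s.height > best.width * best.height) best = s;
}
params.setPreviewSize(best.width, best.height);
camera.setParameters(params);
//image, yBuffer, uvBuffer, yTexture and uvTexture would then be sized from best.width and best.height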
The main application part just ensures that init() is called once in the beginning, renderBackground() is called every render cycle and destroy() is called once in the end:
public class YourApplication implements ApplicationListener {

    private final PlatformDependentCameraController deviceCameraControl;

    public YourApplication(PlatformDependentCameraController cameraControl) {
        this.deviceCameraControl = cameraControl;
    }

    @Override
    public void create() {
        deviceCameraControl.init();
    }

    @Override
    public void render() {
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

        //Render the background that is the live camera image
        deviceCameraControl.renderBackground();

        /*
         * Render anything here (sprites/models etc.) that you want to go on top of the camera image
         */
    }

    @Override
    public void dispose() {
        deviceCameraControl.destroy();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}
The only other Android-specific part is the following extremely short main Android code: you just create a new Android-specific device camera handler and pass it to the main libgdx object:
public class MainActivity extends AndroidApplication {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
        cfg.useGL20 = true; //This line is obsolete in the newest libgdx version
        cfg.a = 8;
        cfg.b = 8;
        cfg.g = 8;
        cfg.r = 8;

        PlatformDependentCameraController cameraControl = new AndroidDependentCameraController();
        initialize(new YourApplication(cameraControl), cfg);

        graphics.getView().setKeepScreenOn(true);
    }
}
How fast is it?
I tested this routine on two devices. While the measurements are not constant across frames, a general profile can be observed:
Samsung Galaxy Note II LTE - (GT-N7105): Has ARM Mali-400 MP4 GPU.
- Rendering one frame takes around 5-6 ms, with occasional jumps to around 15 ms every couple of seconds
- The actual rendering line (mesh.render(shader, GL20.GL_TRIANGLES)) consistently takes 0-1 ms
- Creation and binding of both textures consistently take 1-3 ms in total
- ByteBuffer copies generally take 1-3 ms in total but jump to around 7 ms occasionally, probably due to the image buffer being moved around in the JVM heap
Samsung Galaxy Note 10.1 2014 - (SM-P600): Has ARM Mali-T628 GPU.
- Rendering one frame takes around 2-4 ms, with rare jumps to around 6-10 ms
- The actual rendering line (mesh.render(shader, GL20.GL_TRIANGLES)) consistently takes 0-1 ms
- Creation and binding of both textures take 1-3 ms in total but jump to around 6-9 ms every couple of seconds
- ByteBuffer copies generally take 0-2 ms in total but jump to around 6 ms very rarely
Please don't hesitate to share if you think this can be made faster with some other method. Hope this little tutorial helped.
Answered by fky
For the fastest and most optimized way, just use the common GL extension:
//Fragment Shader
#extension GL_OES_EGL_image_external : require
uniform samplerExternalOES u_Texture;
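For reference, a complete minimal fragment shader built around this extension could look as follows. This is only a sketch; v_texCoord is assumed to come from your own vertex shader and is not part of the original snippet:

#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 v_texCoord;
uniform samplerExternalOES u_Texture;
void main() {
    //The external sampler already delivers RGB(A); no manual YUV conversion needed
    gl_FragColor = texture2D(u_Texture, v_texCoord);
}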
Then, in Java:
surfaceTexture = new SurfaceTexture(textureIDs[0]);
try {
    someCamera.setPreviewTexture(surfaceTexture);
} catch (IOException t) {
    Log.e(TAG, "Cannot set preview texture target!");
}
someCamera.startPreview();

private static final int GL_TEXTURE_EXTERNAL_OES = 0x8D65;
In the Java GL thread:
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, textureIDs[0]);
GLES20.glUniform1i(uTextureHandle, 0);
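The snippets above assume that the external texture behind textureIDs[0] was created beforehand and that the SurfaceTexture is updated every frame. A minimal sketch of those missing pieces, assuming the standard android.opengl.GLES20 and GLES11Ext classes, is:

//Create the texture object that backs the SurfaceTexture (done once, before new SurfaceTexture(...))
int[] textureIDs = new int[1];
GLES20.glGenTextures(1, textureIDs, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureIDs[0]);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

//Then, once per frame on the GL thread, before drawing with the shader:
surfaceTexture.updateTexImage(); //pulls the latest camera frame into the external texture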
The color conversion is already done for you. You can do whatever you want right in the fragment shader.
Overall, this is not a pure libgdx solution since it is platform dependent. You can initialize the platform-dependent stuff in the wrapper and then pass it to the libgdx activity.
Hope that saves you some time in your research.