C++: How to render offscreen on OpenGL?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license, note the original address and author information, and attribute it to the original authors (not me): StackOverflow
Original: http://stackoverflow.com/questions/12157646/
How to render offscreen on OpenGL?
Asked by Rookie
My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do this?
I want to be able to set the render area to any size, for example 10000x10000, if possible.
Answered by KillianDS
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it may contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it, and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged, as implementations were theoretically allowed to make some optimizations that might leave your front buffer containing rubbish.
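Since the end goal here is a file, note that glReadPixels only fills main memory; writing the image out is ordinary C++. A minimal sketch (my addition, not part of the original answer) that dumps the BGRA data as a binary PPM, flipping the bottom-up rows OpenGL returns:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write BGRA pixel data (bottom-up rows, as glReadPixels returns them)
// to a binary PPM file (P6 is RGB, top-down). The alpha channel is dropped.
void write_ppm(const std::string& path, int width, int height,
               const std::vector<std::uint8_t>& bgra)
{
    std::ofstream out(path, std::ios::binary);
    out << "P6\n" << width << " " << height << "\n255\n";
    for (int y = height - 1; y >= 0; --y) {      // flip vertically
        for (int x = 0; x < width; ++x) {
            const std::uint8_t* p = &bgra[(y * width + x) * 4];
            out.put(p[2]); // R
            out.put(p[1]); // G
            out.put(p[0]); // B
        }
    }
}
```

PPM is chosen here only because it needs no image library; any encoder works on the same buffer.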
There are a few drawbacks with this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping in the back buffer, but it doesn't feel right. Besides that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a non-default framebuffer (as opposed to the default FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The first is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render/read back. With this, the code above would become something like the following; again pseudo-code, so don't kill me if I mistyped or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height); // GL_BGRA8 is not a valid internal format
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing
std::vector<std::uint8_t> data(width*height*4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); // read from the FBO, not the default framebuffer
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading the back buffer.
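For instance, the depth buffer mentioned above is just one more renderbuffer attached to the same FBO. A sketch in the same spirit as the pseudo-code above, so the usual caveats apply (it assumes a current GL context and the width/height from before):

```cpp
GLuint depth_buf;
glGenRenderbuffers(1, &depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_buf);

// Worth checking once everything is attached:
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // the FBO is unusable; inspect the returned status to see why
}
```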
Finally, you can use pixel buffer objects to make the pixel read-back asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer: it is now an offset into the buffer
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//... use pixel_data, then release the mapping:
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
The part in caps is essential. If you just issue a glReadPixels into a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure, the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
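One common way to get that overlap is to ping-pong between two PBOs: start this frame's read into one while mapping the other, which holds last frame's pixels. Again a sketch of my own under the same caveats (it assumes a current GL context and the width/height from before):

```cpp
GLuint pbos[2];
glGenBuffers(2, pbos);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
}

int frame = 0;
// Each frame:
int cur  = frame % 2;        // PBO receiving this frame's pixels
int prev = (frame + 1) % 2;  // PBO holding last frame's pixels

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[cur]);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[prev]);
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (pixel_data) {
    // process last frame's pixels while this frame's transfer runs
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
++frame;
```

The cost is one frame of latency on the read-back, which is usually acceptable when dumping frames to disk.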
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use it as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; you should just use GL_FRAMEBUFFER in that case.
Answered by Nicol Bolas
I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) into which you create your main context is an acceptable implementation strategy.
Here are your options:
Pixel buffers
A pixel buffer, or pbuffer (which is not a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format from wglChoosePixelFormatARB (pbuffer formats must be gotten from here). Then you call wglCreatePbufferARB, giving it your window's HDC and the pixel format you want to use. Oh, and a width/height; you can query the implementation's maximum width/height.
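For the WGL route described here, the calls fit together roughly as below. This is a sketch from the extension specifications rather than tested code: it assumes the function pointers for WGL_ARB_pixel_format and WGL_ARB_pbuffer have already been loaded via wglGetProcAddress, that windowDC is the dummy window's HDC, and it omits all error handling:

```cpp
const int pf_attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    0
};
int format = 0;
UINT num_formats = 0;
wglChoosePixelFormatARB(windowDC, pf_attribs, NULL, 1, &format, &num_formats);

const int pb_attribs[] = { 0 };
HPBUFFERARB pbuffer = wglCreatePbufferARB(windowDC, format, width, height, pb_attribs);
HDC   pbuffer_dc  = wglGetPbufferDCARB(pbuffer);
HGLRC pbuffer_ctx = wglCreateContext(pbuffer_dc);
wglMakeCurrent(pbuffer_dc, pbuffer_ctx);

// ...render and glReadPixels here...

// Cleanup:
wglMakeCurrent(NULL, NULL);
wglDeleteContext(pbuffer_ctx);
wglReleasePbufferDCARB(pbuffer, pbuffer_dc);
wglDestroyPbufferARB(pbuffer);
```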
The default framebuffer of a pbuffer is not visible on the screen, and the max width/height is whatever the hardware wants to let you use. So you can render to it and use glReadPixels to read back from it.
You'll need to share your context with the pbuffer's context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).
The downsides are these: pbuffers don't work with core OpenGL contexts. They may work for compatibility contexts, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.
Framebuffer Objects
Framebuffer Objects are more "proper" offscreen render targets than pbuffers. FBOs live within a context, while pbuffers are about creating new contexts.
FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume them to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as the value changes based on whether an FBO is bound).
Since you're not sampling textures from these (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than those of textures.
The upside is ease of use. Rather than having to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.
The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
Answered by genpfault
If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:
The TR (Tile Rendering) library is an OpenGL utility library for doing tiled rendering. Tiled rendering is a technique for generating large images in pieces (tiles).
TR is memory efficient; arbitrarily large image files may be generated without allocating a full-sized image buffer in main memory.
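The core idea is simple even without the library: carve the big image into tiles no larger than what the implementation allows, render each tile with a correspondingly offset projection, and copy the read-back pixels into the full-size output (on disk or in RAM). The tile bookkeeping alone can be sketched in plain C++ (a hypothetical helper, not libtr's actual API):

```cpp
#include <algorithm>
#include <vector>

struct Tile { int x, y, w, h; };  // region of the full image, in pixels

// Split a width x height image into tiles of at most max_size on a side.
std::vector<Tile> make_tiles(int width, int height, int max_size)
{
    std::vector<Tile> tiles;
    for (int y = 0; y < height; y += max_size)
        for (int x = 0; x < width; x += max_size)
            tiles.push_back({x, y,
                             std::min(max_size, width  - x),
                             std::min(max_size, height - y)});
    return tiles;
}
```

Each tile is then rendered at glViewport(0, 0, tile.w, tile.h) with the projection shifted to the tile's region; TR automates exactly that, plus assembling the rows.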
Answered by Andreas Brinck
The easiest way is to use something called Framebuffer Objects (FBO). You will still have to create a window to create an OpenGL context, though (but this window can be hidden).
Answered by rtrobin
The easiest way to fulfill your goal is using an FBO to do off-screen rendering. And you don't need to render to a texture and then read the texture image back. Just render to a renderbuffer and use glReadPixels. This link will be useful: see Framebuffer Object Examples