Setting max frames per second in OpenGL
Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/3294972/
Asked by Raven
Is there any way to calculate how many updates should be made to reach the desired frame rate, in a way that is NOT system specific? I found a solution for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.
Or how else can I prevent the FPS from dropping or rising dramatically? For now I'm testing it by drawing a large number of vertices in a line, and using Fraps I can see the frame rate go from 400 to 200 fps, with the drawing visibly slowing down.
Answered by Edison Gustavo Muenz
You have two different ways to solve this problem:
1. Suppose that you have a variable called maximum_fps, which contains the maximum number of frames per second you want to display. Then you measure the amount of time spent on the last frame (a timer will do). Now suppose you want a maximum of 60 FPS in your application. Then you want the measured time to be no lower than 1/60 second. If the measured time is lower, you call sleep() for the amount of time left in the frame.
2. Or you can have a variable called tick, which contains the current "game time" of the application. With the same timer, you increment it on each main loop of your application. Then, in your drawing routines, you calculate positions based on the tick variable, since it contains the current time of the application.
The big advantage of option 2 is that your application will be much easier to debug, since you can play around with the tick variable and go forward and back in time whenever you want. This is a big plus.
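A minimal sketch of option 2, assuming a caller-supplied elapsed time per loop iteration (GameClock and position_at are hypothetical names, not from the original answer):

```cpp
#include <cassert>

// Hypothetical game clock: "tick" holds the application's current
// game time in seconds, advanced once per main-loop iteration.
struct GameClock {
    double tick = 0.0;
    void advance(double elapsed_seconds) { tick += elapsed_seconds; }
};

// Drawing routines compute positions from the tick, so moving the
// tick forward or backward moves the whole animation in time.
double position_at(double speed, double tick) {
    return speed * tick;
}
```

Because positions are a pure function of the tick, setting the tick by hand lets you step forward and back in time while debugging, which is the advantage the answer highlights.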
Answered by young
Rule #1. Do not make update() or loop() kinds of functions rely on how often they get called.
You can't really get exactly your desired FPS. You could try to boost it by skipping some expensive operations, or slow it down by calling sleep()-style functions. However, even with those techniques, the FPS will almost always differ from the exact FPS you want.
The common way to deal with this problem is to use the elapsed time since the previous update. For example,
// Bad
void enemy::update()
{
    position.x += 10; // this enemy's movement speed depends entirely on the FPS, and you can't control it.
}

// Good
void enemy::update(float elapsedTime)
{
    position.x += speedX * elapsedTime; // now you control speedX, and it doesn't matter how often update() gets called.
}
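The answer doesn't show where elapsedTime comes from; one way to produce it, sketched here with std::chrono (FrameTimer is a hypothetical helper, not from the original), is:

```cpp
#include <chrono>

// Returns the seconds elapsed since the previous call; the result is
// what gets passed into update(elapsedTime) once per frame.
class FrameTimer {
    std::chrono::steady_clock::time_point last_ =
        std::chrono::steady_clock::now();
public:
    double elapsed_seconds() {
        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<double> dt = now - last_;
        last_ = now;
        return dt.count();
    }
};
```

Using a monotonic clock (steady_clock) matters here: the wall clock can jump backwards and would produce negative elapsed times.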
Answered by SigTerm
Is there any way to calculate how many updates should be made to reach the desired frame rate, NOT system specific?
No.
There is no way to precisely calculate how many updates should be made to reach the desired framerate.
However, you can measure how much time has passed since the last frame, calculate the current framerate from it, compare it with the desired framerate, and then introduce a bit of sleeping to reduce the current framerate to the desired value. Not a precise solution, but it will work.
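A sketch of that measure-then-sleep idea using std::chrono (the throttle helper and its name are assumptions, not from the answer); it is imprecise for exactly the reason stated, since sleep only guarantees a minimum duration:

```cpp
#include <chrono>
#include <thread>

// Sleeps away whatever is left of the current frame's time budget.
// target_frame_seconds would be 1.0 / desired_fps.
void throttle(std::chrono::steady_clock::time_point frame_start,
              double target_frame_seconds) {
    auto elapsed = std::chrono::steady_clock::now() - frame_start;
    auto budget = std::chrono::duration<double>(target_frame_seconds);
    if (elapsed < budget)
        std::this_thread::sleep_for(budget - elapsed);
}
```

It would be called at the end of each render-loop iteration, with frame_start captured at the top of that iteration.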
I found a solution for Windows, but I would like to know if something like this exists in OpenGL itself. It should be some sort of timer.
OpenGL is concerned only with rendering, and has nothing to do with timers. Also, using Windows timers isn't a good idea. Use QueryPerformanceCounter, GetTickCount or SDL_GetTicks to measure how much time has passed, and sleep to reach the desired framerate.
Or how else can I prevent the FPS from dropping or rising dramatically?
You prevent the FPS from rising by sleeping.
As for preventing FPS from dropping...
It is an insanely broad topic. Let's see. It goes something like this:

- use vertex buffer objects or display lists
- profile the application
- do not use insanely big textures
- do not use too much alpha-blending
- avoid "raw" immediate-mode OpenGL (glVertex3f)
- do not render invisible objects (even if no polygons are being drawn, processing them takes time)
- consider learning about BSPs or octrees for rendering complex scenes
- in parametric surfaces and curves, do not needlessly use too many primitives (if you render a circle using one million polygons, nobody will notice the difference)
- disable vsync

In short: reduce the number of rendering calls, rendered polygons, rendered pixels and texels read to the absolute possible minimum, read every available performance document from NVidia, and you should get a performance boost.
Answered by zester
You absolutely do want to throttle your frame rate; it all depends on what you have going on in that rendering loop and what your application does, especially where physics or networking is involved, or if you're doing any kind of graphics processing with an outside toolkit (Cairo, QPainter, Skia, AGG, ...), unless you want out-of-sync results or 100% CPU usage.
Answered by Thomas Poole
Here is a similar question, with my answer and worked example.
I also like deft_code's answer, and will be looking into adding what he suggests to my solution.
The crucial part of my answer is:
If you're thinking about slowing down AND speeding up frames, you have to think carefully about whether you mean rendering or animation frames in each case. In this example, render throttling for simple animations is combined with animation acceleration, for any cases when frames might be dropped in a potentially slow animation.
The example is for animation code that renders at the same speed regardless of whether benchmarking mode or fixed-FPS mode is active. An animation triggered before the change even keeps a constant speed after the change.
Answered by jiasli
This code may do the job, roughly.
static int redisplay_interval;

void timer(int)
{
    glutPostRedisplay();
    glutTimerFunc(redisplay_interval, timer, 0);
}

void setFPS(int fps)
{
    redisplay_interval = 1000 / fps;
    glutTimerFunc(redisplay_interval, timer, 0);
}
Answered by deft_code
You're asking the wrong question. Your monitor will only ever display at 60 fps (50 fps in Europe, or possibly 75 fps if you're a pro-gamer).
Instead you should be seeking to lock your fps at 60 or 30. There are OpenGL extensions that allow you to do that. However, the extensions are not cross-platform (luckily they are not video-card specific, or it'd get really scary).
- Windows: wglSwapIntervalEXT
- X11 (Linux): glXSwapIntervalSGI
- Mac OS X: ?
These extensions are closely tied to your monitor's v-sync. Once enabled, calls that swap the OpenGL back-buffer will block until the monitor is ready for it. This is like putting a sleep in your code to enforce 60 fps (or 30, or 15, or some other number if you're not using a monitor that displays at 60 Hz). The difference is that the "sleep" is always perfectly timed, instead of an educated guess based on how long the last frame took.