Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/5069104/
Fastest method of screen capturing on Windows
Asked by someguy
I want to write a screencasting program for the Windows platform, but am unsure of how to capture the screen. The only method I'm aware of is to use GDI, but I'm curious whether there are other ways to go about this, and, if there are, which incurs the least overhead? Speed is a priority.
The screencasting program will be for recording game footage, although, if that does narrow down the options, I'm still open to any other suggestions that fall outside this scope. Knowledge isn't bad, after all.
Edit: I came across this article: Various methods for capturing the screen. It has introduced me to the Windows Media API way of doing it and the DirectX way of doing it. It mentions in the Conclusion that disabling hardware acceleration could drastically improve the performance of the capture application. I'm curious as to why this is. Could anyone fill in the missing blanks for me?
Edit: I read that screencasting programs such as Camtasia use their own capture driver. Could someone give me an in-depth explanation on how it works, and why it is faster? I may also need guidance on implementing something like that, but I'm sure there is existing documentation anyway.
Also, I now know how FRAPS records the screen. It hooks the underlying graphics API to read from the back buffer. From what I understand, this is faster than reading from the front buffer, because you are reading from system RAM, rather than video RAM. You can read the article here.
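For illustration only, here is a minimal sketch of what such a hook might look like in Direct3D 9 terms. This is not FRAPS's actual code; it assumes a DLL has already been injected into the game and the device vtable entry for Present has been patched to point at HookedPresent, with the original saved in g_origPresent (both names are made up for this sketch).
typedef HRESULT (APIENTRY *PresentFn)(IDirect3DDevice9*, const RECT*, const RECT*, HWND, const RGNDATA*);
PresentFn g_origPresent = NULL; // saved original IDirect3DDevice9::Present
HRESULT APIENTRY HookedPresent(IDirect3DDevice9* dev, const RECT* src, const RECT* dst, HWND wnd, const RGNDATA* dirty)
{
    IDirect3DSurface9* back = NULL;
    if (SUCCEEDED(dev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &back)))
    {
        // grab/encode the frame here, e.g. GetRenderTargetData() into a
        // D3DPOOL_SYSTEMMEM surface that is handed off to an encoder thread
        back->Release();
    }
    return g_origPresent(dev, src, dst, wnd, dirty); // let the game present normally
}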
回答by Brandrew
This is what I use to collect single frames, but if you modify this and keep the two targets open all the time then you could "stream" it to disk using a static counter for the file name. - I can't recall where I found this, but it has been modified, thanks to whoever!
#include <d3d9.h>
#include <d3dx9.h>

// Assumes an already-initialized device and its display mode, e.g.:
//   IDirect3DDevice9* Device;      // created elsewhere
//   D3DDISPLAYMODE    DisplayMde;  // filled via IDirect3D9::GetAdapterDisplayMode()
void dump_buffer()
{
    IDirect3DSurface9* pRenderTarget = NULL;
    IDirect3DSurface9* pDestTarget = NULL;
    const char file[] = "Pickture.bmp";

    // sanity checks.
    if (Device == NULL)
        return;

    // get the render target surface.
    HRESULT hr = Device->GetRenderTarget(0, &pRenderTarget);

    // get the current adapter display mode.
    //hr = pDirect3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &DisplayMde);

    // create a destination surface in system memory.
    hr = Device->CreateOffscreenPlainSurface(DisplayMde.Width,
                                             DisplayMde.Height,
                                             DisplayMde.Format,
                                             D3DPOOL_SYSTEMMEM,
                                             &pDestTarget,
                                             NULL);

    // copy the render target to the destination surface.
    hr = Device->GetRenderTargetData(pRenderTarget, pDestTarget);

    // save its contents to a bitmap file.
    hr = D3DXSaveSurfaceToFile(file,
                               D3DXIFF_BMP,
                               pDestTarget,
                               NULL,
                               NULL);

    // clean up.
    pRenderTarget->Release();
    pDestTarget->Release();
}
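As a rough sketch of the "static counter" variation described above (my own illustration, not part of the original answer): keep the surfaces around and just vary the file name on each call, for example:
// hypothetical helper built on the same calls as dump_buffer() above
void dump_frame()
{
    static int frame = 0;
    char file[64];
    sprintf_s(file, sizeof(file), "frame_%05d.bmp", frame++); // numbered output files
    // ...same GetRenderTarget / GetRenderTargetData / D3DXSaveSurfaceToFile
    // sequence as above, writing to 'file' instead of "Pickture.bmp"...
}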
Answered by Brandrew
EDIT: I can see that this is listed under your first edit link as "the GDI way". This is still a decent way to go even with the performance advisory on that site; I would think you can get to 30fps easily.
From this comment (I have no experience doing this, I'm just referencing someone who does):
HDC hdc = GetDC(NULL); // get the desktop device context
HDC hDest = CreateCompatibleDC(hdc); // create a device context to use yourself

// get the height and width of the screen
int height = GetSystemMetrics(SM_CYVIRTUALSCREEN);
int width = GetSystemMetrics(SM_CXVIRTUALSCREEN);

// create a bitmap
HBITMAP hbDesktop = CreateCompatibleBitmap(hdc, width, height);

// use the previously created device context with the bitmap
HGDIOBJ hOld = SelectObject(hDest, hbDesktop); // keep the old bitmap so it can be restored later

// copy from the desktop device context to the bitmap device context
// call this once per 'frame'
BitBlt(hDest, 0, 0, width, height, hdc, 0, 0, SRCCOPY);

// after the recording is done, release the desktop context you got..
ReleaseDC(NULL, hdc);

// ..restore the old bitmap and delete the one you were using to capture frames..
SelectObject(hDest, hOld);
DeleteObject(hbDesktop);

// ..and delete the context you created
DeleteDC(hDest);
I'm not saying this is the fastest, but the BitBlt operation is generally very fast if you're copying between compatible device contexts.
For reference, Open Broadcaster Software implements something like this as part of their "dc_capture" method, although rather than creating the destination context hDest using CreateCompatibleDC, they use an IDXGISurface1, which works with DirectX 10+. If there is no support for this they fall back to CreateCompatibleDC.
To change it to use a specific application, you need to change the first line to GetDC(game), where game is the handle of the game's window, and then set the right height and width of the game's window too.
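A small sketch of that change, purely illustrative ("Some Game Window" is a placeholder title):
HWND game = FindWindowA(NULL, "Some Game Window"); // or obtain the HWND some other way
HDC hdc = GetDC(game);                             // device context of the game window
RECT rc;
GetClientRect(game, &rc);
int width  = rc.right - rc.left;
int height = rc.bottom - rc.top;
// ...then proceed with CreateCompatibleDC / CreateCompatibleBitmap / BitBlt as above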
Once you have the pixels in hDest/hbDesktop, you still need to save them to a file, but if you're doing screen capture then I would think you would want to buffer a certain number of frames in memory and save to the video file in chunks, so I will not point to code for saving a static image to disk.
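That said, here is a minimal sketch of the buffering idea (my own addition, not from the original answer): after each BitBlt, pull the pixels out of hbDesktop into a reusable 32-bit buffer that you can queue up for an encoder.
#include <vector>

std::vector<BYTE> frame(width * height * 4); // BGRA, 4 bytes per pixel

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;      // negative height = top-down row order
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

SelectObject(hDest, hOld);                  // MSDN: the bitmap must not be selected
                                            // into a DC while GetDIBits reads it
GetDIBits(hDest, hbDesktop, 0, height, frame.data(), &bmi, DIB_RGB_COLORS);
SelectObject(hDest, hbDesktop);             // select it back for the next BitBlt
// push 'frame' onto your in-memory queue here, then reuse the vector next frame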
Answered by Hernán
I wrote a video capture software, similar to FRAPS for DirectX applications. The source code is available and my article explains the general technique. Look at http://blog.nektra.com/main/2013/07/23/instrumenting-direct3d-applications-to-capture-video-and-calculate-frames-per-second/
With respect to your questions related to performance:
- DirectX should be faster than GDI, except when you are reading from the front buffer, which is very slow. My approach is similar to FRAPS (reading from the back buffer). I intercept a set of methods from the Direct3D interfaces.
- For video recording in realtime (with minimal application impact), a fast codec is essential. FRAPS uses its own lossless video codec. Lagarith and HUFFYUV are generic lossless video codecs designed for realtime applications. You should look at them if you want to output video files.
- Another approach to recording screencasts could be to write a Mirror Driver. According to Wikipedia: When video mirroring is active, each time the system draws to the primary video device at a location inside the mirrored area, a copy of the draw operation is executed on the mirrored video device in real-time. See mirror drivers at MSDN: http://msdn.microsoft.com/en-us/library/windows/hardware/ff568315(v=vs.85).aspx
Answered by bobobobo
I use d3d9 to get the backbuffer, and save that to a png file using the d3dx library:
IDirect3DSurface9* surface;

// get the back buffer
idirect3ddevice9->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &surface);

// save the surface
D3DXSaveSurfaceToFileA("filename.png", D3DXIFF_PNG, surface, NULL, NULL);

SAFE_RELEASE(surface);
To do this you should create your swapbuffer with
d3dpps.SwapEffect = D3DSWAPEFFECT_COPY ; // for screenshots.
(So you guarantee the backbuffer isn't mangled before you take the screenshot).
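For context, a hedged sketch of where that flag lives (d3dpps here is assumed to be the D3DPRESENT_PARAMETERS you pass to CreateDevice; the other fields are illustrative):
D3DPRESENT_PARAMETERS d3dpps = {};
d3dpps.Windowed         = TRUE;
d3dpps.SwapEffect       = D3DSWAPEFFECT_COPY; // back buffer survives Present(), so it can be captured
d3dpps.BackBufferFormat = D3DFMT_UNKNOWN;     // use the current display format (windowed mode)
d3dpps.hDeviceWindow    = hWnd;               // assumption: hWnd is your render window

// d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
//                    D3DCREATE_HARDWARE_VERTEXPROCESSING, &d3dpps, &idirect3ddevice9);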
Answered by zinking
In my impression, the GDI approach and the DX approach are different in nature. Painting with GDI uses a flush model: a frame is drawn, the buffer is cleared, and the next frame is drawn into the same buffer, which causes flickering in games that require a high frame rate.
- Why is DX quicker? In DX (or the graphics world in general), a more mature method called double-buffered rendering is applied: two buffers are present, and while the front buffer is being presented to the hardware you can render into the other buffer. After frame 1 has finished rendering, the system swaps to the other buffer (locking it for presentation to the hardware and releasing the previous one), so rendering efficiency is greatly improved (see the sketch after this list).
- Why does turning down hardware acceleration make things quicker? Although double-buffered rendering improves the FPS, rendering time is still limited. Modern graphics hardware usually performs a lot of optimization during rendering, such as anti-aliasing, which is very computation-intensive; if you don't require that level of graphics quality you can simply disable the option, and this will save you some time.
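A minimal sketch of the double-buffered loop described above, in D3D9 terms (names are placeholders; this is an illustration, not the answerer's code):
while (running)
{
    device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    device->BeginScene();
    // ...draw frame N into the back buffer while frame N-1 is still on screen...
    device->EndScene();
    device->Present(NULL, NULL, NULL, NULL); // swap (or copy) the back buffer to the front
}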
I think what you really need is a replay system, and I agree with what others have discussed about that.
Answered by rotanimod
I wrote a class that implemented the GDI method for screen capture. I too wanted extra speed so, after discovering the DirectX method (via GetFrontBuffer) I tried that, expecting it to be faster.
I was dismayed to find that GDI performs about 2.5x faster. After 100 trials capturing my dual monitor display, the GDI implementation averaged 0.65s per screen capture, while the DirectX method averaged 1.72s. So GDI is definitely faster than GetFrontBuffer, according to my tests.
I was unable to get Brandrew's code working to test DirectX via GetRenderTargetData. The screen copy came out purely black. However, it could copy that blank screen super fast! I'll keep tinkering with that and hope to get a working version to see real results from it.
Answered by rogerdpack
A few things I've been able to glean: apparently using a "mirror driver" is fast though I'm not aware of an OSS one.
Why is RDP so fast compared to other remote control software?
Also, apparently some variations of StretchRect are faster than BitBlt:
http://betterlogic.com/roger/2010/07/fast-screen-capture/comment-page-1/#comment-5193
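As a rough, unverified sketch of that idea in D3D9 (my own illustration, not from the linked comment): StretchRect the render target into a lockable offscreen render target, then read it back on the CPU.
IDirect3DSurface9* rt = NULL;
IDirect3DSurface9* copy = NULL;

device->GetRenderTarget(0, &rt);
device->CreateRenderTarget(width, height, D3DFMT_X8R8G8B8, D3DMULTISAMPLE_NONE,
                           0, TRUE /* lockable */, &copy, NULL);

// GPU-side blit; also resolves multisampling if the source is MSAA
device->StretchRect(rt, NULL, copy, NULL, D3DTEXF_NONE);

D3DLOCKED_RECT lr;
if (SUCCEEDED(copy->LockRect(&lr, NULL, D3DLOCK_READONLY)))
{
    // lr.pBits / lr.Pitch now point at the pixel data
    copy->UnlockRect();
}

copy->Release();
rt->Release();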
And the one you mentioned (fraps hooking into the D3D dll's) is probably the only way for D3D applications, but won't work with Windows XP desktop capture. So now I just wish there were a fraps equivalent speed-wise for normal desktop windows...anybody?
(I think with aero you might be able to use fraps-like hooks, but XP users would be out of luck).
Also apparently changing screen bit depths and/or disabling hardware accel. might help (and/or disabling aero).
https://github.com/rdp/screen-capture-recorder-program includes a reasonably fast BitBlt-based capture utility, and a benchmarker as part of its install, which can let you benchmark BitBlt speeds in order to optimize them.
VirtualDub also has an "opengl" screen capture module that is said to be fast and do things like change detection http://www.virtualdub.org/blog/pivot/entry.php?id=290
Answered by Tedd Hansen
For C++ you can use: http://www.pinvoke.net/default.aspx/gdi32/BitBlt.html
This may, however, not work on all types of 3D applications/video apps. Then this link may be more useful, as it describes 3 different methods you can use.
Old answer (C#):
You can use System.Drawing.Graphics.Copy, but it is not very fast.
A sample project I wrote doing exactly this: http://blog.tedd.no/index.php/2010/08/16/c-image-analysis-auto-gaming-with-source/
I'm planning to update this sample using a faster method like Direct3D: http://spazzarama.com/2009/02/07/screencapture-with-direct3d/
And here is a link for capturing to video: How to capture screen to be video using C# .Net?
Answered by Ben Harper
With Windows 8, Microsoft introduced the Windows Desktop Duplication API. That is the officially recommended way of doing it. One nice feature it has for screencasting is that it detects window movement, so you can transmit block deltas when windows get moved around, instead of raw pixels. Also, it tells you which rectangles have changed, from one frame to the next.
The Microsoft example code is pretty complex, but the API is actually simple and easy to use. I've put together an example project which is much simpler than the official example:
https://github.com/bmharper/WindowsDesktopDuplicationSample
Docs: https://docs.microsoft.com/en-gb/windows/desktop/direct3ddxgi/desktop-dup-api
Microsoft official example code: https://code.msdn.microsoft.com/windowsdesktop/Desktop-Duplication-Sample-da4c696a
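For orientation, here is a heavily trimmed sketch of the API flow (based on the documentation above, not taken from either sample; error handling and cleanup are omitted):
#include <d3d11.h>
#include <dxgi1_2.h>

void capture_one_frame()
{
    // create a D3D11 device; Desktop Duplication hangs off DXGI
    ID3D11Device* device = NULL;
    ID3D11DeviceContext* context = NULL;
    D3D_FEATURE_LEVEL fl;
    D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0, NULL, 0,
                      D3D11_SDK_VERSION, &device, &fl, &context);

    // walk device -> adapter -> primary output -> IDXGIOutput1
    IDXGIDevice* dxgiDevice = NULL;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = NULL;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIOutput* output = NULL;
    adapter->EnumOutputs(0, &output); // primary monitor
    IDXGIOutput1* output1 = NULL;
    output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);

    // start duplicating the desktop
    IDXGIOutputDuplication* dupl = NULL;
    output1->DuplicateOutput(device, &dupl);

    DXGI_OUTDUPL_FRAME_INFO info;
    IDXGIResource* resource = NULL;
    if (SUCCEEDED(dupl->AcquireNextFrame(500, &info, &resource))) // 500 ms timeout
    {
        ID3D11Texture2D* frame = NULL;
        resource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&frame);
        // copy 'frame' to a CPU-readable staging texture with CopyResource()
        // and Map() it, or hand it straight to a GPU encoder
        frame->Release();
        resource->Release();
        dupl->ReleaseFrame();
    }
    // release dupl/output1/output/adapter/dxgiDevice/context/device here
}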
Answered by Cayman
You can try the C++ open source project WinRobot @git, a powerful screen capturer:
CComPtr<IWinRobotService> pService;
HRESULT hr = pService.CoCreateInstance(__uuidof(ServiceHost));

// get active console session
CComPtr<IUnknown> pUnk;
hr = pService->GetActiveConsoleSession(&pUnk);
CComQIPtr<IWinRobotSession> pSession = pUnk;

// capture screen
pUnk = 0;
hr = pSession->CreateScreenCapture(0, 0, 1280, 800, &pUnk);

// get screen image data (with file mapping)
CComQIPtr<IScreenBufferStream> pBuffer = pUnk;
Supports:
- UAC Window
- Winlogon
- DirectShowOverlay