How to take a screenshot of desktop fast with Java in Windows (ffmpeg, etc.)?

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me): StackOverflow

Original question: http://stackoverflow.com/questions/24668407/
Asked by Setsuna
I would like to take screenshots of my machine with Java, using FFMPEG or some other solution. I know Linux works with ffmpeg without JNI, but running it on Windows does not work and may require JNI. Is there any sample of a simple Java class (and anything else necessary) that captures a screenshot and runs in a Windows environment? Is there an alternative to FFMPEG? I want to take screenshots at a rate faster than the Java Robot API, which I have found works for taking screenshots but is slower than I would like.
I know in Linux this works very fast:
import com.googlecode.javacv.*;

public class ScreenGrabber {
    public static void main(String[] args) throws Exception {
        int x = 0, y = 0, w = 1024, h = 768;
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(":0.0+" + x + "," + y);
        grabber.setFormat("x11grab");
        grabber.setImageWidth(w);
        grabber.setImageHeight(h);
        grabber.start();

        CanvasFrame frame = new CanvasFrame("Screen Capture");
        while (frame.isVisible()) {
            frame.showImage(grabber.grab());
        }
        frame.dispose();
        grabber.stop();
    }
}
This does not work in a Windows environment. I am not sure if there is some way I could use this same code, perhaps via javacpp, to get it working without having to change much of the code above.
The goal is to take screenshots of the screen quickly, but to stop once a screenshot is "different" from the previous one, i.e. the screen changed because of some event such as a window being closed.
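A minimal sketch of the "stop when the screen changes" part, independent of how the frames are grabbed: keep the previous frame's pixels and compare them with the current frame. The use of Robot here and the exact-equality comparison are illustrative assumptions, not something from the original question; a real detector would probably tolerate small differences.

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.util.Arrays;

public class ScreenChangeDetector {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        int[] previous = null;

        while (true) {
            BufferedImage shot = robot.createScreenCapture(screen);
            // Pull the raw ARGB pixels so two frames can be compared cheaply
            int[] current = shot.getRGB(0, 0, shot.getWidth(), shot.getHeight(), null, 0, shot.getWidth());
            if (previous != null && !Arrays.equals(previous, current)) {
                System.out.println("Screen changed, stopping capture");
                break;
            }
            previous = current;
        }
    }
}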
Accepted answer by flotothemoon
Using the built-in Robot class is way easier than other Java libraries and should probably fit your needs.
If you need a smooth video with >= 30 fps (30 or more screenshots per second), you should first try the Robot approach plus the performance improvement of storing the screenshots asynchronously.
If that doesn't work for you, try using JNA; even though it is more complex, it is almost guaranteed to work for smooth screen capturing.
Approach with Robot
The Robot class is indeed capable of doing what you want; the problem most screen-capturing approaches based on Robot have is the saving of the screenshots. An approach could look like this: loop over the captureScreen() method, grab the screen into a BufferedImage, convert it to a byte array, and save it to a target file with an asynchronous file writer, adding the future reference of each write to an ArrayList so the loop can keep going while the image data is being stored (a runnable sketch follows the pseudo code below).
// Pseudo code
while (capturing)
{
    grab bufferedImage (screenCapture) from screen
    convert bufferedImage to byte array
    start asynchronous file channel to write to the output file
    and add the future reference (return value) to the ArrayList
}
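A runnable sketch of that idea (not from the original answer): it captures with Robot, encodes each frame to JPEG in memory, and hands the bytes to an AsynchronousFileChannel so the capture loop is not blocked by disk I/O. The frame count, file names, and JPEG encoding are illustrative choices.

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import javax.imageio.ImageIO;

public class AsyncScreenRecorder {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        List<Future<Integer>> pendingWrites = new ArrayList<>();
        List<AsynchronousFileChannel> channels = new ArrayList<>();

        for (int i = 0; i < 100; i++) {   // capture 100 frames (illustrative)
            BufferedImage shot = robot.createScreenCapture(screen);

            // Convert the BufferedImage to a byte array (JPEG-encoded here)
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ImageIO.write(shot, "jpg", baos);
            ByteBuffer data = ByteBuffer.wrap(baos.toByteArray());

            // Kick off an asynchronous write and keep the Future so the loop
            // can keep capturing while the frame is being stored
            AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                    Paths.get("frame_" + i + ".jpg"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                    StandardOpenOption.TRUNCATE_EXISTING);
            channels.add(channel);
            pendingWrites.add(channel.write(data, 0));
        }

        // Wait for all writes to finish before closing the channels
        for (Future<Integer> write : pendingWrites) {
            write.get();
        }
        for (AsynchronousFileChannel channel : channels) {
            channel.close();
        }
    }
}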
Approach with JNA
Original Question: How to take screenshots fast in Java?
As it is bad practice to just link, I will post the example here:
import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferInt;
import java.awt.image.DataBufferUShort;
import java.awt.image.DirectColorModel;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

import com.sun.jna.Native;
import com.sun.jna.platform.win32.W32API;
import com.sun.jna.win32.W32APIOptions;

public class JNAScreenShot
{
    public static BufferedImage getScreenshot(Rectangle bounds)
    {
        // Grab a device context for the whole desktop and a compatible bitmap to blit into
        W32API.HDC windowDC = GDI.GetDC(USER.GetDesktopWindow());
        W32API.HBITMAP outputBitmap = GDI.CreateCompatibleBitmap(windowDC, bounds.width, bounds.height);
        try
        {
            W32API.HDC blitDC = GDI.CreateCompatibleDC(windowDC);
            try
            {
                W32API.HANDLE oldBitmap = GDI.SelectObject(blitDC, outputBitmap);
                try
                {
                    // Copy the requested screen region into the off-screen bitmap
                    GDI.BitBlt(blitDC, 0, 0, bounds.width, bounds.height, windowDC, bounds.x, bounds.y, GDI32.SRCCOPY);
                }
                finally
                {
                    GDI.SelectObject(blitDC, oldBitmap);
                }
                GDI32.BITMAPINFO bi = new GDI32.BITMAPINFO(40);
                bi.bmiHeader.biSize = 40;
                // First GetDIBits call (null pixel array) only fills in the bitmap header
                boolean ok = GDI.GetDIBits(blitDC, outputBitmap, 0, bounds.height, (byte[]) null, bi, GDI32.DIB_RGB_COLORS);
                if (ok)
                {
                    GDI32.BITMAPINFOHEADER bih = bi.bmiHeader;
                    bih.biHeight = -Math.abs(bih.biHeight); // negative height = top-down row order
                    bi.bmiHeader.biCompression = 0;
                    return bufferedImageFromBitmap(blitDC, outputBitmap, bi);
                }
                else
                {
                    return null;
                }
            }
            finally
            {
                GDI.DeleteObject(blitDC);
            }
        }
        finally
        {
            GDI.DeleteObject(outputBitmap);
        }
    }

    private static BufferedImage bufferedImageFromBitmap(GDI32.HDC blitDC, GDI32.HBITMAP outputBitmap, GDI32.BITMAPINFO bi)
    {
        GDI32.BITMAPINFOHEADER bih = bi.bmiHeader;
        int height = Math.abs(bih.biHeight);
        final ColorModel cm;
        final DataBuffer buffer;
        final WritableRaster raster;
        int strideBits = (bih.biWidth * bih.biBitCount);
        int strideBytesAligned = (((strideBits - 1) | 0x1F) + 1) >> 3; // rows are padded to 32-bit boundaries
        final int strideElementsAligned;
        switch (bih.biBitCount)
        {
            case 16:
                strideElementsAligned = strideBytesAligned / 2;
                cm = new DirectColorModel(16, 0x7C00, 0x3E0, 0x1F);
                buffer = new DataBufferUShort(strideElementsAligned * height);
                raster = Raster.createPackedRaster(buffer, bih.biWidth, height, strideElementsAligned, ((DirectColorModel) cm).getMasks(), null);
                break;
            case 32:
                strideElementsAligned = strideBytesAligned / 4;
                cm = new DirectColorModel(32, 0xFF0000, 0xFF00, 0xFF);
                buffer = new DataBufferInt(strideElementsAligned * height);
                raster = Raster.createPackedRaster(buffer, bih.biWidth, height, strideElementsAligned, ((DirectColorModel) cm).getMasks(), null);
                break;
            default:
                throw new IllegalArgumentException("Unsupported bit count: " + bih.biBitCount);
        }
        final boolean ok;
        switch (buffer.getDataType())
        {
            case DataBuffer.TYPE_INT:
            {
                // Second GetDIBits call copies the pixel data straight into the raster's backing array
                int[] pixels = ((DataBufferInt) buffer).getData();
                ok = GDI.GetDIBits(blitDC, outputBitmap, 0, raster.getHeight(), pixels, bi, 0);
            }
                break;
            case DataBuffer.TYPE_USHORT:
            {
                short[] pixels = ((DataBufferUShort) buffer).getData();
                ok = GDI.GetDIBits(blitDC, outputBitmap, 0, raster.getHeight(), pixels, bi, 0);
            }
                break;
            default:
                throw new AssertionError("Unexpected buffer element type: " + buffer.getDataType());
        }
        if (ok)
        {
            return new BufferedImage(cm, raster, false, null);
        }
        else
        {
            return null;
        }
    }

    private static final User32 USER = User32.INSTANCE;
    private static final GDI32 GDI = GDI32.INSTANCE;
}

interface GDI32 extends com.sun.jna.platform.win32.GDI32
{
    GDI32 INSTANCE = (GDI32) Native.loadLibrary(GDI32.class);

    boolean BitBlt(HDC hdcDest, int nXDest, int nYDest, int nWidth, int nHeight, HDC hdcSrc, int nXSrc, int nYSrc, int dwRop);
    HDC GetDC(HWND hWnd);
    boolean GetDIBits(HDC dc, HBITMAP bmp, int startScan, int scanLines, byte[] pixels, BITMAPINFO bi, int usage);
    boolean GetDIBits(HDC dc, HBITMAP bmp, int startScan, int scanLines, short[] pixels, BITMAPINFO bi, int usage);
    boolean GetDIBits(HDC dc, HBITMAP bmp, int startScan, int scanLines, int[] pixels, BITMAPINFO bi, int usage);

    int SRCCOPY = 0xCC0020;
}

interface User32 extends com.sun.jna.platform.win32.User32
{
    User32 INSTANCE = (User32) Native.loadLibrary(User32.class, W32APIOptions.UNICODE_OPTIONS);

    HWND GetDesktopWindow();
}
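A hypothetical usage example for the class above (the capture region and output file name are made up for illustration):

import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class JNAScreenShotDemo {
    public static void main(String[] args) throws Exception {
        // Grab the top-left 1024x768 region of the primary screen
        BufferedImage shot = JNAScreenShot.getScreenshot(new Rectangle(0, 0, 1024, 768));
        if (shot != null) {
            ImageIO.write(shot, "png", new File("screenshot.png"));
        }
    }
}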
More information and approaches
Increasing screen capture speed when using Java and awt.Robot
http://www.dreamincode.net/forums/topic/234896-faster-screen-capture/
How to get over 30FPS using Java in a Screen Capture Program?
See also
http://www.thepcwizard.in/2012/12/java-screen-capturing-tutorial.html
http://www.javalobby.org/forums/thread.jspa?threadID=16400&tstart=0
http://hiddensciencex.blogspot.co.at/2014/01/fast-screen-capture-in-java-example.html
http://www.coderanch.com/t/340180/GUI/java/efficient-screenshot-Java
http://www.javaworld.com/article/2071755/learn-java/capture-the-screen.html
Answer by Alex Barker
You will need to use JNI or JNA to call some combination of CreateCompatibleBitmap, XGetImage, DirectX or OpenGL to grab a screenshot and then copy some raw bitmap data back to Java. My profiling showed a speed-up of about 400% over the Robot class when accessing raw bitmap data on X11. I have not tested other platforms at this time. Some very early code is available here, but I haven't had much time to work on it recently.
Answer by H3LL0
Are you familiar with Xuggler? It uses FFmpeg for encoding and decoding. I got to know it a few months ago when I had to extract frames from a video and it worked smoothly.
On the official website you can find some examples including one called "CaptureScreenToFile.java". For more info follow these links:
http://www.xuggle.com/xuggler/
https://github.com/artclarke/xuggle-xuggler/tree/master/src/com/xuggle/xuggler/demos
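For reference, a minimal sketch in the spirit of the CaptureScreenToFile.java demo, assuming the xuggle-xuggler library is on the classpath; the frame count, codec choice, pacing, and output file name are illustrative assumptions rather than part of the original answer:

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.util.concurrent.TimeUnit;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;

public class XugglerScreenCapture {
    public static void main(String[] args) throws Exception {
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        Robot robot = new Robot();

        // One video stream, MPEG-4 encoded, sized to the screen
        IMediaWriter writer = ToolFactory.makeWriter("capture.mp4");
        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, screen.width, screen.height);

        long start = System.nanoTime();
        for (int i = 0; i < 150; i++) {               // ~5 seconds of frames (illustrative)
            BufferedImage shot = robot.createScreenCapture(screen);

            // Xuggler expects TYPE_3BYTE_BGR images, so repaint into that type
            BufferedImage bgr = new BufferedImage(shot.getWidth(), shot.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
            bgr.getGraphics().drawImage(shot, 0, 0, null);

            writer.encodeVideo(0, bgr, System.nanoTime() - start, TimeUnit.NANOSECONDS);
            Thread.sleep(1000 / 30);                  // crude pacing to roughly 30 fps
        }
        writer.close();
    }
}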
Answer by Roberto Andrade
According to the official ffmpeg documentation you should be able to keep it pretty cross-platform if you make the file parameter passed to the FFmpegFrameGrabber (which is really an input parameter that gets passed down as the -i option to ffmpeg) adhere to the different formats each device expects, i.e.:

for Windows: dshow expects -i video="screen-capture-recorder"

for OSX: avfoundation expects -i "<screen device index>":

and for Linux: x11grab expects -i :<display id>+<x>,<y>.

So just passing those values (the arguments to -i) to the constructor and setting the format accordingly (via setFormat) should do the trick.
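For example, a minimal sketch of the Windows variant, adapted from the x11grab example in the question; it assumes the screen-capture-recorder DirectShow device is installed and that javacv passes the constructor argument through as ffmpeg's -i option:

import com.googlecode.javacv.CanvasFrame;
import com.googlecode.javacv.FFmpegFrameGrabber;

public class WindowsScreenGrabber {
    public static void main(String[] args) throws Exception {
        // "video=screen-capture-recorder" is the dshow device name suggested above;
        // it requires the screen-capture-recorder DirectShow filter to be installed.
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("video=screen-capture-recorder");
        grabber.setFormat("dshow");
        grabber.setImageWidth(1024);
        grabber.setImageHeight(768);
        grabber.start();

        CanvasFrame frame = new CanvasFrame("Screen Capture");
        while (frame.isVisible()) {
            frame.showImage(grabber.grab());
        }
        frame.dispose();
        grabber.stop();
    }
}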