
Warning: this page is based on a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use/share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/4292535/

Date: 2020-08-05 00:08:53 - Source: igfitidea

Linux: Screen desktop video capture over network, and VNC framerate

Tags: linux, vnc, screen-capture, frame-rate

Asked by sdaau

Sorry for the wall of text - TL;DR:


  • What is the framerate of VNC connection (in frames/sec) - or rather, who determines it: client or server?
  • Any other suggestions for desktop screen capture - but "correctly timecoded"/ with unjittered framerate (with a stable period); and with possibility to obtain it as uncompressed (or lossless) image sequence?

Briefly - I have a typical problem that I am faced with: I sometimes develop hardware, and want to record a video that shows both commands entered on the PC ('desktop capture'), and responses of the hardware ('live video'). A chunk of an intro follows, before I get to the specific detail(s).

Intro/Context


My strategy, for now, is to use a video camera to record the process of hardware testing (as 'live' video) - and do a desktop capture at the same time. The video camera produces a 29.97 (30) FPS MPEG-2 .AVI video; and I want to get the desktop capture as an image sequence of PNGs at the same frame rate as the video. The idea, then, would be: if the frame rate of the two videos is the same, then I could simply


  • align the start of the desktop capture with the matching point in the 'live' video
  • Set up a picture-in-picture, where a scaled down version of the desktop capture is put - as overlay - on top of the 'live' video
    • (where a portion of the screen on the 'live' video, serves as a visual sync source with the 'desktop capture' overlay)
  • Export a 'final' combined video, compressed appropriately for the Internet

In principle, I guess one could use a command line tool like ffmpeg for this process; however I would prefer to use a GUI for finding the alignment start point for the two videos.

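For what it's worth, once the alignment offset is known, the picture-in-picture step itself can indeed be a single ffmpeg invocation. A hedged sketch (the filenames, the 2.5 s offset, and the overlay geometry are all placeholders, and `-filter_complex` assumes a reasonably recent ffmpeg build):

```shell
# Hypothetical: delay the desktop capture by the measured offset, scale it
# down, and overlay it in the top-right corner of the 'live' camera video
ffmpeg -i live.avi -itsoffset 2.5 -i capture.ogv \
  -filter_complex "[1:v]scale=320:240[pip];[0:v][pip]overlay=W-w-10:10" \
  -r 30 final.mp4
```

The GUI editor would then only be needed for measuring the offset, not for the merge itself.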

Eventually, what I also want to achieve, is to preserve maximum quality when exporting the 'final' video: the 'live' video is already compressed when out of the camera, which means additional degradation when it passes through the Theora .ogv codec - which is why I'd like to keep the original videos, and use something like a command line to generate a 'final' video anew, if a different compression/resolution is required. This is also why I like to have the 'desktop capture' video as a PNG sequence (although I guess any uncompressed format would do): I take measures to 'adjust' the desktop, so there aren't many gradients, and lossless encoding (i.e. PNG) would be appropriate.  

Desktop capture options


Well, there are many troubles in this process under Ubuntu Lucid, which I currently use (and you can read about some of my ordeals in 10.04: Video overlay/composite editing with Theora ogv - Ubuntu Forums). However, one of the crucial problems is the assumption that the frame rate of the two incoming videos is equal - in reality, the desktop capture is usually of a lower framerate; and even worse, very often the frames are out of sync.


This, then, requires the hassle of sitting in front of a video editor, and manually cutting and editing less-than-a-second clips on frame level - requiring hours of work for what will in the end be a 5 minute video. On the other hand, if the two videos ('live' and 'capture') did have the same framerate and sync: in principle, you wouldn't need more than a couple of minutes for finding the start sync point in a video editor - and the rest of the 'merged' video processing could be handled by a single command line. Which is why, in this post, I would like to focus on the desktop capture part.


As far as I can see, there are only a few viable (as opposed to 5 Ways to Screencast Your Linux Desktop) alternatives for desktop capture in Linux / Ubuntu (note, I typically use a laptop as the target for desktop capturing):


  1. Have your target PC (laptop) clone the desktop on its VGA output; use a VGA-to-composite or VGA-to-S-video hardware to obtain a video signal from VGA; use video capture card on a different PC to grab video
  2. Use recordMyDesktop on the target PC
  3. Set up a VNC server (vino on Ubuntu; or vncserver) on the target PC to be captured; use VNC capture software (such as vncrec) on a different PC to grab/record the VNC stream (which can, subsequently, be converted to video).
  4. Use ffmpeg with the x11grab option
  5. *(use some tool on the target PC, that would do a DMA transfer of a desktop image frame directly - from the graphics card frame buffer memory, to the network adapter memory)

Please note that the usefulness of the above approaches is limited by my context of use: the target PC that I want to capture typically runs software (utilizing the tested hardware) that moves around massive amounts of data; the best you could say in describing such a system is "barely stable" :) I'd guess this is similar to the problems gamers face when wanting to obtain a video capture of a demanding game. And as soon as I start using something like recordMyDesktop, which also uses quite a bit of resources and wants to capture to the local hard disk - I immediately get severe kernel crashes (often with no vmcore generated).


So, in my context, I typically do assume the involvement of a second computer - to run the capture and recording of the 'target' PC desktop. Other than that, the pros and cons I can see so far with the above options are included below.


(Desktop preparation)


For all of the methods discussed below, I tend to "prepare" the desktop beforehand:


  • Remove desktop backgrounds and icons
  • Set the resolution down to 800x600 via System/Preferences/Monitors (gnome-desktop-properties)
  • Change color depth down to 16 bpp (using xdpyinfo | grep "of root" to check)

... in order to minimize the load on the desktop capture software. Note that changing the color depth on Ubuntu requires changes to xorg.conf; however, "No xorg.conf (is) found in /etc/X11 (Ubuntu 10.04)" - so you may need to run sudo Xorg -configure first.

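For reference, the color depth change itself boils down to a DefaultDepth entry in the Screen section of the generated xorg.conf (a minimal fragment; the Identifier/Device names are whatever Xorg -configure produced on your system):

```
Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    DefaultDepth 16
    SubSection "Display"
        Depth  16
        Modes  "800x600"
    EndSubSection
EndSection
```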

In order to keep graphics resource use low, I also usually had compiz disabled - or rather, I'd have 'System/Preferences/Appearance/Visual Effects' set to "None". However, after I tried enabling compiz by setting 'Visual Effects' to "Normal" (which doesn't get saved), I noticed that windows on the LCD screen are redrawn much faster; so I keep it like this, also for desktop capture. I find this a bit strange: how could more effects cause a faster screen refresh? It doesn't look like it's due to a proprietary driver (the card is an "Intel Corporation N10 Family Integrated Graphics Controller", and no proprietary driver option is offered by Ubuntu upon the switch to compiz) - although, it could be that all the blurring and effects just cheat my eyes :).


Cloning VGA


Well, this is the most expensive option (as it requires the additional purchase of not just one, but two pieces of hardware: a VGA converter, and a video capture card); and it is applicable mostly to laptops (which have both a screen + an additional VGA output - for desktops one may also have to invest in an additional graphics card, or VGA cloning hardware).


However, it is also the only option that requires no additional software on the target PC whatsoever (and thus uses 0% of the target CPU's processing power) - AND also the only one that will give a video with a true, unjittered framerate of 30 fps (as it is performed by separate hardware - although with the assumption that the clock domain misalignment present between the individual hardware pieces is negligible).


Actually, as I already own something like a capture card, I have already invested in a VGA converter - in the expectation that it will eventually allow me to produce final "merged" videos with only 5 mins of looking for the alignment point, plus a single command line; but I am yet to see whether this process will work as intended. I'm also wondering how possible it will be to capture the desktop as uncompressed video @ 800x600, 30 fps.


recordMyDesktop


Well, if you run recordMyDesktop without any arguments - it starts first with capturing (what looks like) raw image data, in a folder like /tmp/rMD-session-7247; and after you press Ctrl-C to interrupt it, it will encode this raw image data into an .ogv. Obviously, grabbing large image data on the same hard disk as my test software (which also moves large amounts of data) is usually a cause for an instacrash :)


Hence, what I tried doing is to set up Samba to share a drive on the network; then on the target PC, I'd connect to this drive - and instruct recordMyDesktop to use this network drive (via gvfs) as its temporary files location:


recordmydesktop --workdir /home/user/.gvfs/test\ on\ 192.168.1.100/capture/ --no-sound --quick-subsampling --fps 30 --overwrite -o capture.ogv 

Note that, while this command will use the network location for temporary files (and thus makes it possible for recordMyDesktop to run in parallel with my software) - as soon as you hit Ctrl-C, it will start encoding and saving capture.ogv directly on the local hard drive of the target (though, at that point, I don't really care :) )


The first of my nags with recordMyDesktop is that you cannot instruct it to keep the temporary files, and avoid encoding them, in the end: you can use Ctrl+Alt+p for pause - or you can hit Ctrl-C quickly after the first one, to cause it to crash; which will then leave the temporary files behind (if you don't hit Ctrl-C quickly enough the second time, the program will start "Cleanning up cache..."). You can then run, say:


recordmydesktop --rescue /home/user/.gvfs/test\ on\ 192.168.1.100/capture/rMD-session-7247/

... in order to convert the raw temporary data. However, more often than not, recordMyDesktop will itself segfault in the midst of performing this "rescue". Although, the reason why I want to keep the temp files is to have the uncompressed source for the picture-in-picture montage. Note that "--on-the-fly-encoding" will avoid using temp files altogether - at the expense of using more CPU processing power (which, for me, again is a cause for crashes.)


Then, there is the framerate - obviously, you can set the requested framerate using the '--fps N' option; however, that is no guarantee that you will actually obtain that framerate; for instance, I'd get:


recordmydesktop --fps 25
...
Saved 2983 frames in a total of 6023 requests
...

... for a capture with my test software running; which means that the actually achieved rate is more like 25*2983/6023 ≈ 12.38 fps!

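The achieved-rate arithmetic can be sanity-checked right on the shell, using the saved/requested counts reported above:

```shell
# achieved fps = requested fps * saved frames / requested frames
awk 'BEGIN { printf "%.2f\n", 25 * 2983 / 6023 }'
# prints 12.38
```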

Obviously, frames are dropped - and mostly that shows up as video playback that is too fast. However, if I lower the requested fps to 12 - then, according to the saved/total reports, I achieve something like 11 fps; and in this case, video playback doesn't look 'sped up'. And I still haven't tried aligning such a capture with a live video - so I have no idea whether those frames that actually have been saved also carry an accurate timestamp.


VNC capture


The VNC capture, for me, consists of running a VNC server on the 'target' PC, and running vncrec (twibright edition) on the 'recorder' PC. As the VNC server, I use vino, which is "System/Preferences/Remote Desktop (Preferences)". And apparently, even if vino configuration may not be the easiest thing to manage, vino as a server seems not too taxing to the 'target' PC; I haven't experienced crashes when it runs in parallel with my test software.


On the other hand, when vncrec is capturing on the 'recorder' PC, it also raises a window showing you the 'target' desktop as it is seen in 'realtime'; when there are large updates (i.e. whole windows moving) on the 'target' - one can, quite visibly, see problems with the update/refresh rate on the 'recorder'. But for only small updates (i.e. just a cursor moving on a static background), things seem OK.


This makes me wonder about one of my primary questions with this post - what is it that sets the framerate in a VNC connection?


I haven't found a clear answer to this, but from bits and pieces of info (see refs below), I gather that:


  • The VNC server simply sends changes (screen changes + clicks etc) as fast as it can, when it receives them; limited by the max network bandwidth that is available to the server
  • The VNC client receives those change events delayed and jittered by the network connection, and attempts to reconstruct the desktop "video" stream, again as fast as it can

... which means, one cannot state anything in terms of a stable, periodic frame rate (as in video).


As far as vncrec as a client goes, the end videos I get are usually declared as 10 fps, although frames can be rather displaced/jittered (which then requires the cutting in video editors). Note that the vncrec-twibright/README states: "The sample rate of the movie is 10 by default or overriden by VNCREC_MOVIE_FRAMERATE environment variable, or 10 if not specified."; however, the manpage also states "VNCREC_MOVIE_FRAMERATE - Specifies frame rate of the output movie. Has an effect only in -movie mode. Defaults to 10. Try 24 when your transcoder vomits from 10.". And if one looks into the vncrec/sockets.c source, one can see:

vncrec客户而言,我得到的最终视频通常被声明为 10 fps,尽管帧可能会发生位移/抖动(这需要在视频编辑器中进行剪切)。请注意,vncrec-twibright/README声明:“电影的采样率默认为 10 或被 VNCREC_MOVIE_FRAMERATE 环境变量覆盖,如果未指定,则为 10。”; 但是,联机帮助页还指出“ VNCREC_MOVIE_FRAMERATE - 指定输出电影的帧速率。仅在 -movie 模式下有效。默认为 10。当您的转码器从 10 开始呕吐时,请尝试 2​​4。”。如果查看“ vncrec/sockets.c”来源,可以看到:

void print_movie_frames_up_to_time(struct timeval tv)
{
  static double framerate;
  ....
  memcpy(out, bufoutptr, buffered);
  if (appData.record)
    {
      writeLogHeader (); /* Writes the timestamp */
      fwrite (bufoutptr, 1, buffered, vncLog);
    }

... which shows that some timestamps are written - but whether those timestamps originate from the "original" 'target' PC, or the 'recorder' one, I cannot tell. EDIT: thanks to the answer by @kanaka, I checked through vncrec/sockets.c again, and can see that it is the writeLogHeader function itself calling gettimeofday; so the timestamps it writes are local - that is, they originate from the 'recorder' PC (and hence, these timestamps do not accurately describe when the frames originated on the 'target' PC).


In any case, it still seems to me that the server sends - and vncrec as a client receives - whenever; and it is only in the process of encoding a video file from the raw capture afterwards, that some form of a frame rate is set/interpolated.

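In other words, the declared rate is only picked when the raw capture is turned into a movie - roughly like this (a sketch going by the README/manpage; I haven't verified the exact argument order, the display address, or how -movie mode delivers its output):

```shell
# record the raw RFB session from the 'target' display (address is a placeholder)
vncrec -record session.vnc 192.168.1.200:0
# later, replay/encode it, declaring 24 fps instead of the default 10
VNCREC_MOVIE_FRAMERATE=24 vncrec -movie session.vnc
```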

I'd also like to state that on my 'target' laptop, the wired network connection is broken; so wireless is my only option to get access to the router and the local network - at a far lower speed than the 100 MB/s that the router could handle over wired connections. However, if the jitter in captured frames is caused by wrong timestamps due to load on the 'target' PC, I don't think good network bandwidth will help too much.


Finally, as far as VNC goes, there could be other alternatives to try - such as the VNCast server (promising, but requires some time to build from source, and is in an "early experimental version"); or MultiVNC (although it just seems to be a client/viewer, without options for recording).


ffmpeg with x11grab


Haven't played with this much, but I've tried it in connection with netcat; this:


# 'target'
ffmpeg -f x11grab -b 8000k -r 30 -s 800x600 -i :0.0 -f rawvideo - | nc 192.168.1.100 5678
# 'recorder'
nc -l 0.0.0.0 5678 > raw.video

... does capture a file, but ffplay cannot read the captured file properly; while:


# 'target'
ffmpeg -f x11grab -b 500k -r 30 -s 800x600 -i :0.0 -f yuv4mpegpipe -pix_fmt yuv444p - | nc 192.168.1.100 5678
# 'recorder'
nc -l 0.0.0.0 5678 | ffmpeg -i - /path/to/samplimg%03d.png

does produce .png images - but with compression artifacts (result of the compression involved with yuv4mpegpipe, I guess).

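A possible workaround I haven't tested: keep the pipe in raw RGB, and restate the geometry/pixel format on the receiving side - since '-f rawvideo' output is headerless, which would also explain why ffplay could not read the first capture above (the pix_fmt here is an assumption, and must match what x11grab actually delivers):

```shell
# 'target' - send raw, headerless RGB frames over the pipe
ffmpeg -f x11grab -r 30 -s 800x600 -i :0.0 \
  -f rawvideo -pix_fmt rgb24 - | nc 192.168.1.100 5678
# 'recorder' - rawvideo carries no header, so the format must be restated
nc -l 0.0.0.0 5678 | \
  ffmpeg -f rawvideo -pix_fmt rgb24 -s 800x600 -r 30 -i - /path/to/sample%03d.png
```

This would avoid the YUV conversion step entirely, at the cost of a fatter pipe.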

Thus, I'm not liking ffmpeg + x11grab too much currently - but maybe I simply don't know how to set it up for my needs.


*( graphics card -> DMA -> network )


I am, admittedly, not sure something like this exists - in fact, I would wager it doesn't :) And I'm no expert here, but I speculate:


if a DMA memory transfer can be initiated with the graphics card (or its buffer that keeps the current desktop bitmap) as the source, and the network adapter as the destination - then, in principle, it should be possible to obtain an uncompressed desktop capture with a correct (and decent) framerate. The point of using a DMA transfer would be, of course, to relieve the processor from the task of copying the desktop image to the network interface (and thus reduce the influence the capturing software can have on the processes running on the 'target' PC - especially those dealing with RAM or the hard disk).


A suggestion like this, of course, assumes that: there are massive amounts of network bandwidth (for 800x600 at 30 fps, at least 800*600*3*30 = 43,200,000 bytes/s ≈ 41.2 MiB/s, which should be OK for local 100 MB/s networks); plenty of hard disk on the other PC that does the 'recording' - and finally, software that can afterwards read that raw data, and generate image sequences or videos based on it :)

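That bandwidth estimate can be recomputed on the shell:

```shell
# 800x600 pixels, 3 bytes/pixel (24 bpp), 30 frames/s
awk 'BEGIN { b = 800*600*3*30; printf "%d bytes/s = %.1f MiB/s\n", b, b/1048576 }'
# prints: 43200000 bytes/s = 41.2 MiB/s
```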

The bandwidth and hard disk demands I could live with - as long as there is guarantee both for a stable framerate and uncompressed data; which is why I'd love to hear if something like this already exists.


-- -- -- -- -- 


Well, I guess that was it - as brief as I could put it :) Any suggestions for tools - or process(es) - that can result in a desktop capture


  • in uncompressed format (ultimately convertible to uncompressed/lossless PNG image sequence), and
  • with a "correctly timecoded", stable framerate

..., that will ultimately lend itself to 'easy', single command-line processing for generating 'picture-in-picture' overlay videos - will be greatly appreciated!


Thanks in advance for any comments,
Cheers!




References


  1. Experiences Producing a Screencast on Linux for CryptoTE - idlebox.net
  2. The VideoLAN Forums - View topic - VNC Client input support (like screen://)
  3. VNCServer throttles user input for slow client - Kyprianou, Mark - com.realvnc.vnc-list - MarkMail
  4. Linux FAQ - X Windows: How do I Display and Control a Remote Desktop using VNC
  5. How much bandwidth does VNC require? - RealVNC - Frequently asked questions
  6. x11vnc: a VNC server for real X displays
  7. HowtoRecordVNC (an X11 session) - Debian Wiki
  8. Alternative To gtk-RecordMyDesktop in Ubuntu
  9. (ffmpeg-user) How do I use pipes in ffmpeg
  10. (ffmpeg-devel) (PATCH) Fix segfault in x11grab when drawing Cursor on Xservers that don't support the XFixes extension

Accepted answer by kanaka

You should get a badge for such a long, well-thought-out question. ;-)


In answer to your primary question, VNC uses the RFB protocol which is a remote frame buffer protocol (thus the acronym) not a streaming video protocol. The VNC client sends a FrameBufferUpdateRequest message to the server which contains a viewport region that the client is interested in and an incremental flag. If the incremental flag is not set then the server will respond with a FrameBufferUpdate message that contains the content of the region requested. If the incremental flag is set then the server may respond with a FrameBufferUpdate message that contains whatever parts of the region requested that have changed since the last time the client was sent that region.


The definition of how requests and updates interact is not crisply defined. The server won't necessarily respond to every request with an update if nothing has changed. If the server has multiple requests queued from the client it is also allowed to send a single update in response. In addition, the client really needs to be able to respond to an asynchronous update message from the server (not in response to a request) otherwise the client will fall out of sync (because RFB is not a framed protocol).


Often clients are simply implemented to send incremental update requests for the entire frame buffer viewport at a periodic interval and handle any server update messages as they arrive (i.e. no attempt is made to tie requests and updates together).

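For illustration, the FramebufferUpdateRequest that such a polling client sends is tiny - per the RFB specification (RFC 6143) it is message type 3, an incremental flag, and a big-endian x/y/width/height rectangle. A full-screen incremental request for an 800x600 display is just these 10 bytes:

```shell
# u8 type=3, u8 incremental=1, u16 x=0, u16 y=0, u16 w=800 (0x0320), u16 h=600 (0x0258)
printf '\003\001\000\000\000\000\003\040\002\130' | od -An -tx1
```

So the client-side "framerate" is simply how often it chooses to emit this message.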

Here is a description of FrameBufferUpdateRequest messages.
