Generating movie from python without saving individual frames to files

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/4092927/


Generating movie from python without saving individual frames to files

Tags: python, numpy, ffmpeg, matplotlib, x264

Asked by Paul

I would like to create an h264 or divx movie from frames that I generate in a python script in matplotlib. There are about 100k frames in this movie.

In examples on the web [e.g. 1], I have only seen the method of saving each frame as a png and then running mencoder or ffmpeg on these files. In my case, saving each frame is impractical. Is there a way to take a plot generated from matplotlib and pipe it directly to ffmpeg, generating no intermediate files?

Programming with ffmpeg's C API is too difficult for me [e.g. 2]. Also, I need an encoding with good compression, such as x264, as the movie file will otherwise be too large for a subsequent step. So it would be great to stick with mencoder/ffmpeg/x264.

Is there something that can be done with pipes [3]?

[1] http://matplotlib.sourceforge.net/examples/animation/movie_demo.html

[2] How does one encode a series of images into H264 using the x264 C API?

[3] http://www.ffmpeg.org/ffmpeg-doc.html#SEC41

Accepted answer by tacaswell

This functionality is now (at least as of 1.2.0, maybe 1.1) baked into matplotlib via the MovieWriter class and its subclasses in the animation module. You also need to install ffmpeg in advance.

import matplotlib.animation as animation
import numpy as np
import matplotlib.pyplot as plt
from pylab import *


dpi = 100

def ani_frame():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    im = ax.imshow(rand(300,300),cmap='gray',interpolation='nearest')
    im.set_clim([0,1])
    fig.set_size_inches([5,5])


    tight_layout()


    def update_img(n):
        tmp = rand(300,300)
        im.set_data(tmp)
        return im

    #legend(loc=0)
    ani = animation.FuncAnimation(fig,update_img,300,interval=30)
    writer = animation.writers['ffmpeg'](fps=30)

    ani.save('demo.mp4',writer=writer,dpi=dpi)
    return ani

Documentation for animation

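As a side note for readers on current matplotlib: a MovieWriter can also be driven frame by frame without FuncAnimation, via its saving() context manager and grab_frame(), which answers the original question (no intermediate files) directly. A minimal sketch, assuming ffmpeg is installed and on the PATH:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # no GUI needed
import matplotlib.pyplot as plt
from matplotlib import animation

fig, ax = plt.subplots()
im = ax.imshow(np.random.rand(300, 300), cmap='gray', vmin=0, vmax=1)
ax.axis('off')

# Only attempt the encode if an ffmpeg binary is actually available.
if animation.writers.is_available('ffmpeg'):
    writer = animation.FFMpegWriter(fps=30)
    with writer.saving(fig, 'demo_grab.mp4', dpi=100):
        for _ in range(30):
            im.set_data(np.random.rand(300, 300))
            writer.grab_frame()  # pipes the current canvas to ffmpeg
```

This trades FuncAnimation's callback style for an explicit loop, which is often easier when each frame comes from a long-running computation.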
Answered by Paul

After patching ffmpeg (see Joe Kington's comments on my question), I was able to pipe PNGs to ffmpeg as follows:

import subprocess
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

outf = 'test.avi'
rate = 1

cmdstring = ('local/bin/ffmpeg',
             '-r', '%d' % rate,
             '-f','image2pipe',
             '-vcodec', 'png',
             '-i', 'pipe:', outf
             )
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

plt.figure()
frames = 10
for i in range(frames):
    plt.imshow(np.random.randn(100,100))
    plt.savefig(p.stdin, format='png')

# Close the pipe so ffmpeg sees EOF and finalizes the file
p.stdin.close()
p.wait()

It would not work without the patch, which trivially modifies two files and adds libavcodec/png_parser.c. I had to apply the patch to libavcodec/Makefile manually. Lastly, I removed '-number' from the Makefile to get the man pages to build. The compile options were:

FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
  built on Nov 30 2010 20:42:02 with gcc 4.2.1 (Apple Inc. build 5664)
  configuration: --prefix=/Users/paul/local_test --enable-gpl --enable-postproc --enable-swscale --enable-libxvid --enable-libx264 --enable-nonfree --mandir=/Users/paul/local_test/share/man --enable-shared --enable-pthreads --disable-indevs --cc=/usr/bin/gcc-4.2 --arch=x86_64 --extra-cflags=-I/opt/local/include --extra-ldflags=-L/opt/local/lib
  libavutil     50.15. 1 / 50.15. 1
  libavcodec    52.72. 2 / 52.72. 2
  libavformat   52.64. 2 / 52.64. 2
  libavdevice   52. 2. 0 / 52. 2. 0
  libswscale     0.11. 0 /  0.11. 0
  libpostproc   51. 2. 0 / 51. 2. 0

Answered by otterb

This is great! I wanted to do the same. But I could never compile the patched ffmpeg source (0.6.1) on Vista with a MingW32+MSYS+pr environment... png_parser.c produced Error 1 during compilation.

So I came up with a JPEG solution using PIL. Just put your ffmpeg.exe in the same folder as this script. This should work with an unpatched ffmpeg under Windows. I had to use the stdin.write method rather than the communicate method recommended in the official subprocess documentation. Note that the second -vcodec option specifies the encoding codec. The pipe is closed by p.stdin.close().

import subprocess
import numpy as np
from PIL import Image

rate = 1
outf = 'test.avi'

cmdstring = ('ffmpeg.exe',
             '-y',
             '-r', '%d' % rate,
             '-f','image2pipe',
             '-vcodec', 'mjpeg',
             '-i', 'pipe:', 
             '-vcodec', 'libxvid',
             outf
             )
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE, shell=False)

for i in range(10):
    im = Image.fromarray(np.uint8(np.random.randn(100,100)))
    p.stdin.write(im.tostring('jpeg','L'))
    #p.communicate(im.tostring('jpeg','L'))

p.stdin.close()
p.wait()
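On current Pillow, Image.tostring is no longer available (its replacement, tobytes, does not take an encoder argument, so the 'jpeg' trick above no longer works). An equivalent way to get JPEG bytes for the pipe on a modern install is to save into an in-memory stream; a sketch, assuming Pillow and numpy are installed:

```python
import io
import numpy as np
from PIL import Image

# Build one grayscale frame and JPEG-encode it in memory,
# as you would before writing it to p.stdin.
frame = (np.random.rand(100, 100) * 255).astype(np.uint8)
buf = io.BytesIO()
Image.fromarray(frame).save(buf, format='JPEG')
data = buf.getvalue()

# JPEG streams start with the SOI marker ff d8.
assert data[:2] == b'\xff\xd8'
```

In the loop, this collapses to Image.fromarray(frame).save(p.stdin, format='JPEG'), since Pillow can write to any file-like object, including a subprocess pipe.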

Answered by user621442

Converting to image formats is quite slow and adds dependencies. After looking at this page and others, I got it working by piping raw, unencoded buffers to mencoder (an ffmpeg solution is still wanted).

Details at: http://vokicodder.blogspot.com/2011/02/numpy-arrays-to-video.html

import subprocess

import numpy as np

class VideoSink(object):

    def __init__(self, size, filename="output", rate=10, byteorder="bgra"):
        self.size = size
        cmdstring = ('mencoder',
                     '/dev/stdin',
                     '-demuxer', 'rawvideo',
                     '-rawvideo', 'w=%i:h=%i' % size[::-1] + ':fps=%i:format=%s' % (rate, byteorder),
                     '-o', filename + '.avi',
                     '-ovc', 'lavc',
                     )
        self.p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE, shell=False)

    def run(self, image):
        assert image.shape == self.size
        # tostring() is deprecated in numpy; tobytes() writes the same raw buffer
        self.p.stdin.write(image.tobytes())

    def close(self):
        self.p.stdin.close()

I got some nice speedups.

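One thing worth checking before wiring this up: mencoder's rawvideo demuxer recovers frame boundaries purely from the declared geometry, so every write must be exactly width × height × channels bytes or the stream desynchronizes. A tiny sanity check (the helper name is mine, not from the answer):

```python
import numpy as np

def expected_frame_bytes(h, w, byteorder='bgra'):
    # One byte per channel; 'bgra' has 4 channels.
    return h * w * len(byteorder)

frame = np.zeros((240, 320, 4), dtype=np.uint8)  # h=240, w=320, bgra
assert len(frame.tobytes()) == expected_frame_bytes(240, 320)
```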
Answered by cxrodgers

These are all really great answers. Here's another suggestion. @user621442 is correct that the bottleneck is typically the writing of the image, so if you are writing png files to your video compressor, it will be pretty slow (even if you are sending them through a pipe instead of writing to disk). I found a solution using pure ffmpeg, which I personally find easier to use than matplotlib.animation or mencoder.

Also, in my case, I wanted to just save the image in an axis, instead of saving all of the tick labels, figure title, figure background, etc. Basically I wanted to make a movie/animation using matplotlib code, but not have it "look like a graph". I've included that code here, but you can make standard graphs and pipe them to ffmpeg instead if you want.

import matplotlib.pyplot as plt
import subprocess

# create a figure window that is the exact size of the image
# 400x500 pixels in my case
# don't draw any axis stuff ... thanks to @Joe Kington for this trick
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
f = plt.figure(frameon=False, figsize=(4, 5), dpi=100)
canvas_width, canvas_height = f.canvas.get_width_height()
ax = f.add_axes([0, 0, 1, 1])
ax.axis('off')

def update(frame):
    # your matplotlib code goes here
    pass

# Open an ffmpeg process
outf = 'ffmpeg.mp4'
cmdstring = ('ffmpeg', 
    '-y', '-r', '30', # overwrite, 30fps
    '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
    '-pix_fmt', 'argb', # format
    '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
    '-vcodec', 'mpeg4', outf) # output encoding
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

# Draw 1000 frames and write to the pipe
for frame in range(1000):
    # draw the frame
    update(frame)
    plt.draw()

    # extract the image as an ARGB string
    string = f.canvas.tostring_argb()

    # write to pipe
    p.stdin.write(string)

# Finish up
p.communicate()
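One portability note: tostring_argb hands back ARGB byte order, and not every ffmpeg build accepts -pix_fmt argb for rawvideo input. If yours only takes rgba, the channels can be reordered with numpy before writing to the pipe (the helper name and the rgba fallback are my addition, not part of the original answer):

```python
import numpy as np

def argb_to_rgba(buf, width, height):
    # View the byte string as (h, w, 4) and rotate A from the front to the back.
    arr = np.frombuffer(buf, dtype=np.uint8).reshape(height, width, 4)
    return np.roll(arr, -1, axis=2).tobytes()

# One ARGB pixel (A=ff, R=10, G=20, B=30) becomes RGBA 10 20 30 ff.
assert argb_to_rgba(b'\xff\x10\x20\x30', 1, 1) == b'\x10\x20\x30\xff'
```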

Answered by ch271828n

Here is a modified version of @tacaswell's answer. It makes the following changes:

  1. Do not require the pylab dependency
  2. Fix several places so that the function is directly runnable. (The original cannot be copied, pasted, and run without fixing several places.)

Thanks so much for @tacaswell's wonderful answer!!!

import matplotlib.animation as animation
import matplotlib.pyplot as plt
import numpy as np


def ani_frame():
    def gen_frame():
        return np.random.rand(300, 300)

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.set_aspect('equal')
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    im = ax.imshow(gen_frame(), cmap='gray', interpolation='nearest')
    im.set_clim([0, 1])
    fig.set_size_inches([5, 5])

    plt.tight_layout()

    def update_img(n):
        tmp = gen_frame()
        im.set_data(tmp)
        return im

    # legend(loc=0)
    ani = animation.FuncAnimation(fig, update_img, 300, interval=30)
    writer = animation.writers['ffmpeg'](fps=30)

    ani.save('demo.mp4', writer=writer, dpi=72)
    return ani