C++ OpenCV VideoCapture lag due to the capture buffer

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/30032063/


OpenCV VideoCapture lag due to the capture buffer

Tags: c++, opencv, video

Asked by Nyaruko

I am capturing video through a webcam which gives an MJPEG stream. I do the video capture in a worker thread. I start the capture like this:


const std::string videoStreamAddress = "http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg";
qDebug() << "start";
cap.open(videoStreamAddress);
qDebug() << "really started";
cap.set(CV_CAP_PROP_FRAME_WIDTH, 720);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 576);

The camera feeds the stream at 20 fps. But if I do the reading at 20 fps like this:


if (!cap.isOpened()) return;

Mat frame;
cap >> frame; // get a new frame from camera
mutex.lock();
m_imageFrame = frame;
mutex.unlock();

Then there is a 3+ second lag. The reason is that the captured video is first stored in a buffer. When I first start the camera, the buffer accumulates frames, but I do not read them out, so reading from the buffer always gives me old frames. The only solution I have now is to read the buffer at 30 fps so it drains quickly and there is no serious lag anymore.


Is there any other possible solution so that I could clean/flush the buffer manually each time I start the camera?


Answered by Maarten Bamelis

OpenCV Solution


According to this source, you can set the buffer size of a cv::VideoCapture object.


cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames

// rest of your code...

There is an important limitation, however:


CV_CAP_PROP_BUFFERSIZE: Amount of frames stored in internal buffer memory (note: only supported by the DC1394 v 2.x backend currently)


Update from the comments. In newer versions of OpenCV (3.4+), the limitation seems to be gone and the code uses scoped enumerations:


cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
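Because support for this property depends on the capture backend, it can be worth verifying that the value was actually accepted. A minimal sketch of such a check, assuming only the standard VideoCapture::set/get API (the device index 0 is just a placeholder for your own stream URL):

#include <iostream>
#include <opencv2/videoio.hpp>

int main() {
    cv::VideoCapture cap(0); // placeholder; use your MJPEG/RTSP URL or device here

    // set() returns false when the backend rejects the property outright;
    // reading the value back is an extra sanity check, since some backends
    // silently ignore it.
    const bool accepted = cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
    const double reported = cap.get(cv::CAP_PROP_BUFFERSIZE);
    std::cout << "set accepted: " << accepted
              << ", reported buffer size: " << reported << std::endl;
    return 0;
}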


Hackaround 1


If this solution does not work, take a look at this post, which explains how to hack around the issue.


In a nutshell: the time needed to query a frame is measured; if it is too low, it means the frame was read from the buffer and can be discarded. Continue querying frames until the time measured exceeds a certain limit. When this happens, the buffer was empty and the returned frame is up to date.


(The answer on the linked post shows that returning a frame from the buffer takes about 1/8th the time of returning an up-to-date frame. Your mileage may vary, of course!)

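A minimal sketch of that idea; the 5 ms threshold and the iteration cap are assumptions that would need tuning for a real camera:

#include <chrono>
#include <opencv2/videoio.hpp>

// Frames that come back almost instantly were sitting in the buffer and are
// discarded; the first frame that takes noticeably longer had to wait for the
// camera, so it is close to live.
cv::Mat readLatestFrame(cv::VideoCapture& cap) {
    using namespace std::chrono;
    cv::Mat frame;
    for (int i = 0; i < 100; ++i) { // safety bound instead of looping forever
        const auto start = steady_clock::now();
        cap >> frame;
        const auto elapsed = duration_cast<milliseconds>(steady_clock::now() - start);
        if (elapsed.count() > 5) {  // slower than a buffered read => fresh frame
            break;
        }
    }
    return frame;
}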



Hackaround 2


A different solution, inspired by this post, is to create a third thread that grabs frames continuously at high speed to keep the buffer empty. This thread should use cv::VideoCapture::grab() to avoid overhead.


You could use a simple spin-lock to synchronize reading frames between the real worker thread and the third thread.

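A rough sketch of such a grabber thread, assuming a simple std::atomic_flag spin lock; the function and variable names are illustrative, not from the original post:

#include <atomic>
#include <opencv2/videoio.hpp>

std::atomic_flag capture_lock = ATOMIC_FLAG_INIT;
std::atomic<bool> running{true};

// Runs in its own thread: grab() is cheap (no decoding), so calling it in a
// tight loop keeps the driver's buffer drained.
void grabberLoop(cv::VideoCapture& cap) {
    while (running) {
        while (capture_lock.test_and_set(std::memory_order_acquire)) {} // spin
        cap.grab();
        capture_lock.clear(std::memory_order_release);
    }
}

// Called from the worker thread whenever it actually needs a frame:
// retrieve() decodes the most recently grabbed image.
bool readLatest(cv::VideoCapture& cap, cv::Mat& frame) {
    while (capture_lock.test_and_set(std::memory_order_acquire)) {} // spin
    const bool ok = cap.retrieve(frame);
    capture_lock.clear(std::memory_order_release);
    return ok;
}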

Answered by Ivan Talalaev

Guys, this is a pretty stupid and nasty solution, but the accepted answer didn't help me for some reason. (The code is in Python, but the essence is pretty clear.)


import cv2
import matplotlib.pyplot as plt
import numpy as np

# vcap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
data = np.zeros((1140, 2560))
image = plt.imshow(data)

while True:
    # Reopen the stream every iteration so no stale frames pile up in the buffer.
    vcap = cv2.VideoCapture("rtsp://admin:@192.168.3.231")
    ret, frame = vcap.read()
    image.set_data(frame)
    plt.pause(0.5) # any other consuming operation
    vcap.release()
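Note that reopening the stream on every iteration trades the buffering lag for reconnection overhead, so this presumably only makes sense when a frame is needed every few hundred milliseconds or slower, as in the plt.pause(0.5) example above.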

Answered by emu

You can make sure that grabbing the frame takes a bit of time. It is quite simple to code, though a bit unreliable; potentially, this code could lead to a deadlock.


#include <chrono>

using Clock = std::chrono::high_resolution_clock;
using FloatSeconds = std::chrono::duration<float>;
// ...
while (true) {
    const Clock::time_point time_start = Clock::now();
    camera.grab();
    // A grab that took longer than half a frame period had to wait for the
    // camera, so the buffer is empty and the grabbed frame is up to date.
    const float elapsed = std::chrono::duration_cast<FloatSeconds>(Clock::now() - time_start).count();
    if (elapsed * camera.get(cv::CAP_PROP_FPS) > 0.5) {
        break;
    }
}
camera.retrieve(dst_image);

The code uses C++11.


Answered by bartolo-otrit

There is an option to drop old buffers if you use a GStreamer pipeline. The appsink drop=true option "drops old buffers when the buffer queue is filled". In my particular case, there is a delay (from time to time) during the live-stream processing, so the latest frame needs to be fetched on each VideoCapture.read call.


#include <chrono>
#include <thread>

#include <opencv4/opencv2/highgui.hpp>

static constexpr const char * const WINDOW = "1";

void video_test() {
    // It doesn't work properly without `drop=true` option
    cv::VideoCapture video("v4l2src device=/dev/video0 ! videoconvert ! videoscale ! videorate ! video/x-raw,width=640 ! appsink drop=true", cv::CAP_GSTREAMER);

    if(!video.isOpened()) {
        return;
    }

    cv::namedWindow(
        WINDOW,
        cv::WINDOW_GUI_NORMAL | cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO
    );
    cv::resizeWindow(WINDOW, 700, 700);

    cv::Mat frame;
    const std::chrono::seconds sec(1);
    while(true) {
        if(!video.read(frame)) {
            break;
        }
        std::this_thread::sleep_for(sec);
        cv::imshow(WINDOW, frame);
        cv::waitKey(1);
    }
}

Answered by Rodrigo Morimoto Suguiura

If you know the frame rate of your camera, you can use that information (e.g. 30 frames per second) to keep grabbing frames until the grab becomes slower. This works because once the grab call is delayed (i.e. it takes longer than one frame period), it means you have consumed every frame in the buffer and OpenCV has to wait for the next frame to arrive from the camera.


import time

# 'vid' is assumed to be an already-opened cv2.VideoCapture
while True:
    prev_time = time.time()
    ref = vid.grab()
    if (time.time() - prev_time) > 0.030:  # something around 33 FPS
        break
ret, frame = vid.retrieve()