Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me) on StackOverflow.
Original question: http://stackoverflow.com/questions/17961318/
Read Frames from RTSP Stream in Python
Asked by fmorstatter
I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Answer by synthesizerpatel
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules are available, I believe) for handling the motion detection.
Answer by vktec
Bit of a hacky solution, but you can use the VLC Python bindings (installable with pip install python-vlc) and play the stream:
import time
import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
while 1:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something similar for processing (just load the image file '.snapshot.tmp.png' into your processing library).
Answer by deepu
Reading frames from a video can be achieved using Python and OpenCV. Below is sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# The code below captures the video frames and saves them to a folder
# (in the current working directory)
dirname = 'myfolder'
os.makedirs(dirname, exist_ok=True)

# video path
cap = cv2.VideoCapture("TestVideo.mp4")
count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    # The received "frame" is saved. Or you can manipulate "frame" as per your needs.
    name = "rec_frame" + str(count) + ".jpg"
    cv2.imwrite(os.path.join(dirname, name), frame)
    count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Answer by Pradeep Singh Chauhan
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
and then you can capture frames. For the OpenCV documentation, visit: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Answer by venkat
Using the same method listed by "deepu" worked perfectly for me. I just replaced the video file with the RTSP URL of an actual camera. The example below worked on an AXIS IP camera. (This was not working for a while in previous versions of OpenCV; it works on OpenCV 3.4.1, Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:[email protected]:554/axis-media/media.amp")
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Answer by El Sampsa
Here is yet one more option
It's much more complicated than the other answers. :-O
But this way, with just one connection to the camera, you could "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it into multicast, write it to disk, etc.
.. of course, just in the case you would need something like that (otherwise you'd prefer the earlier answers)
Let's create two independent python programs:
(1) Server program (rtsp connection, decoding) server.py
(2) Client program (reads frames from shared memory) client.py
The server must be started before the client, i.e.
python3 server.py
And then in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done each 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000
# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4
# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()
livethread = LiveThread("livethread")

ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:[email protected]", 1, av_in_filter)

avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# all those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()

print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4
# This identifies posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you got interested, check out some more in https://elsampsa.github.io/valkka-examples/