Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/30988033/
Sending live video frame over network in python opencv
Asked by atakanyenel
I'm trying to send live video frames that I capture with my camera to a server and process them. I'm using opencv for the image processing and python as the language. Here is my code
client_cv.py
import cv2
import numpy as np
import socket
import sys
import pickle
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
ret,frame=cap.read()
print sys.getsizeof(frame)
print frame
clientsocket.send(pickle.dumps(frame))
server_cv.py
import socket
import sys
import cv2
import pickle
import numpy as np
HOST=''
PORT=8089
s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'Socket created'
s.bind((HOST,PORT))
print 'Socket bind complete'
s.listen(10)
print 'Socket now listening'
conn,addr=s.accept()
while True:
data=conn.recv(80)
print sys.getsizeof(data)
frame=pickle.loads(data)
print frame
cv2.imshow('frame',frame)
This code gives me an end-of-file error, which is logical because the data keeps arriving at the server and pickle doesn't know when a message is finished. My searching on the internet led me to use pickle, but it hasn't worked so far.
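This failure mode is easy to reproduce without any sockets: handing pickle a truncated byte stream raises the same kind of error (a small illustrative sketch):

```python
import pickle

blob = pickle.dumps([1, 2, 3])
try:
    pickle.loads(blob[:5])   # deliberately truncated byte stream
except Exception as exc:     # typically UnpicklingError or EOFError
    print("unpickling failed:", type(exc).__name__)
```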
Note: I set conn.recv to 80 because that's the number I get when I print sys.getsizeof(frame).
Accepted answer by mguijarr
A few things:

- use sendall instead of send, since you're not guaranteed everything will be sent in one go
- pickle is ok for data serialization, but you have to make a protocol of your own for the messages you exchange between the client and the server; this way you can know in advance the amount of data to read for unpickling (see below)
- for recv you will get better performance if you receive big chunks, so replace 80 by 4096 or even more
- beware of sys.getsizeof: it returns the size of the object in memory, which is not the same as the size (length) of the bytes to send over the network; for a Python string the two values are not the same at all
- be mindful of the size of the frame you are sending. The code below supports a frame up to 65535 bytes; change "H" to "L" if you have a larger frame.
A protocol example:
client_cv.py
import cv2
import numpy as np
import socket
import sys
import pickle
import struct ### new code
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
ret,frame=cap.read()
data = pickle.dumps(frame) ### new code
clientsocket.sendall(struct.pack("H", len(data))+data) ### new code
server_cv.py
import socket
import sys
import cv2
import pickle
import numpy as np
import struct ## new
HOST=''
PORT=8089
s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'Socket created'
s.bind((HOST,PORT))
print 'Socket bind complete'
s.listen(10)
print 'Socket now listening'
conn,addr=s.accept()
### new
data = ""
payload_size = struct.calcsize("H")
while True:
while len(data) < payload_size:
data += conn.recv(4096)
packed_msg_size = data[:payload_size]
data = data[payload_size:]
msg_size = struct.unpack("H", packed_msg_size)[0]
while len(data) < msg_size:
data += conn.recv(4096)
frame_data = data[:msg_size]
data = data[msg_size:]
###
frame=pickle.loads(frame_data)
print frame
cv2.imshow('frame',frame)
You can probably optimize all this a lot (less copying, using the buffer interface, etc.), but at least you get the idea.
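As one sketch of the "less copying / buffer interface" idea, socket.recv_into with a memoryview fills a single preallocated buffer instead of concatenating many small byte strings (the helper name recv_exact is just illustrative):

```python
import socket

def recv_exact(conn, n):
    """Receive exactly n bytes, writing into one preallocated buffer
    instead of building the message by repeated concatenation."""
    buf = bytearray(n)
    view = memoryview(buf)
    while n:
        nread = conn.recv_into(view, n)
        if nread == 0:
            raise ConnectionError("socket closed before all bytes arrived")
        view = view[nread:]   # advance the window without copying
        n -= nread
    return bytes(buf)
```

With a helper like this, the header read becomes recv_exact(conn, payload_size) and the frame read recv_exact(conn, msg_size).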
Answered by Rohan Sawant
After months of searching the internet, this is what I came up with. I have neatly packaged it into classes, with unit tests and documentation, as SmoothStream. Check it out; it was the only simple and working version of streaming I could find anywhere.
I used this code and wrapped mine around it.
Viewer.py
import cv2
import zmq
import base64
import numpy as np
context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, np.unicode(''))
while True:
try:
frame = footage_socket.recv_string()
img = base64.b64decode(frame)
npimg = np.fromstring(img, dtype=np.uint8)
source = cv2.imdecode(npimg, 1)
cv2.imshow("Stream", source)
cv2.waitKey(1)
except KeyboardInterrupt:
cv2.destroyAllWindows()
break
Streamer.py
import base64
import cv2
import zmq
context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')
camera = cv2.VideoCapture(0) # init the camera
while True:
try:
grabbed, frame = camera.read() # grab the current frame
frame = cv2.resize(frame, (640, 480)) # resize the frame
encoded, buffer = cv2.imencode('.jpg', frame)
jpg_as_text = base64.b64encode(buffer)
footage_socket.send(jpg_as_text)
except KeyboardInterrupt:
camera.release()
cv2.destroyAllWindows()
break
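One trade-off to be aware of with this approach: base64 maps every 3 bytes to 4 ASCII characters, so each encoded JPEG is roughly a third larger on the wire. A quick check:

```python
import base64

raw = bytes(range(256)) * 4        # 1024 bytes of arbitrary binary data
text = base64.b64encode(raw)
print(len(raw), len(text))         # 1024 1368 -> about 33% larger
```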
Answered by Biranchi
I got it to work on my macOS.
I used the code from @mguijarr and changed the struct.pack from "H" to "L".
Server.py:
==========
import socket
import sys
import cv2
import pickle
import numpy as np
import struct ## new
HOST=''
PORT=8089
s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'Socket created'
s.bind((HOST,PORT))
print 'Socket bind complete'
s.listen(10)
print 'Socket now listening'
conn,addr=s.accept()
### new
data = ""
payload_size = struct.calcsize("L")
while True:
while len(data) < payload_size:
data += conn.recv(4096)
packed_msg_size = data[:payload_size]
data = data[payload_size:]
msg_size = struct.unpack("L", packed_msg_size)[0]
while len(data) < msg_size:
data += conn.recv(4096)
frame_data = data[:msg_size]
data = data[msg_size:]
###
frame=pickle.loads(frame_data)
print frame
cv2.imshow('frame',frame)
key = cv2.waitKey(10)
if (key == 27) or (key == 113):
break
cv2.destroyAllWindows()
Client.py:
==========
import cv2
import numpy as np
import socket
import sys
import pickle
import struct ### new code
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
ret,frame=cap.read()
data = pickle.dumps(frame) ### new code
clientsocket.sendall(struct.pack("L", len(data))+data) ### new code
Answered by nareddyt
I changed the code from @mguijarr to work with Python 3. Changes made to the code:
- data is now a byte literal instead of a string literal
- Changed "H" to "L" to send larger frame sizes. Based on the documentation, we can now send frames of up to 2^32 bytes instead of just 2^16.
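Worth noting: in native mode the size struct produces for "L" is platform-dependent (often 8 bytes on 64-bit Linux/macOS, 4 on Windows), so a client and server on different platforms can disagree about the header size. A standard-size format such as "!L" (network byte order) pins it to 4 bytes:

```python
import struct

print(struct.calcsize("H"))   # 2 -> lengths up to 2**16 - 1
print(struct.calcsize("!L"))  # 4 -> lengths up to 2**32 - 1, fixed size
print(struct.calcsize("L"))   # native mode: 4 or 8 depending on platform
```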
Server.py
import pickle
import socket
import struct
import cv2
HOST = ''
PORT = 8089
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket created')
s.bind((HOST, PORT))
print('Socket bind complete')
s.listen(10)
print('Socket now listening')
conn, addr = s.accept()
data = b'' ### CHANGED
payload_size = struct.calcsize("L") ### CHANGED
while True:
# Retrieve message size
while len(data) < payload_size:
data += conn.recv(4096)
packed_msg_size = data[:payload_size]
data = data[payload_size:]
msg_size = struct.unpack("L", packed_msg_size)[0] ### CHANGED
# Retrieve all data based on message size
while len(data) < msg_size:
data += conn.recv(4096)
frame_data = data[:msg_size]
data = data[msg_size:]
# Extract frame
frame = pickle.loads(frame_data)
# Display
cv2.imshow('frame', frame)
cv2.waitKey(1)
Client.py
import cv2
import numpy as np
import socket
import sys
import pickle
import struct
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
ret,frame=cap.read()
# Serialize frame
data = pickle.dumps(frame)
# Send message length first
message_size = struct.pack("L", len(data)) ### CHANGED
# Then data
clientsocket.sendall(message_size + data)
Answered by abhiTronix
I'm kind of late, but my powerful & threaded VidGear video-processing python library now provides the NetGear API, which is exclusively designed to transfer video frames synchronously between interconnecting systems over the network in real time. Here's an example:
A. Server End: (Bare-Minimum example)
Open your favorite terminal and execute the following python code:
Note: You can end streaming at any time, on both the server and client side, by pressing [Ctrl+C] on your keyboard at the server end!
# import libraries
from vidgear.gears import VideoGear
from vidgear.gears import NetGear
stream = VideoGear(source='test.mp4').start() #Open any video stream
server = NetGear() #Define netgear server with default settings
# infinite loop until [Ctrl+C] is pressed
while True:
try:
frame = stream.read()
# read frames
# check if frame is None
if frame is None:
#if True break the infinite loop
break
# do something with frame here
# send frame to server
server.send(frame)
except KeyboardInterrupt:
#break the infinite loop
break
# safely close video stream
stream.stop()
# safely close server
server.close()
B. Client End: (Bare-Minimum example)
Then open another terminal on the same system, execute the following python code, and see the output:
# import libraries
from vidgear.gears import NetGear
import cv2
#define netgear client with `receive_mode = True` and default settings
client = NetGear(receive_mode = True)
# infinite loop
while True:
# receive frames from network
frame = client.recv()
# check if frame is None
if frame is None:
#if True break the infinite loop
break
# do something with frame here
# Show output window
cv2.imshow("Output Frame", frame)
key = cv2.waitKey(1) & 0xFF
# check for 'q' key-press
if key == ord("q"):
#if 'q' key-pressed break out
break
# close output window
cv2.destroyAllWindows()
# safely close client
client.close()
More advanced usage and related docs can be found here: https://github.com/abhiTronix/vidgear/wiki/NetGear
Answered by Phonix
Recently I published the imagiz package for fast, non-blocking live video streaming over the network with OpenCV and ZMQ.
https://pypi.org/project/imagiz/
Client:
import imagiz
import cv2
client=imagiz.Client("cc1",server_ip="localhost")
vid=cv2.VideoCapture(0)
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
while True:
r,frame=vid.read()
if r:
r, image = cv2.imencode('.jpg', frame, encode_param)
client.send(image)
else:
break
Server:
import imagiz
import cv2
server=imagiz.Server()
while True:
message=server.recive()
frame=cv2.imdecode(message.image,1)
cv2.imshow("",frame)
cv2.waitKey(1)
Answered by Anagnostou John
As @Rohan Sawant said, I used the zmq library, but without base64 encoding. Here is the new code.
Streamer.py
import base64
import cv2
import zmq
import numpy as np
import time
context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://192.168.1.3:5555')
camera = cv2.VideoCapture(0) # init the camera
while True:
try:
grabbed, frame = camera.read() # grab the current frame
frame = cv2.resize(frame, (640, 480)) # resize the frame
encoded, buffer = cv2.imencode('.jpg', frame)
footage_socket.send(buffer)
except KeyboardInterrupt:
camera.release()
cv2.destroyAllWindows()
break
Viewer.py
import cv2
import zmq
import base64
import numpy as np
context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, np.unicode(''))
while True:
try:
frame = footage_socket.recv()
npimg = np.frombuffer(frame, dtype=np.uint8)
#npimg = npimg.reshape(480,640,3)
source = cv2.imdecode(npimg, 1)
cv2.imshow("Stream", source)
cv2.waitKey(1)
except KeyboardInterrupt:
cv2.destroyAllWindows()
break