
I'm working on a network video streaming solution using a Raspberry Pi 3 B+ where low latency is key.

The first method I used was piping the stdout of raspivid into a netcat TCP stream:

# On the Raspberry:
raspivid -w 640 -h 480 --nopreview -t 0 -o - | nc 192.168.64.104 5000

# On the client:
nc -l -p 5000 | mplayer -nolirc -fps 60 -cache 1024 -

This method has fairly low latency and I was overall satisfied with the results.

However, I need to do some image processing on the client side, so I tried to replicate the method above in Python. I found a similar solution in the documentation of the picamera Python module:

On the Raspberry:

import io
import socket
import struct
import time
import picamera

# Connect a client socket to my_server:8000 (change my_server to the
# hostname of your server)
client_socket = socket.socket()
client_socket.connect(('my_server', 8000))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    camera = picamera.PiCamera()
    camera.resolution = (640, 480)
    # Start a preview and let the camera warm up for 2 seconds
    camera.start_preview()
    time.sleep(2)

    # Note the start time and construct a stream to hold image data
    # temporarily (we could write it directly to connection but in this
    # case we want to find out the size of each capture first to keep
    # our protocol simple)
    start = time.time()
    stream = io.BytesIO()
    for foo in camera.capture_continuous(stream, 'jpeg'):
        # Write the length of the capture to the stream and flush to
        # ensure it actually gets sent
        connection.write(struct.pack('<L', stream.tell()))
        connection.flush()
        # Rewind the stream and send the image data over the wire
        stream.seek(0)
        connection.write(stream.read())
        # If we've been capturing for more than 30 seconds, quit
        if time.time() - start > 30:
            break
        # Reset the stream for the next capture
        stream.seek(0)
        stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
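The length-prefixed protocol above can be exercised without a camera or a network. A minimal sketch using an in-memory buffer (the helper names pack_frame/read_frame are mine, not from the picamera docs):

```python
import io
import struct

def pack_frame(payload):
    # 4-byte little-endian length header followed by the payload bytes
    return struct.pack('<L', len(payload)) + payload

def read_frame(stream):
    # Read the header; a length of zero signals end-of-stream
    header = stream.read(struct.calcsize('<L'))
    (length,) = struct.unpack('<L', header)
    if length == 0:
        return None
    return stream.read(length)

# Round-trip two frames through a BytesIO standing in for the socket,
# the second one being the zero-length end marker
buf = io.BytesIO(pack_frame(b'\xff\xd8fake-jpeg\xff\xd9') + pack_frame(b''))
frame = read_frame(buf)
```

This makes it easy to unit-test the framing separately from the camera and socket code.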

On the client:

import io
import socket
import struct
import cv2
import numpy as np

server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Rewind the stream, open it as an image with opencv and do some
        # processing on it
        image_stream.seek(0)

        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)

        cv2.imshow("Frame", imagedisp)
        # imshow only renders when the event loop runs, so pump it briefly
        cv2.waitKey(1)
finally:
    connection.close()
    server_socket.close()

This method has much worse latency, and I'm trying to figure out why. Like the first method, it uses a TCP stream to send frames from a memory buffer.
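One thing worth ruling out (an assumption on my part, not something from the picamera docs): Nagle's algorithm can hold back the small 4-byte length headers while waiting to coalesce writes, which adds per-frame delay. Disabling it on the sending socket is one line:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm so small writes (like the length header)
# are sent immediately instead of being buffered and coalesced
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```

The same option can be set on the accepted connection on the client/server side.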

The goal is just to get frames ready for processing with OpenCV on the client as fast as possible, so if anyone has a better way to achieve that than the one above, I would appreciate it if you'd share it.

Mihael

1 Answer


This is mainly from another post that I can't find right now, with the code modified a little. With this one you're looking at about 0.35 s on average to transfer each frame, which is still very bad compared to netcat but slightly better than the sequential capture code you mentioned. It also uses a socket, but instead of still pictures you deal with video frames:
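One caveat: pickling a raw OpenCV frame sends it uncompressed, so a 640x480 BGR frame is roughly 900 KB on the wire, versus a few tens of KB for a JPEG. A quick size check (numpy only, no camera needed):

```python
import pickle
import numpy as np

# A dummy 640x480 BGR frame, the shape cap.read() would return
frame = np.zeros((480, 640, 3), dtype=np.uint8)
payload = pickle.dumps(frame)
# Raw pixels alone are 480 * 640 * 3 = 921600 bytes;
# pickle adds a small header on top of that
print(len(payload))
```

That payload size, not the framing, may dominate the 0.35 s per frame on a typical LAN.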

server.py

import socket
import sys
import cv2
import pickle
import numpy as np
import struct ## new
import time

HOST='ip address'
PORT=8089

s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print ('Socket created')

s.bind((HOST,PORT))
print ('Socket bind complete')
s.listen(10)
print ('Socket now listening')

conn,addr=s.accept()

### new
counter=0
data = b''
payload_size = struct.calcsize("<L") 
while True:
    start=time.time()
    while len(data) < payload_size:
        data += conn.recv(8192)
    packed_msg_size = data[:payload_size]
    data = data[payload_size:]
    msg_size = struct.unpack("<L", packed_msg_size)[0]
    while len(data) < msg_size:
        data += conn.recv(8192)
    frame_data = data[:msg_size]
    data = data[msg_size:]
    ###

    frame=pickle.loads(frame_data)

    name='path/to/your/directory'+str(counter)+'.jpg'
    cv2.imwrite(name,frame)
    counter+=1
    end=time.time()
    print("time per frame: ", end-start)

=============

client.py

import cv2
import numpy as np
import socket
import sys
import pickle
import struct ### new code
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('server ip address',8089))
while True:
    ret,frame=cap.read()

    data = pickle.dumps(frame) ### new code
    clientsocket.sendall(struct.pack("<L", len(data))+data) ### new code

=============
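The framing used by server.py/client.py can be sanity-checked on one machine without a camera. A sketch (my own test harness, not from the original post) using socket.socketpair:

```python
import pickle
import socket
import struct

import numpy as np

# A connected pair of sockets standing in for client and server
client, server = socket.socketpair()

# --- client side: pickle a dummy frame and send it length-prefixed
frame = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
data = pickle.dumps(frame)
client.sendall(struct.pack("<L", len(data)) + data)

# --- server side: read the header, then exactly msg_size payload bytes
payload_size = struct.calcsize("<L")
buf = b''
while len(buf) < payload_size:
    buf += server.recv(8192)
msg_size = struct.unpack("<L", buf[:payload_size])[0]
buf = buf[payload_size:]
while len(buf) < msg_size:
    buf += server.recv(8192)
received = pickle.loads(buf[:msg_size])

client.close()
server.close()
```

If the round-trip works here, any remaining latency is in capture, serialization size, or the network rather than the framing logic.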