
I am struggling to find a solution for this:

I'm trying to build an image-streaming system where I can capture all the frames and pass them through a neural network, but so far I haven't managed to get proper base64 image strings out of my functions below. The code works perfectly if I just display the decoded image from the stream directly, instead of passing it through my functions that convert it to base64, read it back in memory, and have cv2 show it.

The server functions responsible for encoding and decoding base64 are shown below:

Convert the image object from the stream into base64 bytes and then into a single string (this part works as intended):

def convertImgBase64(image):
    try:
        imgString = base64.b64encode(image).decode('utf-8')
        print('converted successfully')
        return imgString
    except Exception as err:  # os.error would miss encoding errors
        print(f"Error: '{err}'")

Base64 decoder that should produce a readable, cv2-compatible frame (this is where the error begins):

def readb64(base64_string):
    storage = '/home/caio/Desktop/img/'
    try:
        sbuf = BytesIO()
        sbuf.write(base64.b64decode(str(base64_string)))
        pimg = im.open(sbuf)
        # rewind before reading back, otherwise the file is written empty
        sbuf.seek(0)
        with open('arq.jpeg', 'wb') as out:
            out.write(sbuf.read())
        print('read the b64 string')
        return cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)
    except Exception as err:
        print(f"Error: '{err}'")
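For what it's worth, the base64 round trip itself is lossless on raw bytes, so that step can be ruled out with a quick stdlib-only check (the buffer here is arbitrary bytes standing in for a JPEG):

```python
import base64

# arbitrary binary data standing in for a jpg buffer
jpg_bytes = bytes(range(256))

# encode to a utf-8 string and decode back, as the functions above do
as_string = base64.b64encode(jpg_bytes).decode('utf-8')
round_trip = base64.b64decode(as_string)

assert round_trip == jpg_bytes  # byte-for-byte identical
```

If this assertion passes (it should), any corruption is happening somewhere other than the base64 conversion itself.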

This is the server I am currently building; before proceeding I need to get the frame capture working correctly.

from io import BytesIO, StringIO
import numpy as np
import cv2
from imutils.video import FPS
import imagezmq
import base64
import darknet
import os
from PIL import Image as im
from numpy import asarray
from time import sleep

#imagezmq protocol receiver from client
image_hub = imagezmq.ImageHub() 

def convertImgBase64(image):
    try:
        imgString = base64.b64encode(image).decode('utf-8')
        return imgString
    except Exception as err:
        print(f"Error: '{err}'")

def readb64(base64_string):
    try:
        sbuf = BytesIO()
        sbuf.write(base64.b64decode(str(base64_string)))
        pimg = im.open(sbuf)
        return cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)
    except Exception as err:
        print(f"Error: '{err}'")

def capture_img():
    while True:
        camera, jpg_buffer = image_hub.recv_jpg()
        buffer = np.frombuffer(jpg_buffer, dtype='uint8')
        imagedecoder = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
        img = im.fromarray(imagedecoder)
        try:
            string = convertImgBase64(imagedecoder)
            cvimg = readb64(string)
            #cv2.imshow(camera, cvimg)  # this is the line that does not work!
        except Exception as err:
            print(f"Error: '{err}'")

        cv2.imshow(camera, imagedecoder)
        cv2.waitKey(1)  # cv2 won't display without this

        image_hub.send_reply(b'OK')  # imageZMQ needs an acknowledgement that it's OK

The client code (running on a Raspberry Pi) is given below:

import sys

import socket
import time
import cv2
from imutils.video import VideoStream
import imagezmq
import argparse

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--server-ip", required=True,
    help="ip address of the server to which the client will connect")
args = vars(ap.parse_args())
# initialize the ImageSender object with the socket address of the
# server
sender = imagezmq.ImageSender(connect_to="tcp://{}:5555".format(
    args["server_ip"]))
# use either of the formats below to specify the address of the display computer
# sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.190:5555')

rpi_name = socket.gethostname()  # send RPi hostname with each image
vs = VideoStream(usePiCamera=True, resolution=(800, 600)).start()
time.sleep(2.0)  # allow camera sensor to warm up
jpeg_quality = 95  # 0 to 100, higher is better quality, 95 is cv2 default
while True:  # send images as stream until Ctrl-C
    image = vs.read()
    ret_code, jpg_buffer = cv2.imencode(
        ".jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    sender.send_jpg(rpi_name, jpg_buffer)

The error output I currently get is just: Server error

I have been trying solutions from here and here.

If you know a better way to pass an image object that I can process inside the YOLO/darknet neural network, that would be awesome!

Thanks!

CaioVC
  • please reduce your code to a "minimal reproducible example" – Christoph Rackwitz Jun 27 '21 at 20:03
  • you ought to take a look at `ImageZMQ`, a proper look. your code should **not** contain **any** mention of base64 or `utf-8`. you are handling binary data. utf-8 has absolutely no place here. – Christoph Rackwitz Jun 27 '21 at 23:56
  • So how can I make this work then? I've tried everything with no success. I manage to get the base64 as a string, but somehow cv2 does not recognise it as a regular base64 string. I even added a 'header = Data,jpeg/base64:...' but no success; I don't know what to do. – CaioVC Jun 28 '21 at 00:39
  • ImageZMQ has examples. look at example 3 if you absolutely require jpeg compression. it's a lot simpler without that. – Christoph Rackwitz Jun 28 '21 at 00:58
  • Actually I don't need jpg compression, as I'm just testing it; I just wanted to treat my imagezmq stream like a cv2.VideoCapture() so I can pass it through my custom YOLOv4 darknet framework... I'm open to suggestions lol – CaioVC Jun 28 '21 at 01:33

1 Answer


The comments provided by @Christoph Rackwitz are correct. ImageZMQ is designed to send and receive OpenCV images WITHOUT any base64 encoding. The ImageSender class sends OpenCV images; the ImageHub class receives them. Optionally, ImageZMQ can send a jpg buffer instead (as your Raspberry Pi client code is doing).

Your Raspberry Pi client code is based on the ImageZMQ "send jpg" example.

Your server code should therefore use the matching ImageZMQ "receive jpg" example.

The essence of the ImageZMQ "receive jpg" example code is:

import numpy as np
import cv2
import imagezmq

image_hub = imagezmq.ImageHub()
while True:  # show streamed images until Ctrl-C
    rpi_name, jpg_buffer = image_hub.recv_jpg()
    image = cv2.imdecode(np.frombuffer(jpg_buffer, dtype='uint8'), -1)
    # see opencv docs for info on -1 parameter
    cv2.imshow(rpi_name, image)  # 1 window for each RPi
    cv2.waitKey(1)
    image_hub.send_reply(b'OK')

No base64 decoding required. The variable image already contains an OpenCV image. (FYI, I am the author of ImageZMQ)
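Since the comments note that jpg compression isn't actually needed here, the uncompressed variant is even simpler. A sketch, adapted from ImageZMQ's basic send/receive examples (the address is an example; this needs real hardware and a network to run). Client side:

```python
import socket
import time
from imutils.video import VideoStream
import imagezmq

# client: send raw OpenCV frames, no jpg step at all
sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.190:5555')  # example address
rpi_name = socket.gethostname()
vs = VideoStream(usePiCamera=True, resolution=(800, 600)).start()
time.sleep(2.0)  # allow camera sensor to warm up
while True:
    sender.send_image(rpi_name, vs.read())
```

Server side:

```python
import cv2
import imagezmq

# server: recv_image yields the OpenCV array directly
image_hub = imagezmq.ImageHub()
while True:
    rpi_name, image = image_hub.recv_image()
    cv2.imshow(rpi_name, image)  # or pass image straight to your darknet pipeline
    cv2.waitKey(1)
    image_hub.send_reply(b'OK')
```

With send_image/recv_image there is no imencode/imdecode step at all; the frame arrives as a numpy array ready for display or inference.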

Jeff Bass