
I have three Python files: glob_var.py, read_cam.py and read_globVar.py. Their contents are as below:

glob_var.py:

globVar = {}
def set(name, val):
    globVar[name] = val

def get(name):
    val = globVar.get(name, None)
    return val

read_cam.py:

import cv2
import glob_var

if __name__ == '__main__':
    cam = cv2.VideoCapture(0)
    key = 0
    while key != 27:
        ret, img = cam.read()
        cv2.imshow('img', img)

        key = cv2.waitKey(1) & 0xFF
        glob_var.set('image', img)

read_globVar.py:

import glob_var
import cv2
from time import sleep

if __name__ == '__main__':
    key = 0
    while key != 27:
        img = glob_var.get('image')
        if img is None:
            print(f"no image in globVar")
            sleep(1)
            continue

        print(f"read image with shape {img.shape}")
        cv2.imshow('image', img)
        key = cv2.waitKey(1) & 0xFF

From those three Python files, I think you can see what I want to do. Yes, I want read_cam.py to read images from the camera and broadcast them to a global variable, so that read_globVar.py can get the image and show it. I run read_cam.py in one terminal and read_globVar.py in another, but I could not make it work properly. Is what I am thinking possible? How can I manage it? Thanks a lot!

=====Update 1: Pub and Sub in Python=====
I have used ROS (Robot Operating System) for a while. It provides pub and sub functions to exchange variables between different programs, or so-called nodes. So my question is: is there any package in Python that provides such functionality? Redis provides this, but is it the fastest or best way?
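For reference, the pub/sub pattern this update asks about looks roughly like this in the redis-py package (a minimal sketch, with a hypothetical channel name 'camera'):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# Publisher side: push a message to everyone subscribed to 'camera'
r.publish('camera', b'frame bytes here')

# Subscriber side: block on the channel and handle messages as they arrive
p = r.pubsub()
p.subscribe('camera')
for message in p.listen():
    if message['type'] == 'message':
        data = message['data']  # the published bytes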

ToughMind
  • Programs don't share memory. Variables are global across all functions and modules, but only within one program. One program can save values to a file and another can read from that file. Or they can use the same database. They can also use sockets to send data from one program to another. Or they can use an external program, like Celery or some queue, to send data from one program to another. – furas Jul 09 '19 at 02:39
  • If you use a file to share data, you can have a problem when the second program has no time to read all the data from the file before the first program writes new data over the old data. A queue doesn't have this problem, e.g. [RabbitMQ](https://www.rabbitmq.com/tutorials/tutorial-two-python.html). – furas Jul 09 '19 at 02:47
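As the comment above suggests, a message queue hands each frame to the reader exactly once. A minimal sketch of that idea with the pika client (hypothetical queue name 'frames'; assumes a RabbitMQ server running on localhost):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue='frames')

# Producer side: one message per frame
ch.basic_publish(exchange='', routing_key='frames', body=b'frame bytes')

# Consumer side: the callback fires once per queued message
def on_frame(channel, method, properties, body):
    print(f"got {len(body)} bytes")

ch.basic_consume(queue='frames', on_message_callback=on_frame, auto_ack=True)
ch.start_consuming()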

3 Answers


You could use Redis to do this. It is a very fast, in-memory data structure server that can store strings, integers, hashes, lists, queues, sets and sorted sets, and therefore images serialized as bytes. It is free and simple to install on macOS, Linux and Windows.

Also, you can read or write Redis values with bash, Python, PHP, C/C++ or many other languages. Furthermore, you can read or write to or from a server across the network or across the world; just change the IP address in the initial connection. So, effectively, you could acquire images in Python on your Raspberry Pi under Linux and store and process them on your PC under Windows in C/C++.

Then you just put your images into Redis, named as Camera1 or Entrance, or put them in a sorted set so you can buffer images by frame number. You can also give images (or other data structures) a "Time-To-Live" so that your RAM doesn't fill up.
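For the Time-To-Live part, redis-py's set() accepts an expiry in seconds, so a stale frame evicts itself (a one-line sketch, reusing the key name above and the encoded bytes built in toRedis() below):

# Keep each frame for at most 2 seconds so RAM doesn't fill up
r.set('Camera1', encoded, ex=2)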

Here are the bones of your code, roughly rewritten to use Redis. There is no serious error checking or flexibility built in for the moment, but it all runs fine.

Here is read_cam.py:

#!/usr/bin/env python3

import cv2
import struct
import redis
import numpy as np

def toRedis(r, a, n):
    """Store given Numpy array 'a' in Redis under key 'n'"""
    h, w = a.shape[:2]
    shape = struct.pack('>II', h, w)
    encoded = shape + a.tobytes()

    # Store encoded data in Redis
    r.set(n, encoded)
    return

if __name__ == '__main__':

    # Redis connection
    r = redis.Redis(host='localhost', port=6379, db=0)

    cam = cv2.VideoCapture(0)
    key = 0
    while key != 27:
        ret, img = cam.read()
        if not ret:
            continue
        cv2.imshow('img', img)

        key = cv2.waitKey(1) & 0xFF
        toRedis(r, img, 'image')

And here is read_globvar.py:

#!/usr/bin/env python3

import cv2
from time import sleep
import struct
import redis
import numpy as np

def fromRedis(r, n):
    """Retrieve Numpy array from Redis key 'n'"""
    encoded = r.get(n)
    h, w = struct.unpack('>II', encoded[:8])
    a = np.frombuffer(encoded, dtype=np.uint8, offset=8).reshape(h, w, 3)
    return a

if __name__ == '__main__':
    # Redis connection
    r = redis.Redis(host='localhost', port=6379, db=0)

    key = 0
    while key != 27:
        img = fromRedis(r,'image')

        print(f"read image with shape {img.shape}")
        cv2.imshow('image', img)
        key = cv2.waitKey(1) & 0xFF

Note that you could equally store the image height and width in a JSON object and store that in Redis, instead of the struct.pack and struct.unpack stuff I did.
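Such a JSON variant might look like this (a minimal, untested sketch; the fixed-width header keeps the shape and the pixels in one value so they cannot get out of step):

import json
import numpy as np

def toRedisJSON(r, a, n):
    """Store Numpy array 'a' under key 'n' with a 64-byte JSON header"""
    header = json.dumps({'h': a.shape[0], 'w': a.shape[1]}).encode().ljust(64)
    r.set(n, header + a.tobytes())

def fromRedisJSON(r, n):
    """Retrieve a Numpy array stored by toRedisJSON()"""
    encoded = r.get(n)
    meta = json.loads(encoded[:64])
    return np.frombuffer(encoded, dtype=np.uint8, offset=64).reshape(meta['h'], meta['w'], 3)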

Note too that you could encode your image as a JPEG in memory and store the JPEG in Redis (instead of a Numpy array), which might save memory and network bandwidth.
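That JPEG variant is a few lines with OpenCV's in-memory codecs; a JPEG also records its own dimensions, so the struct header goes away entirely (again a rough sketch):

import cv2
import numpy as np

# Writer side: JPEG-encode the frame in memory before storing it
ok, jpg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
if ok:
    r.set('image', jpg.tobytes())

# Reader side: decode the stored bytes back to a BGR Numpy array
data = r.get('image')
img = cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)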

Either way, the concept of using Redis is the same.

Mark Setchell
  • Thank you for your solution, sir. I want to do this to save time. I think running it the Redis way is faster than running it serially in one program. Am I right? – ToughMind Jul 10 '19 at 01:18
  • You can write the system time at acquisition into each video frame and record the two displays and playback in slow motion to measure the latency/lag. Or you can store the time with each frame as a number in Redis and subtract from the system time when displaying to find the lag. Or you can use a Redis LIST with LPUSH and BRPOP to put 100 frames in and get 100 frames out. You could also use Python multiprocessing and send the frames via a multiprocessing **Queue**. You didn't mention performance in your question, by the way. – Mark Setchell Jul 10 '19 at 07:11
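The LIST idea in the comment above is only a few calls in redis-py; a rough sketch, assuming frames encoded the same way as in toRedis() and a hypothetical key 'frames':

# Producer: push encoded frames onto the head of the list
r.lpush('frames', encoded)

# Consumer: block until a frame is available, then pop from the tail (FIFO)
_key, encoded = r.brpop('frames')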

You can use a shared array from Python's multiprocessing module to share large volumes of data between processes quickly. Unlike my Redis answer, I don't have any completed, tested code for this one, but I have enough to hopefully get you started.

So you would use:

from multiprocessing import Process, Queue
from multiprocessing.sharedctypes import Array
from ctypes import c_uint8
import numpy as np

Then in your main, you would declare a large Array, probably big enough for say 2-4 of your large images:

bufShape = (1080, 1920, 3)  # 1080p

and

# Create zeroed out shared array
buffer = Array(c_uint8, bufShape[0] * bufShape[1] * bufShape[2])
# Make into numpy array
buf_arr = np.frombuffer(buffer.get_obj(), dtype=c_uint8)
buf_arr.shape = bufShape

# Create a list of workers
workers = [Worker(1, buffer, str(i)) for i in range(2)]

# Start the workers
for worker in workers:
    worker.start()

Then you would derive your workers from the Process class like this:

class Worker(Process):
    def __init__(self, q_size, buffer, name=''):
        super().__init__()
        self.queue = Queue(q_size)
        self.buffer = buffer
        self.name = name

    def run(self,):
        buf_arr = np.frombuffer(self.buffer.get_obj(), dtype=c_uint8)
        buf_arr.shape = bufShape
        while True:
            item = self.queue.get()
            ...

You can see at the start of run() that the worker just makes a Numpy array from the big shared buffer, so the worker is reading what the main program is writing. Hopefully you synchronise it so that, while main is writing frames 2-4, a worker is reading frame 1.

Then, hopefully, you can see that the main program can tell a worker that there is a frame of data by writing a simple frame index into the worker's queue (rather than sending the whole frame itself), using:

worker.queue.put(i)
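Putting those pieces together, a writer loop might look like this (a sketch only, reusing buf_arr, bufShape and workers from the snippets above; note there is no locking here, so a worker may catch a half-written frame):

import cv2

cam = cv2.VideoCapture(0)
i = 0
while True:
    ret, img = cam.read()
    if not ret:
        break
    # Copy the frame into the shared array, resized to fit it exactly
    buf_arr[:] = cv2.resize(img, (bufShape[1], bufShape[0]))
    # Tell the workers a frame is ready by sending just its index
    for worker in workers:
        worker.queue.put(i)
    i += 1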
Mark Setchell

I have written an example of how to share images using a memory-mapped file here: https://github.com/off99555/python-mmap-ipc

It's a feature that's already available in most languages. The basic idea is that we write the image to a virtual file and then read it in another process. The latency is around 3-4 ms, which is minimal compared to the latency inherent in the camera. This approach is faster than network protocols like TCP/IP, HTTP, etc. I've already tested gRPC and ZeroMQ; they are all slower than the memory-mapped file approach.
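The core of the idea fits in a few lines (a sketch of the pattern, not the linked repo's code; it assumes a fixed 1080p BGR frame so the file size is known up front, and it does nothing about torn reads):

import mmap
import cv2
import numpy as np

H, W, C = 1080, 1920, 3
SIZE = H * W * C

# Writer process: pre-size a backing file and overwrite it with each frame
with open('/tmp/frame.buf', 'w+b') as f:
    f.truncate(SIZE)
    mm = mmap.mmap(f.fileno(), SIZE)
    cam = cv2.VideoCapture(0)
    while True:
        ret, img = cam.read()
        if not ret:
            break
        mm.seek(0)
        mm.write(cv2.resize(img, (W, H)).tobytes())

# Reader process: map the same file and reinterpret the bytes as a frame
with open('/tmp/frame.buf', 'r+b') as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    img = np.frombuffer(mm, dtype=np.uint8).reshape(H, W, C)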

off99555
  • This solution only works on the Windows system. – Hung Le Aug 23 '20 at 13:33
  • Linux should also have the memory-mapped file feature. I saw in the Python mmap docs that it supports Unix. – off99555 Aug 23 '20 at 14:29
  • Yeah, I know that. Python can fully support the memory-mapped file feature. However, I've tested with two separate files like your example code, and it literally did not work. However, if I put everything in one file, the data can be put into the memory-mapped file and read back properly. – Hung Le Aug 23 '20 at 16:37
  • Furthermore, I've tested with a Redis queue; the latency is approximately 20 ms for a 2K-resolution image. This is too slow for my application, which needs real-time processing. I think the memory-mapped file is a good alternative solution, but it lacks feature support on Linux :( – Hung Le Aug 23 '20 at 16:39
  • That's weird. A memory-mapped file should work across different processes. Maybe Unix supports mmap differently. You might need to check how to properly assign the file name of the memory-mapped file. That should be a question you can ask in another thread. – off99555 Aug 23 '20 at 18:16
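Following up on that thread: the usual pitfall is the naming. The tagname form of mmap is Windows-only, while on Linux and macOS both processes must open the same backing file by path (a hedged sketch; /dev/shm is a Linux-specific RAM-backed location, and SIZE is as in the sketch above):

import mmap

# Windows-only: anonymous map shared by tag name
# mm = mmap.mmap(-1, SIZE, tagname='Local\\frame')

# Portable on Linux/macOS: both processes open the same file by path
with open('/dev/shm/frame.buf', 'r+b') as f:
    mm = mmap.mmap(f.fileno(), SIZE)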