
I'm trying to send an OpenCV image in JSON and receive it on the other end, but I'm running into endless problems encoding and decoding the image.

I send it in JSON in the following way:

dumps({"image": b64encode(image[y1:y2, x1:x2]).decode('utf-8')})

On the other end I try to decode it (I need it as a Pillow image):

image = Image.open(BytesIO(base64.b64decode(data['image'])))

But I'm getting the exception `cannot identify image file <_io.BytesIO object at 0x7fbd34c98a98>`

Also tried:

nparr = np.fromstring(b64decode(data['image']), np.uint8)
image = cv2.imdecode(nparr, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(image)

But then I get `'NoneType' object has no attribute '__array_interface__'` coming from `Image.fromarray`

Any ideas what I'm doing wrong?

K41F4r
  • You need to be a bit clearer about exactly what you are trying to send and why you want to send it as JSON. Do you want to send just the bytes corresponding to the pixels, or a Numpy array, or do you want to send a JPEG or PNG compressed image? Maybe you could share a more complete piece of code for each end - send and receive? – Mark Setchell Apr 28 '19 at 19:36
  • I'm taking a rectangle out of an OpenCV image using the following `image[y1:y2, x1:x2]` (so it should be a numpy array, I think) and I want to send it over in JSON. I don't think it's a good idea to add unrelated code to this; it would just make the question less clear than it is now – K41F4r Apr 28 '19 at 21:23
  • Ok, if you are sending a Numpy array across, you'll need this at the receiving end `PILimage = Image.fromarray(NumpyArray)` – Mark Setchell Apr 28 '19 at 21:37
  • Already tried, I'm thinking I might be sending the image wrong – K41F4r Apr 28 '19 at 22:03

1 Answer


Hopefully, this should get you started. I think that what you tried, sending the unadorned bytes from the Numpy array, probably won't work because the receiver will not know the width, height and number of channels in the image, so I used pickle to store that information.

#!/usr/bin/env python3

import cv2
import numpy as np
import base64
import json
import pickle
from PIL import Image

def im2json(im):
    """Convert a Numpy array to JSON string"""
    imdata = pickle.dumps(im)
    jstr = json.dumps({"image": base64.b64encode(imdata).decode('ascii')})
    return jstr

def json2im(jstr):
    """Convert a JSON string back to a Numpy array"""
    load = json.loads(jstr)
    imdata = base64.b64decode(load['image'])
    im = pickle.loads(imdata)
    return im

# Create solid red image 
red = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)  

# Make image into JSON string
jstr = im2json(red)

# Extract image from JSON string, and convert from OpenCV to PIL reversing BGR to RGB on the way
OpenCVim = json2im(jstr)
PILimage = Image.fromarray(OpenCVim[...,::-1])
PILimage.show()

As you haven't answered my question in the comments about why you want to do things this way, it may not be optimal - sending uncompressed, base64-encoded images across a network (presumably) is not very efficient. You might consider JPEG- or PNG-encoded data to save network bandwidth, for example.
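
As a rough illustration of that point, here is a small sketch (not part of the original answer) that compares the size of the base64-encoded pickle payload with a base64-encoded JPEG payload for the same test image:

#!/usr/bin/env python3

import cv2
import numpy as np
import base64
import pickle

# Same solid red test image as in the examples
im = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)

# Uncompressed route: pickle the raw array, then base64-encode it
raw_payload = base64.b64encode(pickle.dumps(im))

# Compressed route: JPEG-encode first, then base64 the compressed bytes
_, jpg = cv2.imencode('.JPG', im)
jpg_payload = base64.b64encode(jpg)

print(len(raw_payload), len(jpg_payload))   # the JPEG payload is far smaller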

You could also use cPickle instead if you are on Python 2; on Python 3 the standard pickle module already uses its C implementation where available.


Note that some folks disapprove of pickle (unpickling untrusted data can execute arbitrary code), and the method above also uses a lot of network bandwidth. An alternative might be to JPEG-compress the image before sending and decompress it on the receiving end straight into a PIL Image. Note that this is lossy.

Or change the .JPG extension in the code to .PNG, which is lossless but may be slower, and will not work for images with floating-point data or 16-bit data (although the latter could be accommodated).
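
For reference, only the encode call changes for the PNG route. A minimal sketch (the helper name is mine, not from the code below):

import cv2
import numpy as np
import base64
import json

def im2json_png(im):
    """Same idea as im2json below, but PNG (lossless) instead of JPEG"""
    _, imdata = cv2.imencode('.PNG', im)
    return json.dumps({"image": base64.b64encode(imdata).decode('ascii')})

im = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)
jstr = im2json_png(im)   # decoding works exactly as in json2im below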

You could also look at TIFF, but again, it depends on the nature of your data, the network bandwidth, the flexibility you need, your CPU's encoding/decoding performance...

#!/usr/bin/env python3

import cv2
import numpy as np
import base64
import json
from io import BytesIO
from PIL import Image

def im2json(im):
    """JPEG-compress an OpenCV/Numpy image and wrap it in a JSON string"""
    _, imdata = cv2.imencode('.JPG', im)
    jstr = json.dumps({"image": base64.b64encode(imdata).decode('ascii')})
    return jstr

def json2im(jstr):
    """Decode the JSON string straight into a PIL Image"""
    load = json.loads(jstr)
    imdata = base64.b64decode(load['image'])
    im = Image.open(BytesIO(imdata))
    return im

# Create solid red image 
red = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)  

# Make image into JSON string
jstr = im2json(red)

# Extract image from JSON string into PIL Image
PILimage = json2im(jstr)
PILimage.show()
Mark Setchell
  • Worked perfectly, thanks! Could you explain this part `OpenCVim[...,::-1]`, why not pass the image as is? – K41F4r Apr 29 '19 at 10:28
  • OpenCV stores images in a strange BGR order and PIL expects images in normal RGB order, so if you don't do the `OpenCVim[...,::-1]` thing to reverse the channel ordering, your Reds and Blues will be swapped. If speed is important, it is probably quicker to use OpenCV's `cvtColor(...BGR2RGB...)` method. – Mark Setchell Apr 29 '19 at 10:43
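
For example, a minimal sketch of that cvtColor route (equivalent to the slice reversal; the variable names are just illustrative):

import cv2
import numpy as np
from PIL import Image

bgr = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)   # OpenCV-style BGR image
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)                  # swap channel order
pil_image = Image.fromarray(rgb)                            # Reds and Blues now correct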