I'm training a neural net using simulated images, and one of the things that happens in real life is low quality JPEG compression. It fuzzes up sharp edges in a particular way. Does anyone have an efficient way to simulate these effects? By that I mean create a corrupted version of a clean input. The images are grayscale, stored as numpy arrays.
- Write them as low-quality JPEGs and read them back, maybe? – Mark Setchell Mar 18 '20 at 16:17
- I'm considering that, perhaps writing to a ramdisk. The simulation is otherwise very fast, so I don't want to bottleneck it with a read/write to disk. – Mastiff Mar 18 '20 at 16:21
- Have a look here to see how to write a JPEG to memory with `io.BytesIO`: https://stackoverflow.com/a/52281257/2836621 – Mark Setchell Mar 18 '20 at 16:23
- It looks like I have a solution using imageio.imwrite() and imread(), plus io.BytesIO. Thanks. – Mastiff Mar 18 '20 at 16:41
- Excellent! Feel free to write up an answer to your own question for all to see and refer to, then accept it as correct and bag the points... perfectly OK. – Mark Setchell Mar 18 '20 at 16:43
1 Answer
Thanks to the suggestions in the comments, here is a solution that saves the image as a JPEG and reads it back, entirely in memory, using standard Python libraries.
import io
import imageio
# im is a 2D numpy array, q is the JPEG quality (0-100)
def jpegBlur(im, q):
    buf = io.BytesIO()  # write into an in-memory buffer instead of a file on disk
    imageio.imwrite(buf, im, format='jpg', quality=q)
    s = buf.getbuffer()
    return imageio.imread(s, format='jpg')  # decode straight back from the buffer
In my full function I also pre- and post-scale the image, converting from float64 to uint8 before compression and back again afterwards, but this is the basic idea.
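That pre/post scaling might be sketched as below. The helper names `to_uint8`/`to_float64` and the linear min/max mapping are assumptions, not part of the original answer; the motivation is that JPEG encoders expect 8-bit input, so a float image has to be mapped onto 0–255 and back.

```python
import numpy as np

def to_uint8(im):
    # Hypothetical helper: linearly rescale a float image to the full 0-255 range.
    lo, hi = float(im.min()), float(im.max())
    scaled = (im - lo) / (hi - lo) if hi > lo else np.zeros_like(im)
    return (scaled * 255).round().astype(np.uint8), lo, hi

def to_float64(im8, lo, hi):
    # Invert to_uint8: map 0-255 back onto the original [lo, hi] range.
    return im8.astype(np.float64) / 255.0 * (hi - lo) + lo
```

With these, the corruption step would wrap the answer's `jpegBlur`: convert with `to_uint8`, compress/decompress, then restore the original range with `to_float64`. Note the round trip loses up to half a quantization step (~0.2% of the dynamic range) even before JPEG artifacts.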

Mastiff