
I want to use an old script which still uses scipy.misc.imresize(), which is not only deprecated but removed entirely from SciPy. Instead, the devs recommend using either numpy.array(Image.fromarray(arr).resize()) or skimage.transform.resize().

The exact code line that is no longer working is this:

new_image = scipy.misc.imresize(old_image, 0.99999, interp = 'cubic')

Unfortunately, I am no longer sure exactly what it does. I'm afraid that if I start playing with older SciPy versions, my newer scripts will stop working. I have been using it as part of a blur filter. How do I make numpy.array(Image.fromarray(arr).resize()) or skimage.transform.resize() perform the same action as the above code line? Sorry for the lack of information I provide.

Edit

I have been able to determine what this line does. It converts an image array from this:

[[[0.38332759 0.38332759 0.38332759]
  [0.38770704 0.38770704 0.38770704]
  [0.38491378 0.38491378 0.38491378]
  ...

to this:

[[[57 57 57]
  [59 59 59]
  [58 58 58]
  ...

Edit2

When I use jhansen's approach the output is this:

[[[ 97  97  97]
  [ 98  98  98]
  [ 98  98  98]
  ...

I don't get what scipy.misc.imresize does.
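For anyone else puzzled by the mismatch above: a plausible explanation (an assumption about the old SciPy internals, worth verifying against the archived source) is that scipy.misc.imresize passed float input through bytescale, which linearly stretches the array's own min and max onto 0..255, rather than simply multiplying by 255. A minimal sketch of that behaviour:

```python
import numpy as np

# Sketch (assumption): mimic scipy.misc.bytescale, which maps the array's
# own min..max linearly onto 0..255 before converting to uint8.
def bytescale_like(arr):
    lo, hi = float(arr.min()), float(arr.max())
    return ((arr - lo) * 255.0 / (hi - lo)).astype(np.uint8)

print(bytescale_like(np.array([0.0, 0.5, 1.0])))  # [  0 127 255]
```

With this scaling, each output value depends on the whole array's min and max, which would explain why a plain (arr * 255) reproduces different numbers than the old call did.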

Artur Müller Romanov

6 Answers


You can look up the documentation and the source code of the deprecated function. In short, using Pillow (Image.resize) you can do:

import numpy as np
from PIL import Image

im = Image.fromarray(old_image)
size = tuple((np.array(im.size) * 0.99999).astype(int))
new_image = np.array(im.resize(size, Image.BICUBIC))

With skimage (skimage.transform.resize) you should get the same with:

size = (np.array(old_image.shape[:2]) * 0.99999).astype(int)
new_image = skimage.transform.resize(old_image, size, order=3)
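One caveat worth adding here (based on skimage's documented defaults; check against your installed version): skimage.transform.resize converts integer input to floats in [0, 1] unless you pass preserve_range=True, so to get uint8 output comparable to Pillow's you might write something like:

```python
import numpy as np
from skimage.transform import resize

# Hedged sketch: resize a uint8 image while keeping its 0..255 range,
# then cast back to uint8 (resize otherwise returns floats in [0, 1]).
old_image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
size = (np.array(old_image.shape[:2]) * 0.99999).astype(int)  # -> [99, 99]
new_image = resize(old_image, size, order=3,
                   preserve_range=True).astype(np.uint8)
print(new_image.shape, new_image.dtype)  # (99, 99, 3) uint8
```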
jdehesa
  • @jdehesa Thank you for your effort. Please take a look at the edit section. I tried using both of your approaches but the output array is exactly the same as the input array. Something is missing. I hope the edit section helps. – Artur Müller Romanov Aug 08 '19 at 14:16
  • @ArturMüllerRomanov The function downscales the input image by a factor of 0.99999. Unless the image is very big (100,000 pixels tall or wide), that means it will just remove a single pixel in each dimension (due to float truncation). For the most part, both images should look the same, except for the slight size difference. I am pretty sure the transformation between the two arrays you show in the updated post is not produced by the function that you mention. – jdehesa Aug 08 '19 at 14:25

Scipy Official Docs

imresize is now deprecated!
imresize is deprecated in SciPy 1.0.0, and will be removed in 1.3.0. Use Pillow instead:
numpy.array(Image.fromarray(arr).resize()).

from PIL import Image
resized_img = Image.fromarray(orj_img).resize(size=(new_w, new_h))  # note: Pillow's size is (width, height)
marikamitsos
Talha Çelik
  • To add, this `resize` method returns a `PIL.Image.Image` object. To get the numpy array: `resized_img = np.array(resized_img)` – saichand Apr 23 '20 at 04:12
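Putting the answer and that comment together, a minimal round-trip from ndarray back to ndarray might look like this (the shapes here are just illustrative; note again that Pillow's resize takes (width, height), not (height, width)):

```python
import numpy as np
from PIL import Image

# Round-trip sketch: ndarray -> PIL Image -> resize -> ndarray.
orj_img = np.zeros((120, 80, 3), dtype=np.uint8)   # 120 rows, 80 columns
new_w, new_h = 40, 60
resized_img = Image.fromarray(orj_img).resize(size=(new_w, new_h))
resized_img = np.array(resized_img)
print(resized_img.shape)  # (60, 40, 3) -- rows, columns, channels
```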

It almost looks like that line was a hacky way to transform your array from a 0..1 scale to 0..255 without any actual resizing. If that is the case you could simply do the following:

new_image = (old_image * 255).astype(np.uint8)

However, I do realize that the floats in your first sample array don't quite match the integers in the second...

Update: If you combine the rescaling to 0..255 with a resizing operation, e.g. one of the ways that jdehesa pointed out in their answer, you will reproduce your expected result (up to rounding errors). However, without knowing anything else about your code, I can't imagine that its functionality depends on resizing the image by such a small amount, which is why I'm guessing the purpose of this line of code was to transform the image to 0..255 (which is better done as above).
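If you do want to combine the rescaling with the (nearly no-op) resize, a sketch of both steps together (assuming a float image in 0..1, as in the question) could be:

```python
import numpy as np
from PIL import Image

# Sketch: scale 0..1 floats up to 0..255 uint8, then apply the bicubic
# resize by a factor of 0.99999, which drops one pixel per dimension.
old_image = np.random.rand(200, 300, 3)                  # floats in 0..1
as_uint8 = (old_image * 255).astype(np.uint8)
im = Image.fromarray(as_uint8)
size = tuple((np.array(im.size) * 0.99999).astype(int))  # (299, 199)
new_image = np.array(im.resize(size, Image.BICUBIC))
print(new_image.shape)  # (199, 299, 3)
```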

jhansen

Tensorflow 1:

from scipy import ndimage
import scipy.misc
import numpy as np

image = np.array(ndimage.imread(fname, flatten=False))
image = image / 255.
my_image = scipy.misc.imresize(image, size=(64, 64)).reshape((1, 64 * 64 * 3)).T
my_image_prediction = predict(my_image, parameters)

Tensorflow 2:

import imageio
import numpy as np

im = imageio.imread(fname)
image = np.array(im)
image = image / 255.
num_px = 64
my_image = image.reshape((1, num_px * num_px * 3)).T  # WITHOUT RESIZE: assumes the image is already 64x64
my_image_prediction = predict(my_image, parameters)
mruanova

To add to this, you could import PIL.Image. However, on a project I'm working on I've found that skimage.transform.resize has the same effect as both, with no need for conversion to an ndarray at the end, as that is already its return type...

So if you want an image object at the end Pillow is probably more useful but if you are after an array use skimage.transform maybe.

For example with an image being downscaled by a factor of 4:

import numpy as np
from skimage.transform import resize

# You already have an image of shape (256, 256, 3) that you pass into the function
def my_downscale(image):
    downscaled = resize(image, (image.shape[0] // 4, image.shape[1] // 4))
    return downscaled

This would return an ndarray of shape (64, 64, 3): since you didn't include the channel dimension in the output shape, it preserved the last one, which is 3. As opposed to:

from PIL import Image

def my_downscale_pillow(image):
    return np.array(Image.fromarray(image).resize((image.shape[1] // 4,
        image.shape[0] // 4)))
Jacob Lee

Just do one thing, and it resolves all the TensorFlow version 2 problems:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
Arghya Sadhu