
I just started experimenting with Python and image processing. I followed this very well structured tutorial: http://pythonvision.org/basic-tutorial/ . Everything in the tutorial works with the image provided (the one with the cells). Then I wanted to see how it behaves with another input image. So I took another image (the one at the bottom of this post) and suddenly things behave differently. First off, pylab.show() doesn't show the image as a heatmap but as the regular image, even though it should default to the heatmap when no colormap is specified.

As a result everything downstream behaves differently, and it only recognises one component (everything that is not white). What am I missing? Does the input image have to be black-and-white/grayscale? Does .jpg vs .jpeg matter?

I can't seem to find the problem, help would be appreciated.

This code should give the default heatmap view but gives the regular image instead:

import mahotas
import pylab

dna = mahotas.imread('tools.jpg')
dna = dna.squeeze()

pylab.imshow(dna)
pylab.show()

The image I'm trying to use: [photo of tools on a white background]

Cœur
E. V. d. B.

1 Answer


Most likely the image you're inputting is three-channel (R, G, B), while the tutorial's example image is grayscale/single-channel. Matplotlib applies a colormap only to a single-channel (2-D) image; a three-channel image is rendered as-is. You can use scikit-image to convert down:

from skimage.color import rgb2gray
import pylab

img_gray = rgb2gray(img)  # img is the RGB array you loaded
pylab.imshow(img_gray)
pylab.show()

The library you're using for image processing may also have these color-conversion utilities.
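A quick way to tell which case you are in is to check the array's number of dimensions: a 2-D array gets a colormap, a 3-D (height, width, 3) array is drawn as a color image. A minimal numpy sketch (the `rgb` array here is a stand-in for the image you loaded):

```python
import numpy as np

# stand-in for a loaded color image: shape (height, width, 3)
rgb = np.zeros((4, 4, 3))

# simple average-based grayscale conversion; rgb2gray uses a
# weighted (luminosity) average instead, but the shape change
# is the same: (H, W, 3) -> (H, W)
gray = rgb.mean(axis=2)

print(rgb.ndim, gray.ndim)  # 3 2
```

With `gray.ndim == 2`, `pylab.imshow(gray)` will apply the default colormap, matching the tutorial's heatmap view.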

Adam Hughes
  • This has already been a huge help, pymorph can convert to grayscale. I'm still struggling though, because it still only detects one object, all the tools as one; I don't really see why, because not all the objects are touching in the image. – E. V. d. B. Jan 19 '15 at 19:11
  • You're trying to segment the tools from background? – Adam Hughes Jan 19 '15 at 19:13
  • Yes, but I think I know what is wrong! It interprets the background as the main object (because it's white?), so that is the one object and the tools are seen as background. I somehow need to make it inverse... Would inverting the colors of the image be a good method? – E. V. d. B. Jan 19 '15 at 19:14
  • Honestly, image segmentation is a huge discipline and there are many approaches to it. In your case, you're lucky because your background is completely white, so you can just retain pixels that are not white and that should be sufficient. I only use scikit-image in Python for my image processing, so I don't know what your library is doing. If you post this as a separate question and link it, I will put the solution to that problem using scikit-image if you'd like. – Adam Hughes Jan 19 '15 at 19:18
  • That would be great, I'll try making a separate topic in a bit! Thanks in advance! – E. V. d. B. Jan 19 '15 at 19:20
  • Just add a link in the comment here and I'll get a notification. – Adam Hughes Jan 19 '15 at 19:20
  • The question is ready but I have to wait another 40 minutes before I can post again, will post link here hope you have a minute when it gets posted ;) – E. V. d. B. Jan 19 '15 at 19:28
  • The separate question is here: http://stackoverflow.com/questions/28032722/detect-objects-on-a-white-background-in-python thanks in advance! – E. V. d. B. Jan 19 '15 at 20:09
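The "retain pixels that are not white" idea from the comments can be sketched with a numpy threshold. This is a toy illustration, not the answer to the linked question: the 200 cutoff is an assumption you would tune for a real photograph.

```python
import numpy as np

# toy "photo": white background (255) with one dark 3x3 "tool"
img = np.full((5, 5), 255, dtype=np.uint8)
img[1:4, 1:4] = 30

# keep pixels that are clearly not white; with this mask the
# tools become the foreground, so no color inversion is needed
mask = img < 200

print(mask.sum())  # number of object pixels
```

Running a labeling step on `mask` (rather than on the raw image) should then find one component per tool, since the white background is no longer treated as the object.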