
For a project I need to parse pixel data from a large number of online images. I realised it could well be faster to load each image into program memory with a GET request, carry out the required operations, then move on to the next image, removing the need to read and write the files to storage. However, in doing this I have run into several problems. Is there a not (overly) complicated way to do this?

Edit: I didn't include code because, as far as I can tell, everything I've seen (scikit-image, Pillow, ImageMagick) is a complete dead end. I'm not looking for somebody to write code for me, just a pointer in the right direction.

  • Welcome to Stack Overflow. Your approach isn't bad, but you should show us what it is and what problems you are running into if you want any help. Unfortunately, Stack Overflow is not the place to ask for general advice. – Nullman Jan 17 '19 at 13:11
  • Taking a step back, what is the point of this? – Mark Setchell Jan 17 '19 at 13:40
  • We have no idea what the required operations, the problems, the desired output or even the specifics of the inputs are. It's very difficult to suggest a good approach without having at least some concrete information. – Mad Physicist Jan 17 '19 at 13:42

1 Answer


It's easy to load an image directly from a URL. With Python 3, use urllib.request and wrap the downloaded bytes in BytesIO so Pillow gets the seekable file object it needs:

import io
import urllib.request

from PIL import Image

url = "https://cdn.pixabay.com/photo/2013/07/12/12/58/tv-test-pattern-146649_1280.png"

# Fetch the image into memory and decode it there; nothing is written to disk.
with urllib.request.urlopen(url) as response:
    img = Image.open(io.BytesIO(response.read()))

The image is now loaded. Getting pixels is also easy: see Get pixel's RGB using PIL.
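
For example, a minimal sketch of reading pixel values from the loaded image (the coordinates here are arbitrary, and the NumPy conversion assumes numpy is installed):

import numpy as np

# Normalise to 3-channel RGB so every pixel is an (r, g, b) tuple.
rgb = img.convert("RGB")
r, g, b = rgb.getpixel((10, 10))  # single pixel at x=10, y=10

# For bulk operations a NumPy array is far faster than per-pixel calls;
# the array has shape (height, width, 3).
pixels = np.asarray(rgb)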

  • Nice! You'd probably want to do (e.g.) 10 requests at once, since the `get` will involve a lot of waiting. Does urllib2 support async open? – jcupitt Jan 18 '19 at 02:18
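
Following up on that comment: urllib.request has no async open, but a thread pool overlaps the network waits just as well. A minimal sketch using the standard library's concurrent.futures (the URL list is a placeholder to extend with the real image URLs):

import io
import urllib.request
from concurrent.futures import ThreadPoolExecutor

from PIL import Image

urls = [
    "https://cdn.pixabay.com/photo/2013/07/12/12/58/tv-test-pattern-146649_1280.png",
]  # placeholder; extend with the real image URLs

def fetch(url):
    # Download one image into memory and decode it; nothing touches disk.
    with urllib.request.urlopen(url) as response:
        return Image.open(io.BytesIO(response.read()))

# Run up to 10 downloads at once; map preserves the input order.
with ThreadPoolExecutor(max_workers=10) as pool:
    images = list(pool.map(fetch, urls))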