I am implementing basic global thresholding in Python with PIL. Part of the algorithm involves grouping pixels into two containers according to their intensities:
group_1 = []
group_2 = []
for intensity in list(image.getdata()):
    if intensity > threshold:
        group_1.append(intensity)
    else:
        group_2.append(intensity)
With images exceeding 0.5 megapixels, this approach typically takes about 5 seconds or more. Any approach has to examine every pixel, so I am wondering: is there a faster way to do this (using other PIL methods, other data structures, or other algorithms?), or is it simply a Python performance issue?
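The per-pixel Python loop is the main cost here: each iteration pays interpreter overhead for the comparison and the append. One common way around it is to move the comparison into NumPy, which evaluates it in C over the whole array at once. A minimal sketch, assuming a grayscale image and that NumPy is available (`np.asarray` accepts a PIL image directly; the literal array below is just a stand-in for real image data):

```python
import numpy as np

threshold = 128

# Stand-in for the image data; with a PIL image you would write
# pixels = np.asarray(image).ravel() instead (assumption: grayscale mode "L").
pixels = np.array([10, 200, 128, 129, 255, 0], dtype=np.uint8)

mask = pixels > threshold   # one vectorized comparison over all pixels
group_1 = pixels[mask]      # intensities above the threshold
group_2 = pixels[~mask]     # intensities at or below the threshold
```

Every pixel is still examined, but the work happens in compiled code rather than in the interpreter, which typically turns seconds into milliseconds at this image size. If you only need the counts or sums per group (as basic global thresholding usually does), `np.count_nonzero(mask)` and `pixels[mask].mean()` avoid materializing the lists entirely.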