I've been playing with time-lapse photography lately, using median image stacking on groups of images or frames extracted from video. I've written a little script that works well for a relatively small number of images:
from PIL import Image
import os
import numpy as np
# Create a list of the images' data
imglist = []
for fname in os.listdir("input\\"):
    imglist.append(np.array(Image.open("input\\" + fname)))
# Find the median of all image data in the stack, save it
median = np.uint8(np.median(imglist, axis=0))
Image.fromarray(median).save("median.png", "PNG")
The obvious problem here is that loading too many images into memory at once fills up my meager 8 GB of RAM. I have tried splitting the image data into chunks and calculating the median one chunk at a time, and splitting the image data by color channel, but when there are that many images to process, the number of file calls it takes to save the data to disk image by image causes a massive slowdown.
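To make the chunked idea concrete, here is a simplified sketch (not my exact code: strip_height and the filenames are placeholders, and this version just re-reads every file once per strip):

import os
import numpy as np
from PIL import Image

# Only (number of images) x strip_height x width x channels bytes
# are held in RAM at any one time.
files = sorted(os.listdir("input\\"))
first = np.array(Image.open("input\\" + files[0]))
height = first.shape[0]
strip_height = 64  # placeholder; tune to fit available RAM

result = np.empty_like(first)
for top in range(0, height, strip_height):
    bottom = min(top + strip_height, height)
    # Re-open and decode every image for each strip -- this repeated
    # file access is where the slowdown comes from.
    strips = [np.array(Image.open("input\\" + f))[top:bottom] for f in files]
    result[top:bottom] = np.uint8(np.median(strips, axis=0))

Image.fromarray(result).save("median_chunked.png", "PNG")

Even in this stripped-down form, every image gets opened and decoded once per strip, so the total I/O grows with the number of strips.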
I am wondering whether there's any way I can use some weighting scheme to calculate the median for a few images at a time and then repeat with the results, or use a gimmick like virtual memory, memory-mapped files, or something else to eliminate this excessive memory usage. Any ideas?
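For what it's worth, the kind of memory-mapped approach I have in mind looks roughly like this (a sketch only: it assumes every frame has identical dimensions and dtype, and stack.dat is just a scratch file name):

import os
import numpy as np
from PIL import Image

files = sorted(os.listdir("input\\"))
first = np.array(Image.open("input\\" + files[0]))
shape = (len(files),) + first.shape  # assumes all frames match the first

# Back the whole stack with a file on disk instead of holding it in RAM.
stack = np.memmap("stack.dat", dtype=first.dtype, mode="w+", shape=shape)
for i, fname in enumerate(files):
    stack[i] = np.array(Image.open("input\\" + fname))
stack.flush()

median = np.uint8(np.median(stack, axis=0))
Image.fromarray(median).save("median_memmap.png", "PNG")

My worry with this route is that np.median still has to partition the data along axis 0, so I don't know how much of the stack it would actually keep out of RAM in practice.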