I'm working on a way to handle hardware-based bitmap animation. As input, I've got an image sequence of a simple bitmap (it's not a video; it's more like simple shapes, though they might contain bitmap fills). I'm building a texture atlas of this animation (so it can be rendered quickly on the GPU), and since most of the sequence often stands still while only a small part of it animates, I need an algorithm that can find the "common parts" between two images, so I can save memory.
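For the simple case where two consecutive frames have the same size, here's a rough sketch of the "dirty rectangle" idea I have in mind (names and the per-channel tolerance are just placeholders, not an existing API):

```python
import numpy as np

def dirty_rect(prev_frame: np.ndarray, next_frame: np.ndarray, tolerance: int = 0):
    """Return (x, y, w, h) of the region that changed between two equal-size
    frames, or None if nothing changed. Frames are (H, W, channels) arrays."""
    # Per-pixel "did anything change beyond the tolerance?" mask
    diff = np.any(np.abs(prev_frame.astype(int) - next_frame.astype(int)) > tolerance, axis=-1)
    ys, xs = np.nonzero(diff)
    if len(xs) == 0:
        return None  # frames are identical within the tolerance
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))
```

Only the returned rectangle would need a new atlas entry; the rest of the frame could be reused from the previous one.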
The images might not have the same size (if an object is growing or shrinking, for example), so I need a way to detect the biggest common area between the two. I've seen this answer and it partly solves my problem. I'd like to know, though, whether there is already a better algorithm for my case, especially because, since the sizes can vary, one image is not necessarily contained within the other; I'd need to find the common parts between the two. A rough sketch of what I mean by "common parts" when the sizes differ follows below.
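To make the question concrete, this is the brute-force version of what I'm after: slide one frame over the other and score each offset by how many pixels match exactly in the overlapping region. It's roughly O(n^4), so it's only meant to illustrate the problem, not to be the solution I'd ship:

```python
import numpy as np

def best_overlap(a: np.ndarray, b: np.ndarray):
    """Return (dx, dy, score): the offset of b relative to a that maximizes
    the number of exactly matching pixels in the overlap."""
    ah, aw = a.shape[:2]
    bh, bw = b.shape[:2]
    best = (0, 0, -1)
    for dy in range(-bh + 1, ah):
        for dx in range(-bw + 1, aw):
            # Overlapping window expressed in a's coordinates
            ax0, ay0 = max(0, dx), max(0, dy)
            ax1, ay1 = min(aw, dx + bw), min(ah, dy + bh)
            if ax0 >= ax1 or ay0 >= ay1:
                continue  # no overlap at this offset
            sub_a = a[ay0:ay1, ax0:ax1]
            sub_b = b[ay0 - dy:ay1 - dy, ax0 - dx:ax1 - dx]
            # Count pixels where all channels match
            score = int(np.all(sub_a == sub_b, axis=-1).sum())
            if score > best[2]:
                best = (dx, dy, score)
    return best
```

Is there a smarter or more standard algorithm for finding this kind of maximal common region between two frames of different sizes?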