It depends a lot on what you mean by "duplicate".
If you are looking for absolutely identical copies (copy-paste), the game is simple. The approach proposed by Safir, with just a few performance improvements, is OK.
If you want to find near-exact duplicates, the job suddenly becomes incredibly difficult. Check out Checking images for similarity with OpenCV for more info.
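For illustration, here is a minimal OpenCV sketch of one of the simpler similarity measures discussed there: comparing normalized color histograms. The function name, bin counts, and choice of the correlation metric are my own; this is only a coarse first pass, not a real near-duplicate detector.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Returns a similarity score: ~1.0 for very similar histograms,
// values near 0 for unrelated images.
double histogram_similarity(const cv::Mat& a, const cv::Mat& b) {
    const int   channels[] = {0, 1, 2};
    const int   histSize[] = {8, 8, 8};          // 8 bins per BGR channel
    const float range[]    = {0.0f, 256.0f};
    const float* ranges[]  = {range, range, range};

    cv::Mat histA, histB;
    cv::calcHist(&a, 1, channels, cv::Mat(), histA, 3, histSize, ranges);
    cv::calcHist(&b, 1, channels, cv::Mat(), histB, 3, histSize, ranges);
    cv::normalize(histA, histA);
    cv::normalize(histB, histB);

    // Correlation of the two normalized histograms.
    return cv::compareHist(histA, histB, cv::HISTCMP_CORREL);
}

// Usage (needs opencv2/imgcodecs.hpp for cv::imread):
//   double s = histogram_similarity(cv::imread("a.jpg"), cv::imread("b.jpg"));
```

Keep in mind that two completely different images can still have nearly identical histograms, which is why this only works as a pre-filter.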
Now, back to the "simple" approach: it depends on how many pictures you have to compare. Comparing each image against all the others in a folder of 1000 images means about 500,000 unique pairs (n(n-1)/2), and since you cannot keep them all in RAM at once, each pair costs two image loads: roughly 1,000,000 reads and comparisons. That is too much even for a powerful desktop processor.
A simple way would be to compute a hash (such as SHA-256) of each file and then compare just the hashes. A good ad-hoc "hash" for images is the histogram, although on a positive match you should double-check the candidates with memcmp(), since different images can share the same histogram.
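As a concrete sketch of that hashing idea (assuming OpenSSL for SHA-256, C++17, and linking with -lcrypto), the following groups files by digest, so each file is read exactly once and only files sharing a digest remain as duplicate candidates:

```cpp
#include <openssl/sha.h>
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hex-encoded SHA-256 of the file's raw bytes.
static std::string sha256_of_file(const fs::path& p) {
    std::ifstream in(p, std::ios::binary);
    std::vector<char> bytes((std::istreambuf_iterator<char>(in)),
                            std::istreambuf_iterator<char>());
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<const unsigned char*>(bytes.data()),
           bytes.size(), digest);
    std::string hex;
    char buf[3];
    for (unsigned char b : digest) {
        std::snprintf(buf, sizeof buf, "%02x", b);
        hex += buf;
    }
    return hex;
}

int main(int argc, char** argv) {
    if (argc < 2) return 1;

    // Bucket every file in the given directory by its digest.
    std::map<std::string, std::vector<fs::path>> groups;
    for (const auto& entry : fs::directory_iterator(argv[1]))
        if (entry.is_regular_file())
            groups[sha256_of_file(entry.path())].push_back(entry.path());

    // Only buckets with more than one file contain duplicate candidates.
    for (const auto& [hash, paths] : groups)
        if (paths.size() > 1) {
            std::cout << "Possible duplicates (" << hash << "):\n";
            for (const auto& p : paths) std::cout << "  " << p << '\n';
        }
}
```

Note that a file-level hash only catches byte-identical files; a copy that was re-saved with different compression settings will hash differently even though the pixels match, so for that case hash the decoded pixel data instead.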
And even if you end up with the brute-force approach (comparing one image against another), using memcmp() on the raw pixel buffers is much faster than accessing the images pixel by pixel.
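For example, with OpenCV's cv::Mat the whole pixel buffer can be compared in one call. A sketch, assuming both images were freshly loaded with cv::imread (so their buffers are continuous in memory):

```cpp
#include <opencv2/core.hpp>
#include <cstring>

// True if the two images are byte-identical at the pixel level.
bool images_identical(const cv::Mat& a, const cv::Mat& b) {
    if (a.rows != b.rows || a.cols != b.cols || a.type() != b.type())
        return false;                 // different shape/type: cannot be identical
    if (!a.isContinuous() || !b.isContinuous())
        return false;                 // kept simple: require continuous buffers
    // One linear scan over the raw bytes instead of a per-pixel double loop.
    return std::memcmp(a.data, b.data, a.total() * a.elemSize()) == 0;
}
```

On a match the images are identical pixel for pixel, even when the files themselves differ (for example, the same picture saved with different compression settings).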