Years after this post was made, this is still a problem, but it's not Facebook's cache: it is quite often human error (allow me to elaborate).
OG:TYPE affects your image scrape:
- https://ogp.me/#type_article is not the same as https://ogp.me/#type_website
Be aware that og:type=website will cause any /sub-pages/ of that URL to be treated as "canonical". This means you will have trouble getting your images to update via the scraper no matter what you do.
Consider this common (and mistaken) assumption:
-<meta property="og:type" content="website" />
=> https://www.example.org (parent)
-<meta property="og:type" content="website" />
=> https://www.example.org/sub-page/
-<meta property="og:type" content="website" />
=> https://www.example.org/sub-page/child-2/
- Ergo: /sub-page/ and /child-2/ will inherit the og:image of the parent.
Those are not "all websites": one is a website, the others are articles.
If you do that, Facebook will think all of those are canonical and will put the FIRST og:image into all of them (try it, you'll see). And if you set the og:url to be your root or parent domain, you've told Facebook they are all canonical. (There is a good reason for that, but it's off topic.)
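For clarity, here is the og:url version of that same mistake, sketched with the same illustrative example.org URLs: every page declares the parent as its og:url, so the scraper folds them all into one canonical object and reuses its image.
-<meta property="og:url" content="https://www.example.org" />
=> https://www.example.org (parent)
-<meta property="og:url" content="https://www.example.org" />
=> https://www.example.org/sub-page/
-<meta property="og:url" content="https://www.example.org" />
=> https://www.example.org/sub-page/child-2/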
Consider this solution (which is what most people "really want")
-<meta property="og:type" content="article" />
=> https://www.example.org/sub-page/
-<meta property="og:type" content="article" />
=> https://www.example.org/sub-page/child-2/
If you do that, Facebook will give you far fewer problems when scraping your NEW images.
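Putting the two points together, a sub-page that updates cleanly normally declares its own og:url and its own og:image alongside og:type=article. This is only a sketch using the illustrative example.org URLs from above; the image path is hypothetical.
=> https://www.example.org/sub-page/
-<meta property="og:type" content="article" />
-<meta property="og:url" content="https://www.example.org/sub-page/" />
-<meta property="og:image" content="https://www.example.org/images/sub-page.jpg" />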
In closing: YES, the cache busters, random vars, changing URLs and other suggestions here can work, but they will seem like "intermittent voodoo" if the og:type is not specified correctly.
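(For reference, a cache buster here just means changing the og:image URL itself, for example with a version query string; the filename and parameter below are purely illustrative.)
-<meta property="og:image" content="https://www.example.org/images/sub-page.jpg?v=2" />
Bumping that value makes the scraper treat it as a brand new image, but again, it only works reliably once og:type and og:url are correct on each page.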
PS: remember that a CDN or server-side cache will serve its cached copy to Facebook's scraper even if you "think" you can see the most recent version. (I won't spend any time on this other than to point out that it will waste colossal amounts of your time if not double-checked.)