This sort of functionality is intentionally restricted by most browsers for security reasons (read up on the Same Origin Policy, the kinds of attack it's meant to prevent, such as XSS and CSRF, and the ways around it, including CORS if you have control of both server environments).
Since you're not in a position to do things by the book (implementing CORS and so on), you have to go the long way around. Essentially, in order to grab the metadata of any site, you'll need to do the grabbing from a server.
In this case the server is effectively acting as a client, so it isn't restricted by those policies (it sounds confusing, but the server simply asks the other server for the page in exactly the same way your browser does).
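To make that concrete, here's a rough Python sketch of what that server-side grab can look like. It uses requests and BeautifulSoup purely as example libraries, and the URL and the particular tags it pulls out are just placeholders:

```python
# Minimal sketch of a server-side metadata grab, assuming Python with the
# requests and beautifulsoup4 packages installed. URL and tag names are
# only examples.
import requests
from bs4 import BeautifulSoup

def fetch_metadata(url):
    # A plain HTTP GET from the server; the Same Origin Policy doesn't
    # apply because no browser is involved in this request.
    resp = requests.get(url, timeout=10, headers={"User-Agent": "metadata-scanner/0.1"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    meta = {"url": url}
    if soup.title and soup.title.string:
        meta["title"] = soup.title.string.strip()
    # Pick up common Open Graph / description tags if the page has them.
    for tag in soup.find_all("meta"):
        key = tag.get("property") or tag.get("name")
        if key in ("og:title", "og:description", "og:image", "description"):
            meta[key] = tag.get("content")
    return meta

if __name__ == "__main__":
    print(fetch_metadata("https://example.com"))
```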
Depending on what you're trying to do, you might want this as a proxy or as a standalone app.
As a standalone app, you'd write a simple script that runs on a server somewhere and does the scanning for you, putting the results in a DB in your own environment that your browser can access (which is more or less the way Facebook does it).
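Here's a rough sketch of that standalone flavour, assuming the same Python stack plus SQLite for the DB. The URL list, DB filename and table name are all made up for illustration; you'd run something like this from a cron job:

```python
# Standalone scanner sketch: run it on a schedule (e.g. from cron) and the
# results land in a local SQLite DB that your own front end can query.
# URLs, DB file and table name are placeholders.
import sqlite3
import requests
from bs4 import BeautifulSoup

URLS_TO_SCAN = ["https://example.com", "https://example.org"]  # hypothetical list

def scan_and_store(db_path="metadata.db"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS page_meta (url TEXT PRIMARY KEY, title TEXT, description TEXT)"
    )
    for url in URLS_TO_SCAN:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        title = soup.title.string.strip() if soup.title and soup.title.string else None
        desc = soup.find("meta", attrs={"name": "description"})
        # INSERT OR REPLACE so re-running the job refreshes stale rows.
        conn.execute(
            "INSERT OR REPLACE INTO page_meta (url, title, description) VALUES (?, ?, ?)",
            (url, title, desc.get("content") if desc else None),
        )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    scan_and_store()
```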
As a proxy, you'd write a similar script, but instead of being run by a cron job or other time-based trigger and saving its results in a DB, it would be triggered by a request from your front end: it goes and grabs the other page, scans the metadata, and returns it to your browser client.
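And a sketch of the proxy flavour, here using Flask purely as an example web framework; the `/metadata` route and the `url` query parameter are my own invention:

```python
# Proxy sketch: the browser hits this endpoint with ?url=..., the server
# fetches the page cross-origin on its behalf and returns the metadata as
# JSON. Flask and the /metadata route are just illustrative choices.
from flask import Flask, jsonify, request
import requests
from bs4 import BeautifulSoup

app = Flask(__name__)

@app.route("/metadata")
def metadata_proxy():
    target = request.args.get("url")
    if not target:
        return jsonify({"error": "missing url parameter"}), 400
    resp = requests.get(target, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    result = {"url": target}
    if soup.title and soup.title.string:
        result["title"] = soup.title.string.strip()
    for tag in soup.find_all("meta"):
        key = tag.get("property") or tag.get("name")
        if key and (key.startswith("og:") or key == "description"):
            result[key] = tag.get("content")
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=5000)
```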
The main downside of this is that you're putting extra load on your server every time you ask for something, so you need to be careful not to overload your hosting environment. This is presumably why FB etc. go down the 'server app' route.
It sounds like a pain, but it's actually pretty trivial to put together, and there really isn't an alternative if you want to be able to scan arbitrary sites rather than just ones you build yourself or can ask people to configure.