
For a research project at the institute where I work, we are systematically collecting Street View panoramas in certain areas.

In our country (Germany), a lot of buildings are censored. As I understand it, this is because according to our laws, Google must remove any personally identifying information upon request.

That is fine and I'm not looking to take away people's constitutional rights.

What I would like to be able to do is programmatically determine whether an image contains one (or more than a certain percentage) of these blurred tiles, so we can exclude such images, as they are not useful to us.

I had a look at the metadata I receive from a Street View API request, but there did not seem to be such a parameter. Maybe I'm looking in the wrong place, though?

Thank you for your help :)

PS: "Alternative" solutions are also welcome - I have briefly looked into whether this might be doable with certain image-analysis algorithms.

1 Answer


This might be a difficult/impossible task.

Blurred areas should have a lower noise amplitude, and you can enhance this by taking the gradient amplitude (possibly followed by equalization to increase contrast).
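As a starting point, that gradient-amplitude idea could be sketched roughly like this: compute the gradient magnitude per pixel and flag blocks whose mean magnitude is low. The block size and threshold here are arbitrary assumptions for illustration, not established values, and (as noted below) low-gradient blocks are only *candidates* for blur.

```python
import numpy as np

def low_gradient_fraction(gray, block=16, thresh=2.0):
    """Fraction of `block`-sized tiles whose mean gradient magnitude
    falls below `thresh` -- candidate blurred (or naturally flat) tiles.
    `gray` is a 2-D array of intensities; `block` and `thresh` are
    illustrative tuning values, not established constants."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                       # per-pixel gradient magnitude
    h, w = mag.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of block
    tiles = mag[:h, :w].reshape(h // block, block, w // block, block)
    means = tiles.mean(axis=(1, 3))              # mean magnitude per tile
    return float((means < thresh).mean())

# Synthetic check: a noisy image vs. a flat stand-in for a blurred patch.
rng = np.random.default_rng(0)
noisy = rng.normal(128, 20, (64, 64))
flat = np.full((64, 64), 128.0)
print(low_gradient_fraction(noisy), low_gradient_fraction(flat))  # prints: 0.0 1.0
```

On real panoramas you would run this per region and treat a high fraction as a reason for manual review rather than automatic exclusion.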

However, real-world images can also contain very uniform areas or gradual shading, and if the image has low noise, there will be no way to distinguish them from blurred areas.

In addition, the images may be JPEG-compressed, so compression artefacts can be present and can strongly alter the uniformity and/or the noise.


If a censored area is displayed as big pixels, you are in better luck: you can detect small squares of uniform color arranged in a grid, which never occurs in natural images. (But, again, lossy compression will make this harder.)
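A minimal sketch of that grid test, assuming a known cell size: require near-zero variance *inside* every cell of the grid while the cell means still vary (so a uniformly flat image such as clear sky does not trigger it). The cell size and tolerance are assumptions; real mosaics may use other cell sizes, so you would scan a few candidates.

```python
import numpy as np

def looks_pixelated(gray, cell=8, var_tol=1.0):
    """Heuristic: True when the image is (nearly) constant inside every
    `cell`-by-`cell` grid square while the squares themselves differ --
    the 'big pixels' pattern natural photos lack. `cell` and `var_tol`
    are illustrative assumptions."""
    h, w = gray.shape
    h, w = h - h % cell, w - w % cell            # crop to a multiple of cell
    cells = gray[:h, :w].reshape(h // cell, cell, w // cell, cell)
    within = cells.var(axis=(1, 3))              # variance inside each cell
    between = cells.mean(axis=(1, 3)).var()      # variance across cell means
    return bool((within < var_tol).all() and between > var_tol)

rng = np.random.default_rng(1)
# Big-pixel mosaic: each 8x8 square holds one constant random value.
mosaic = np.kron(rng.integers(0, 255, (8, 8)), np.ones((8, 8)))
photo = rng.normal(128, 20, (64, 64))            # noisy "natural" stand-in
print(looks_pixelated(mosaic), looks_pixelated(photo))  # prints: True False
```

In practice the grid may also be offset from the image origin, so you would additionally scan a few phase shifts.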

  • There is a part of it that makes me hopeful, which is that these blurred areas TEND to appear only in rectangular or at least close-to-rectangular shapes, and they are USUALLY at least horizontally sandwiched in between otherwise busy areas. Still, the sky might be an issue... But those specific things might mean an algorithm could distinguish between sky and blur maybe 90% of the time, which would already be a massive help. There's also the fact that, well, the sky is blue, and the blurred areas usually are not. I'm going to be experimenting a bit :) Thank you for your answer – user3450796 Dec 04 '20 at 12:59
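The color cue mentioned in the comment (sky is blue, the blur tint usually is not) could be sketched with a crude channel comparison. The threshold is purely an assumption for illustration; the actual tint of Street View blur varies with the scene behind it.

```python
import numpy as np

def blueness(rgb):
    """Mean excess of the blue channel over the red/green average --
    a crude stand-in for 'this flat region is probably sky'.
    The cut-off you compare this against is an assumed value."""
    rgb = rgb.astype(float)
    return float((rgb[..., 2] - rgb[..., :2].mean(axis=-1)).mean())

# Synthetic patches: a sky-blue region vs. a grayish blurred one.
sky = np.zeros((8, 8, 3))
sky[..., 0], sky[..., 1], sky[..., 2] = 120, 170, 220
grayish_blur = np.full((8, 8, 3), 140.0)
print(blueness(sky) > 30, blueness(grayish_blur) > 30)  # prints: True False
```

Combined with the low-gradient test above, this could help rule out sky before flagging a region as censored.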