Well, you can brute force the location and scan for the size, which will be O(n^6) if n is the average side size of your map in pixels ...
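For illustration, a brute force sketch might look like this (C++, assuming the map is just a boolean grid inside[y][x] marking pixels that belong to the polygon; the names are mine):

#include <vector>

struct Rect { int x0, y0, x1, y1; };                    // inclusive pixel bounds

// four nested loops over the corners (O(n^4)) times an O(n^2) coverage test
// give the O(n^6) total
Rect bruteForce(const std::vector<std::vector<bool>> &inside)
{
    int h = inside.size(), w = h ? inside[0].size() : 0;
    Rect best{ 0, 0, -1, -1 }; long long bestArea = 0;
    for (int y0 = 0; y0 < h; y0++)
     for (int x0 = 0; x0 < w; x0++)
      for (int y1 = y0; y1 < h; y1++)
       for (int x1 = x0; x1 < w; x1++)
        {
        bool ok = true;                                  // test every pixel of the rectangle
        for (int y = y0; ok && y <= y1; y++)
         for (int x = x0; x <= x1; x++)
          if (!inside[y][x]) { ok = false; break; }
        long long area = (long long)(x1 - x0 + 1) * (y1 - y0 + 1);
        if (ok && area > bestArea) { bestArea = area; best = { x0, y0, x1, y1 }; }
        }
    return best;
}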
The location might be sped up by a search (accepting not strictly sorted data), for example like this:
which would lead to ~O(n^4.log^2(n)).
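I do not have the linked code at hand here, so this is just a rough sketch of that kind of search (coarse scan of a range with a shrinking step, my own simplified version, not the exact class from the link):

#include <functional>

// scan [a0, a1] with step da, remember the position with the smallest error,
// then zoom in around it with a 10x finer step and repeat; a few passes behave
// like ~O(log(n)) probes, but too coarse a step can miss the solution
double approxSearch(double a0, double a1, double da, int recursions,
                    const std::function<double(double)> &error)
{
    double bestA = a0;
    for (int r = 0; r < recursions; r++)
    {
        double bestE = 1e300;
        for (double a = a0; a <= a1; a += da)
        {
            double e = error(a);
            if (e < bestE) { bestE = e; bestA = a; }
        }
        a0 = bestA - da; a1 = bestA + da; da *= 0.1;     // zoom in around the best hit
    }
    return bestA;
}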
But beware, the search must be configured properly in order not to skip the solution ... The size search can be improved too, by using a similar technique to the one I used here:
Just use a different metric: I would create LUT tables of start and end positions for each x and y (4 LUT tables), which will speed up the search, leading to ~O(n^2.log^2(n)), while creation of the LUTs is O(n^2).
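The 4 LUTs could be built in a single O(n^2) pass like this (again just a sketch on the assumed boolean grid inside[y][x]):

#include <vector>

struct PolyLUT
{
    std::vector<int> x0, x1;   // per row y: first and last inside x (-1 = empty row)
    std::vector<int> y0, y1;   // per column x: first and last inside y (-1 = empty column)
};

PolyLUT buildLUT(const std::vector<std::vector<bool>> &inside)
{
    int h = inside.size(), w = h ? inside[0].size() : 0;
    PolyLUT lut{ std::vector<int>(h, -1), std::vector<int>(h, -1),
                 std::vector<int>(w, -1), std::vector<int>(w, -1) };
    for (int y = 0; y < h; y++)
     for (int x = 0; x < w; x++)
      if (inside[y][x])
      {
          if (lut.x0[y] < 0) lut.x0[y] = x;    // first inside x of row y
          lut.x1[y] = x;                       // last inside x of row y so far
          if (lut.y0[x] < 0) lut.y0[x] = y;    // first inside y of column x
          lut.y1[x] = y;                       // last inside y of column x so far
      }
    return lut;
}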
Btw I sometimes use the same LUTs in OCR, like here (last 2 images):
Now the problem with this approach is that it can not handle concave polygons correctly, as there might be more than just 2 edges per x,y. So to remedy that you would need more LUTs and use them based on the position in the polygon (divide the polygon into "convex" areas).
So putting all of this together would look something like this:

approx loop (center x) // ~O(log(n))
 approx loop (center y) // ~O(log(n))
  grow loop (square size to max using LUT) // O(n)
   {
   grow loop (x size to max while decreasing original square y size) // O(n)
   grow loop (y size to max while decreasing original square x size) // O(n)
   use the bigger of the above 2 rectangles
   }
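The grow loops and their error value could look something like this in C++ (my own simplified reading of the pseudocode above: convex polygon assumed, so a rectangle fits iff its top and bottom rows fit, which the row LUTs answer in O(1); also the stretching here keeps the other dimension fixed instead of shrinking it; it reuses the PolyLUT from the earlier sketch):

#include <algorithm>

// true if rectangle [X0..X1] x [Y0..Y1] lies inside a convex polygon
bool fits(const PolyLUT &lut, int X0, int X1, int Y0, int Y1)
{
    if (lut.x0[Y0] < 0 || lut.x0[Y1] < 0) return false;     // empty rows
    int xmin = std::max(lut.x0[Y0], lut.x0[Y1]);            // tightest row extents
    int xmax = std::min(lut.x1[Y0], lut.x1[Y1]);
    return (xmin <= X0) && (X1 <= xmax);
}

// error for candidate center (cx, cy): grow a square, stretch it in x and in y,
// keep the bigger rectangle, return polyArea / rectArea (bigger rectangle -> smaller error)
double rectError(const PolyLUT &lut, int cx, int cy, double polyArea)
{
    int h = lut.x0.size(), w = lut.y0.size();
    if (cx < 0 || cx >= w || cy < 0 || cy >= h) return 1e300;
    int r = 0;                                               // grow square around (cx, cy)
    while (cx-r-1 >= 0 && cx+r+1 < w && cy-r-1 >= 0 && cy+r+1 < h
        && fits(lut, cx-r-1, cx+r+1, cy-r-1, cy+r+1)) r++;
    int X0 = cx-r, X1 = cx+r, Y0 = cy-r, Y1 = cy+r;
    if (!fits(lut, X0, X1, Y0, Y1)) return 1e300;            // center not inside polygon
    int ax0 = X0, ax1 = X1;                                  // stretch in x only
    while (ax0-1 >= 0 && fits(lut, ax0-1, ax1, Y0, Y1)) ax0--;
    while (ax1+1 <  w && fits(lut, ax0, ax1+1, Y0, Y1)) ax1++;
    int ay0 = Y0, ay1 = Y1;                                  // stretch in y only
    while (ay0-1 >= 0 && fits(lut, X0, X1, ay0-1, ay1)) ay0--;
    while (ay1+1 <  h && fits(lut, X0, X1, ay0, ay1+1)) ay1++;
    double areaA = double(ax1-ax0+1) * (Y1-Y0+1);
    double areaB = double(X1-X0+1) * (ay1-ay0+1);
    return polyArea / std::max(areaA, areaB);
}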
Just do not forget to use area of polygon / area of rectangle as the approximation error value (so a bigger rectangle means a smaller error and the approx search maximizes the rectangle). This algo results in ~O(n^2.log^2(n)) which is not great but still doable.
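And a hypothetical top level driver gluing the sketches together could look like this (the two approx loops of the pseudocode become two nested approxSearch calls over the rectangle center; the step sizes and recursion counts are just guesses to start from):

#include <algorithm>
#include <utility>
#include <vector>

// returns the approximate best rectangle center (cx, cy)
std::pair<int,int> findCenter(const std::vector<std::vector<bool>> &inside)
{
    int h = inside.size(), w = h ? inside[0].size() : 0;
    double polyArea = 0;                                     // count of inside pixels
    for (const auto &row : inside) for (bool b : row) if (b) polyArea += 1.0;
    PolyLUT lut = buildLUT(inside);

    auto errAt = [&](double cx, double cy)
    { return rectError(lut, int(cx), int(cy), polyArea); };

    // outer approx loop over center x; for every candidate cx the inner
    // approx loop finds the best center y and reports its error upward
    double cx = approxSearch(0, w-1, std::max(1.0, w/8.0), 4, [&](double x)
    {
        double y = approxSearch(0, h-1, std::max(1.0, h/8.0), 4,
                                [&](double yy){ return errAt(x, yy); });
        return errAt(x, y);
    });
    double cy = approxSearch(0, h-1, std::max(1.0, h/8.0), 4,
                             [&](double yy){ return errAt(cx, yy); });
    // the final rectangle is then re-grown from (cx, cy) exactly like rectError
    // does it, just remembering the corners instead of the error value
    return { int(cx), int(cy) };
}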
Another option is to convert your polygon to squares and use bin-packing and/or graph and/or backtracking techniques to grow the biggest rectangle ... but those are not my cup of tea, so I am not confident enough to write an answer about them.