Given an array of real-valued numbers A[1..n,1..n], I wish to find the sub-array B = A[i..j,s..t], with 1 <= i <= j <= n and 1 <= s <= t <= n, such that the sum of the numbers in B is maximized. Is it possible to solve this using dynamic programming? I spoke to one of the OR professors at Aarhus University; he didn't know how to do it, and said that he had difficulty seeing how the problem could have the optimal-substructure property.
But IS it possible? If yes, how? If not, why?
I already know of an algorithm that runs in O(n^3) time, by reducing the problem to n(n+1)/2 subproblems, each solvable in O(n) time, but that seems a bit slow. I know that an optimal algorithm must take Omega(n) time, but I'm hoping that dynamic programming could be used to make it run in O(n^2) time.
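For concreteness, the O(n^3) algorithm I have in mind can be sketched as follows (a minimal Python sketch, assuming the standard reduction: fix a pair of columns, collapse each row to a partial sum, and run the 1-D maximum-subarray scan, i.e. Kadane's algorithm, on the result; the function name is just illustrative):

```python
def max_subarray_2d(A):
    """O(n^3) maximum-sum subarray of an n x n matrix A.

    For each of the n(n+1)/2 column pairs (s, t), collapse every row r
    to the sum of A[r][s..t], then solve the resulting 1-D maximum-sum
    subarray problem in O(n) with Kadane's algorithm.
    """
    n = len(A)
    best = float("-inf")
    for s in range(n):
        row_sums = [0.0] * n              # sum of A[r][s..t] for current t
        for t in range(s, n):
            for r in range(n):
                row_sums[r] += A[r][t]    # extend the column strip by one
            # Kadane's algorithm on row_sums: O(n)
            cur = float("-inf")
            for x in row_sums:
                cur = x if cur < 0 else cur + x
                best = max(best, cur)
    return best
```

Each column pair is one of the n(n+1)/2 subproblems mentioned above, and each is solved by a single O(n) Kadane scan.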
Original question summarized
I added this section because I felt some people had misinterpreted the point of my question. The original question was:
- Is it possible to use dynamic programming to solve the above problem in O(n^2) time? If yes, how? If no, why not?
Additional questions:
I have added a new question here; more might be added later:
- In order to use dynamic programming, I need to make use of solutions to easily solved subproblems (or else the point is moot). The structure of the problem is such that if we take a subarray B = A[1..m,1..m] of A[1..n,1..n] where m < n, then the optimal solution for array B is at most as good as in A, trivially, since the same solution is feasible in A. To use dynamic programming, it is therefore reasonable to ask: what is the relationship between the optimal subarray of A[1..i,1..i] and the optimal subarray of A[1..i+1,1..i+1]?