
This is a general question, which could apply to any given language like C, C++, Java, etc.
I figured that however you implement it, you can't get more efficient than using 2 nested loops, which gives a complexity of n^2.

/* assuming int a[n][n] and int i, j, n are already declared */
for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        a[i][j] = 1;

I was asked this at an interview recently, and couldn't think of anything more efficient. All I got from the interviewer was that I could use recursion or convert the 2D array to a linked list to make it more efficient than n^2. Anyone know if this is possible, and if yes, how? At least theoretically, if not practically.

edit: The actual question gives me the coordinates of two cells, and I have to fill with 1 every cell that lies on some shortest route between them. E.g., if I have a 5x5 matrix and my two coordinates are (2,0) and (3,3), I'd have to fill:
(2,0) (2,1) (2,2) (2,3) (3,0) (3,1) (3,2) (3,3)
while leaving the rest of the cells as they were.
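
In code, the fill for this example could be produced with something like the following C sketch (the function name and parameters are mine, and I'm assuming moves are restricted to the four grid directions, so the cells covered by all shortest routes are exactly the bounding rectangle of the two coordinates):

    /* Hypothetical sketch: mark every cell lying on some shortest
       4-directional route between (r1,c1) and (r2,c2); these cells form
       the bounding rectangle of the two coordinates. */
    void fill_shortest_paths(int n, int a[n][n], int r1, int c1, int r2, int c2)
    {
        int rlo = r1 < r2 ? r1 : r2, rhi = r1 < r2 ? r2 : r1;
        int clo = c1 < c2 ? c1 : c2, chi = c1 < c2 ? c2 : c1;
        for (int i = rlo; i <= rhi; i++)
            for (int j = clo; j <= chi; j++)
                a[i][j] = 1;        /* other cells are left untouched */
    }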

Dominic Maben
  • It's possible to use a different data structure, which can be initialized quicker at the expense of other metrics (e.g. slower lookup, less space-efficient). Is that allowed or do you have to end up with a plain 2D array? –  Jan 19 '13 at 16:06

3 Answers


It depends on what you mean. If the question is about plain arrays, meaning a sequence of contiguous memory locations, and by initialization you mean putting a value in every memory location of this "matrix", then the answer is no: better than O(n*m) is not possible, and we can prove it:

Let us assume that an algorithm fill(A[n][m], init_val) is correct (i.e. it fills all the memory locations of A) and has complexity g(n,m) that is less than O(n*m) (meaning g(n,m) is not part of Ω(n*m)). Then, for big enough n and m, we have g(n,m) < n*m = the number of memory locations. Since filling a memory location requires one operation, the algorithm fill can fill at most g(n,m) locations [actually about half, because it must also spend at least one operation to "select" each next memory location, unless the hardware provides a combined operation], which is strictly fewer than n*m. This implies that fill is not correct, contradicting the assumption.

The same argument applies if filling k memory locations takes constant time; you simply have to choose bigger values of n and m.

As others have already suggested, you can use other data structures to avoid the O(n^2) initialization time. amit's suggestion uses a form of lazy evaluation, which lets you avoid initializing the array at all and do the work only when the elements are accessed.

Note that this removes the Ω(n^2) cost at the beginning, but requires more complex operations to access the array's elements and also requires more memory.

It is not clear what your interviewer meant: converting an array into a linked list requires Ω(L) time (where L is the length of the array), so simply converting the whole matrix into a linked list would already take Ω(n^2) time, before the real initialization even starts. Using recursion does not help at all: you simply end up with recurrences such as T(N) = 2T(N/2) + O(1) over the N = n^2 elements, which solves to Θ(N) and therefore again gives no benefit in asymptotic complexity.
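
To make that concrete, here is a hedged sketch (my own, not something the interviewer or answer provided) of a divide-and-conquer fill over the rows; the recursion only changes the order of the writes, not their number, so the total work stays Θ(n*m):

    /* Recursive fill of rows [lo, hi): each level halves the row range,
       but every one of the n*m cells is still written exactly once. */
    void fill_rec(int m, int a[][m], int lo, int hi, int value)
    {
        if (hi - lo == 0)
            return;
        if (hi - lo == 1) {                 /* base case: one row, m writes */
            for (int j = 0; j < m; j++)
                a[lo][j] = value;
            return;
        }
        int mid = lo + (hi - lo) / 2;
        fill_rec(m, a, lo, mid, value);     /* two halves of the rows, plus */
        fill_rec(m, a, mid, hi, value);     /* Theta(m) work at each leaf   */
    }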

As a general rule, every algorithm has to scan at least all of its input, unless it has some knowledge beforehand (e.g. the elements are sorted). In your case the space to fill is Θ(n^2), so every algorithm that wants to fill it must take at least Ω(n^2) time. Anything below this complexity either makes some assumption (e.g. the memory already contains the initializer value by default -> O(1)) or solves a different problem (e.g. uses lazy arrays or other data structures).

Bakuriu
  • Fantastic answer, thank you Bakuriu. I've edited my question a bit and added details of the actual question; do you think that would change anything? – Dominic Maben Jan 19 '13 at 16:57
  • 1
    @DominicMaben Even with your edit the problem is the same. Was there some kind of rule to follow to fill the matrix? Otherwise the problem is `Ω(n^2)`. – Bakuriu Jan 19 '13 at 17:01
  • Nope. It's precisely what I've specified. Thank you Bakuriu, this confirms what I thought. – Dominic Maben Jan 19 '13 at 17:23

You can initialize an array in O(1), but it consumes triple the amount of space and adds extra "work" to each element access in the matrix.
Since in practice a matrix is a 1D array in memory, the same principle still holds.

The page describes how it can be done in detail.
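
For reference, here is a minimal C sketch of the classic constant-time-initialization trick; this is my reconstruction of the technique the answer alludes to, and all names are mine. Three arrays plus a counter: an index counts as written only if a two-way check between `from` and `to` passes, so the backing memory never needs to be cleared.

    #include <stdlib.h>

    typedef struct {
        int *val, *from, *to;   /* triple the space of a plain array */
        int top;                /* number of cells written so far    */
        int default_val;
    } lazy_array;

    /* O(1) "initialization": no per-element work; the backing arrays are
       deliberately left uninitialized. (Strictly speaking, reading
       indeterminate memory is not well-defined C; this shows the
       conceptual scheme.) */
    lazy_array lazy_new(int n, int default_val)
    {
        lazy_array a = { malloc(n * sizeof(int)),
                         malloc(n * sizeof(int)),
                         malloc(n * sizeof(int)),
                         0, default_val };
        return a;
    }

    static int is_written(const lazy_array *a, int i)
    {
        int f = a->from[i];              /* may be garbage if never set */
        return 0 <= f && f < a->top && a->to[f] == i;
    }

    int lazy_get(const lazy_array *a, int i)
    {
        return is_written(a, i) ? a->val[i] : a->default_val;
    }

    void lazy_set(lazy_array *a, int i, int x)
    {
        if (!is_written(a, i)) {         /* the extra work per access */
            a->from[i] = a->top;
            a->to[a->top++] = i;
        }
        a->val[i] = x;
    }

Reads and writes remain O(1); the price is three times the memory and the extra check on every access, which matches the trade-off described above.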

amit

When you fill a 2D array with the same element and really write every element, at least n^2 operations must be performed (given that the 2D array is n*n).

The only way to decrease the complexity is to use a parallel programming approach. For example, given n processors, the value is first assigned to the first row of the array; this takes n operations. Then each processor Pi copies element i of row k into element i of row k+1, for k = 0 to n-1. This is again O(n), since we have n processors working in parallel.

If you really want to implement this approach, you can look at free parallel programming environments like Open MPI and MPICH.
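
As a practical illustration, here is a hedched sketch using OpenMP rather than MPI (my substitution, since it is shorter to show): the n*n writes are split across the available threads, so the wall-clock time drops roughly by the number of cores, while the total work stays Θ(n^2).

    /* Each thread fills a share of the n*n cells; compile with -fopenmp. */
    void parallel_fill(int n, int a[n][n], int value)
    {
        #pragma omp parallel for collapse(2)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = value;
    }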

woryzower
  • The problem here is that, practically speaking, you can have at most a constant number of processors, which is exactly like filling `k` memory locations with a single operation, and thus the time would still be quadratic. – Bakuriu Jan 19 '13 at 16:36
  • Yes, you're right: practically the number of processors we can have is constant, but it's a theoretically correct approach that solves the problem with a complexity better than O(n^2). – woryzower Jan 19 '13 at 16:44