
Assume that I have an array A = {a, b, c, d, e, f, g, h, ...} and Q queries. In each query I will be asked to perform one of the following operations:

  1. 1 i j -> increase the i-th element by 1 and decrease the j-th element by 1
  2. 2 x -> report the number of elements of the array that are less than x

If there were no update operation I could have done this with a lower bound on a sorted array. I can still do it by re-sorting the array and finding the lower bound, but the complexity will be too high, since the size of array A and Q can both be up to 10^5. Is there any faster algorithm or way to do this?
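For reference, this is roughly the slow version I have in mind (just a sketch, untested, assuming 1-based indices in the queries):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    int n, q;
    std::scanf("%d %d", &n, &q);
    std::vector<long long> a(n);
    for (auto &v : a) std::scanf("%lld", &v);

    while (q--) {
        int type;
        std::scanf("%d", &type);
        if (type == 1) {                       // 1 i j : ++a[i], --a[j]
            int i, j;
            std::scanf("%d %d", &i, &j);
            ++a[i - 1];
            --a[j - 1];
        } else {                               // 2 x : how many elements are < x?
            long long x;
            std::scanf("%lld", &x);
            std::vector<long long> s(a);
            std::sort(s.begin(), s.end());     // O(n log n) on every query -- too slow
            long long k = std::lower_bound(s.begin(), s.end(), x) - s.begin();
            std::printf("%lld\n", k);
        }
    }
    return 0;
}
```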


2 Answers


The simplest way is to use std::count_if.

What complexity bound do you have to meet? (10^5)^2 is still only 10^10.
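For example (a sketch; the helper names are mine): updates are O(1) and each count query is a single O(n) pass.

```cpp
#include <algorithm>
#include <vector>

// Brute force: each count query scans the whole array once.
long long countLess(const std::vector<long long>& a, long long x) {
    return std::count_if(a.begin(), a.end(),
                         [x](long long v) { return v < x; });
}

// Command "1 i j" (assuming 0-based indices here).
void update(std::vector<long long>& a, int i, int j) {
    ++a[i];
    --a[j];
}
```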

If you have to do better than that, I suspect you need a "value" record that has a back pointer to its "index", and an "index" that points to its value. Sort the values initially, and then, when you update, move the changed value to the right place. (It's probably best to check whether the value needs to move at all before searching.)

Then the query is still a lower bound operation.
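A sketch of that idea (the struct and member names are mine, indices assumed 0-based): keep the values sorted together with their original index, and keep a back-pointer array `pos` so each original index knows where its value currently sits.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct SortedView {
    std::vector<std::pair<long long, int>> sorted;  // (value, original index), kept in value order
    std::vector<int> pos;                           // pos[i] = where original index i lives in 'sorted'

    explicit SortedView(const std::vector<long long>& a)
        : sorted(a.size()), pos(a.size()) {
        for (int i = 0; i < (int)a.size(); ++i) sorted[i] = {a[i], i};
        std::sort(sorted.begin(), sorted.end());
        for (int k = 0; k < (int)sorted.size(); ++k) pos[sorted[k].second] = k;
    }

    // Apply "a[i] += delta" (delta is +1 or -1 here) and slide the entry
    // to its correct place, fixing back pointers as we go.
    void add(int i, long long delta) {
        int k = pos[i];
        sorted[k].first += delta;
        while (k + 1 < (int)sorted.size() && sorted[k].first > sorted[k + 1].first) {
            std::swap(sorted[k], sorted[k + 1]);
            pos[sorted[k].second] = k;
            pos[sorted[k + 1].second] = k + 1;
            ++k;
        }
        while (k > 0 && sorted[k].first < sorted[k - 1].first) {
            std::swap(sorted[k], sorted[k - 1]);
            pos[sorted[k].second] = k;
            pos[sorted[k - 1].second] = k - 1;
            --k;
        }
    }

    // Number of elements strictly less than x: still a lower bound.
    long long countLess(long long x) const {
        return std::lower_bound(sorted.begin(), sorted.end(),
                                std::make_pair(x, -1)) - sorted.begin();
    }
};
```

With +1/-1 updates the moved entry only has to slide past the run of values equal to its old value, so the update is cheap unless many elements share the same value.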


Once you sort the array (O(n log n) complexity), a query "LESS(X)" will run in log n time, since you can use binary search. Once you know that X (or, if X is absent, the next larger element of A) is found at position k, you know that k is your answer (k elements are less than X).
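In code, the query side is just a lower bound on the sorted copy (a sketch; `sortedA` is assumed to be the array maintained in sorted order):

```cpp
#include <algorithm>
#include <vector>

// k = number of elements of the sorted array that are strictly less than x.
long long lessThan(const std::vector<long long>& sortedA, long long x) {
    return std::lower_bound(sortedA.begin(), sortedA.end(), x) - sortedA.begin();
}
```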

The (i, j) command implies a partial reorder of the array between the element immediately below min(A[i]+1, A[j]-1) and the element immediately after max(A[i], A[j]). You can find both boundaries in log n time each (worst case log n + n). The following is close to the worst case:

k    0  1  2  3  4  5  6  7  8  9           command: (4, 5)
v    7  14 14 15 15 15 16 16 16 18
                 ^  ^
       becomes 16    becomes 14 -- does it go before 3 or before 1?

The re-sort is then worst case n, since your array is already almost sorted except for two elements, which means you'll do well by using two runs of insertion sort.
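A sketch of that re-sort step, applied to a separately maintained sorted copy `s` (the helper name `relocate` is mine): one `std::lower_bound` to find the element, one `std::rotate` to slide it to its new home, which is exactly one run of insertion over the affected range.

```cpp
#include <algorithm>
#include <vector>

// Replace one occurrence of oldValue in the sorted vector 's' with newValue
// and restore sortedness. Assumes oldValue is present in 's'.
// Cost: O(log n) to locate, O(d) to shift, d = distance the element moves.
void relocate(std::vector<long long>& s, long long oldValue, long long newValue) {
    auto from = std::lower_bound(s.begin(), s.end(), oldValue);  // some copy of oldValue
    *from = newValue;
    if (from != s.begin() && *(from - 1) > newValue) {
        // The element must move left: rotate it down to its new position.
        auto to = std::lower_bound(s.begin(), from, newValue);
        std::rotate(to, from, from + 1);
    } else {
        // The element stays put or must move right.
        auto to = std::lower_bound(from + 1, s.end(), newValue);
        std::rotate(from, from + 1, to);
    }
}

// For command "1 i j", with the original unsorted array 'a' kept alongside:
//   relocate(s, a[i], a[i] + 1);  ++a[i];
//   relocate(s, a[j], a[j] - 1);  --a[j];
```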

So with m update queries and q simple queries you can expect to have

n log n + m*2*(log n + 2*n) + q * log n

complexity. Average case (no pathological arrays, reasonable sparseness, no pathological updates, (j-i) = d << n) will be

( n + 2m + q ) * log n + 2m*d

which is linearithmic. With n = m = q = 10^5, the overall cost stays below 10^7 unless you've got pathological arrays and adversarial queries, in which case the complexity should become quadratic (or maybe even cubic; I haven't examined it closely).
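Spelling out that estimate (a rough calculation, taking log2(10^5) ≈ 17 and assuming the displacement d stays a small constant):

```latex
(n + 2m + q)\log n + 2md
  \approx (10^5 + 2\cdot 10^5 + 10^5)\cdot 17 + 2\cdot 10^5\cdot d
  \approx 6.8\cdot 10^6 + 2\cdot 10^5\cdot d
```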

In a real-world scenario you can also conceivably employ some tricks: remember the last values and positions of the modified indexes i and j, and the position k returned by the last location query. This costs little. On the next query, chances are that you will be able to use one of those three values to prime your binary search and shave off some time.
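For instance, a galloping (exponential) search around the remembered position is one possible way to do that priming (just a sketch; the function name is mine):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Lower bound of x in sorted 's', starting from a cached position 'hint'.
// Grows a bracket around the hint geometrically, then binary-searches only
// inside it: roughly O(log d), where d is how far the answer lies from the
// hint, instead of O(log n) from scratch.
std::size_t lowerBoundWithHint(const std::vector<long long>& s,
                               long long x, std::size_t hint) {
    const std::size_t n = s.size();
    if (hint > n) hint = n;
    std::size_t lo = 0, hi = n, step = 1;
    if (hint < n && s[hint] < x) {
        // Answer lies to the right of the hint: gallop rightwards.
        lo = hint + 1;
        while (hint + step < n && s[hint + step] < x) {
            lo = hint + step + 1;
            step *= 2;
        }
        hi = std::min(hint + step, n);
    } else {
        // Answer lies at or to the left of the hint: gallop leftwards.
        hi = hint;
        while (step <= hint && s[hint - step] >= x) {
            hi = hint - step;
            step *= 2;
        }
        lo = (step <= hint) ? hint - step + 1 : 0;
    }
    return static_cast<std::size_t>(
        std::lower_bound(s.begin() + lo, s.begin() + hi, x) - s.begin());
}
```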
