Given the following merge sort algorithm:

mergesort(A, p, r)
    if (p >= r) return      // constant amount of time
    int q = (p + r) / 2     // constant amount of time
    mergesort(A, p, q)      // these two calls will decide the
    mergesort(A, q + 1, r)  // O(logn) factor inside O(n*logn), right?
    merge(A, p, q, r)       // let's dwell further
merge(A, p, q, r)
    n1 = q - p + 1          // constant time
    n2 = r - q              // constant time
    // let L[1..n1+1] and R[1..n2+1] be new arrays  // idk, let's say constant
    for i = 1 to n1
        L[i] = A[p + i - 1] // shouldn't this take time varying with the size
    for j = 1 to n2         // of the array? also extra space too?
        R[j] = A[q + j]
    L[n1 + 1] = infinity    // sentinels in the extra slots
    R[n2 + 1] = infinity
    i = 1, j = 1            // constant time
    for k = p to r          // everything below this will contribute the O(n)
                            // inside O(n*logn), amirite?
        if L[i] <= R[j]
            A[k] = L[i]
            i++
        else
            A[k] = R[j]
            j++
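To make the pseudocode concrete, here is a minimal runnable Python sketch of the same algorithm, using 0-based indexing and length checks instead of sentinels (my own translation, not from the book):

```python
def merge(A, p, q, r):
    # Copy the two halves into temporary arrays. This copying is the
    # O(n)-per-merge work and the O(n) extra space the question is about.
    L = A[p:q + 1]
    R = A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # Take from R when L is exhausted, or when R's head is smaller.
        if j >= len(R) or (i < len(L) and L[i] <= R[j]):
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def mergesort(A, p, r):
    if p >= r:              # zero or one element: already sorted
        return
    q = (p + r) // 2
    mergesort(A, p, q)
    mergesort(A, q + 1, r)
    merge(A, p, q, r)

A = [5, 2, 4, 7, 1, 3, 2, 6]
mergesort(A, 0, len(A) - 1)
print(A)                    # [1, 2, 2, 3, 4, 5, 6, 7]
```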
How come we estimate O(nlogn) time complexity for it, keeping in mind that left and right arrays are being created to be merged back? And how come the space complexity is only O(n) if extra space is being used? Won't both of them grow by more than that, since filling up the arrays takes O(n) and L[] and R[] are being created at each recursion step?
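To poke at this myself, I instrumented a rough sketch (the counters `live`, `peak`, and `total` are my own, just for illustration): the temporaries of one merge are released before any merge at the same level runs, so the peak *live* extra space never exceeds n, while the *total* elements copied across all merges is about n per level times log n levels.

```python
live = peak = total = 0  # illustrative counters, not part of the algorithm

def merge(A, p, q, r):
    global live, peak, total
    L, R = A[p:q + 1], A[q + 1:r + 1]
    live += len(L) + len(R)   # temporaries now alive
    total += len(L) + len(R)  # elements copied out in this merge
    peak = max(peak, live)
    i = j = 0
    for k in range(p, r + 1):
        if j >= len(R) or (i < len(L) and L[i] <= R[j]):
            A[k], i = L[i], i + 1
        else:
            A[k], j = R[j], j + 1
    live -= len(L) + len(R)   # L and R die when the merge finishes

def mergesort(A, p, r):
    if p >= r:
        return
    q = (p + r) // 2
    mergesort(A, p, q)
    mergesort(A, q + 1, r)
    merge(A, p, q, r)

A = list(range(16, 0, -1))    # n = 16, reverse-sorted input
mergesort(A, 0, len(A) - 1)
print(peak, total)            # 16 64  (peak = n, total = n*log2(n) for n a power of two)
```

So only one merge's temporaries exist at a time, which is why the extra-space bound stays O(n) even though O(n log n) elements get copied in total.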