Given an integer array containing two consecutive sorted sequences: `b1` and `b2` are the starting positions of the first and second sequences, and `last` is the ending position of the second sequence. That is, `array[b1]` to `array[b2-1]` and `array[b2]` to `array[last]` are each sorted. How can the two sequences be merged in place using O(n) time and O(1) space?
6 Answers
Kronrod's merge was the first published algorithm to do that. It goes roughly like this:
Split both parts of the array into blocks of size `k = sqrt(n)`. Sort the blocks using their first elements as the basis for comparison. This can be done in `sqrt(n)^2 = O(n)` by selection sort. The key property of selection sort here is that it performs a constant number of moves per block, so only the number of comparisons is quadratic (in the number of blocks).
After this phase, for each element `A[i]` in the array there are at most `k-1` elements "wrongly sorted" below it, that is, elements at positions `j < i` such that `A[j] > A[i]`. These are (possibly) in the closest block below it that comes from the other merged part. Note that the first element of that block (and of all blocks below it) is already properly sorted relative to `A[i]`, because the blocks were sorted on their first elements. This is why the second phase works, i.e. achieves the fully sorted array:
Now merge the first block with the second, then second with the third, etc., using the last 2 blocks as temporary space for the output of the merge. This will scramble the contents of the last two blocks but in the last phase they (together with the preceding block) can be sorted by selection sort in sqrt(n)^2=O(n) time.
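The first phase above (selection-sorting the blocks by their first elements) can be sketched as follows. This is only an illustration of that phase, not a full Kronrod merge; the class and method names are made up, and it assumes the array length is an exact multiple of the block size:

```java
import java.util.Arrays;

public class BlockSort {
    // Phase 1 of Kronrod's merge: selection-sort the k-sized blocks,
    // comparing blocks by their first element. With k = sqrt(n) this is
    // O((n/k)^2) = O(n) comparisons and O(n/k) block swaps of k moves
    // each = O(n) moves.
    static void sortBlocks(int[] a, int k) {
        int blocks = a.length / k;   // assumes a.length is a multiple of k
        for (int i = 0; i < blocks; i++) {
            int min = i;
            for (int j = i + 1; j < blocks; j++) {
                if (a[j * k] < a[min * k]) min = j;
            }
            if (min != i) swapBlocks(a, i * k, min * k, k);
        }
    }

    static void swapBlocks(int[] a, int p, int q, int k) {
        for (int t = 0; t < k; t++) {
            int tmp = a[p + t]; a[p + t] = a[q + t]; a[q + t] = tmp;
        }
    }

    public static void main(String[] args) {
        // two sorted halves of a 16-element array, block size k = 4
        int[] a = {1, 4, 9, 12, 13, 14, 15, 16, 2, 3, 5, 6, 7, 8, 10, 11};
        sortBlocks(a, 4);
        // each element now has at most k-1 wrongly sorted elements below it
        System.out.println(Arrays.toString(a));
        // prints [1, 4, 9, 12, 2, 3, 5, 6, 7, 8, 10, 11, 13, 14, 15, 16]
    }
}
```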

- Right, I misunderstood the algorithm that you described :) Pretty neat! – Michael Nett Oct 28 '11 at 10:12
- It's a pretty neat method and well explained in a single paragraph. Thanks Rafal! – Yo Hsiao Oct 07 '14 at 19:04
- Seems to be `O(n sqrt(n))`. You merge the blocks `sqrt(n)` times and every time the array is larger by `sqrt(n)`, since you merge first with second then that result with third and so on. If you add complexity of those merges you get `O(sqrt(n)+2*sqrt(n)+...+sqrt(n)*sqrt(n))`, which equals `O(n sqrt(n))`. – 2501 Jan 10 '15 at 21:20
- @2501 In the merge phase, you only need to merge the upper half of the previous result (that is, one block) with the next block. – Rafał Dowgird Jan 16 '15 at 20:29
- @RafałDowgird Arrays that aren't divisible by sqrt(n) really complicate the algorithm though. – 2501 Jan 17 '15 at 09:00
- @2501 Very true. Kronrod's isn't really a practical algorithm. I don't think any of the in-place merges are. Disclaimer: I might have missed the newest research on this. – Rafał Dowgird Jan 18 '15 at 19:26
- @RafałDowgird Kronrod's algorithm fails if there aren't enough unique elements in the array, since the sorting of blocks isn't stable (which is required for the correct execution of the algorithm). – dhruvbird Mar 28 '15 at 05:50
- This algorithm is explained in the third volume of TAOCP in the solution of one of the exercises of section 5.2.4. – Gil Vegliach Sep 23 '15 at 07:59
This is by no means a simple problem. It is possible, but rarely done in practice, because it is so much more complicated than a standard merge using O(n) scratch space. Huang and Langston's paper has been around since the late '80s, though practical implementations didn't really surface until later. Earlier still, L. Trabb-Prado's 1977 paper predates Huang and Langston significantly, but I'm hard-pressed to find the exact text of that paper; only references abound.
An excellent later publication, Asymptotically efficient in-place merging (1995) by Geffert, Katajainen, and Pasanen, gives good coverage of multiple algorithms and references Trabb-Prado's contributions to the subject.
- [Stable Sorting and Merging with Optimal Time and Space Bounds](http://i.stanford.edu/pub/cstr/reports/cs/tr/74/470/CS-TR-74-470.pdf) Trabb Pardo, Luis I. (The name seems to be spelled differently quite often.) – greybeard Dec 23 '16 at 20:48
There are such things as true in-place merges, but they are not straightforward enough for anybody to independently reinvent in the middle of an interview - there have been papers describing a succession of pretty complex algorithms for this for years. One is Practical In-Place Merging, by Huang and Langston, CACM March 1988. The starting idea is to divide the data of length n into blocks of size sqrt(n), and use one block, filled with the largest elements of the data, as buffer space for merging the others. The introduction to that paper says:
"Given two sorted lists whose lengths sum to n, the obvious methods for merging in O(n) steps require a linear amount of extra memory as well. On the other hand, it is easy to merge in place using only a constant amount of additional space by heap-sorting, but at a cost of O(n log n) time"
Hence I claim that true merging in place can be done but is non-obvious.
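The O(n log n)-time, O(1)-space baseline that the quote mentions is easy to make concrete: simply heapsort the concatenated array, ignoring the fact that the two halves are already sorted. A minimal sketch (class and method names are illustrative):

```java
import java.util.Arrays;

public class HeapMerge {
    // The trivial in-place "merge": ignore the presorted halves and heapsort
    // the whole array. O(1) extra space, but O(n log n) time instead of O(n).
    static void heapsort(int[] a) {
        int n = a.length;
        // build a max-heap bottom-up
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);
        // repeatedly move the max to the end and shrink the heap
        for (int end = n - 1; end > 0; end--) {
            int t = a[0]; a[0] = a[end]; a[end] = t;
            siftDown(a, 0, end);
        }
    }

    static void siftDown(int[] a, int root, int n) {
        while (2 * root + 1 < n) {
            int child = 2 * root + 1;
            if (child + 1 < n && a[child + 1] > a[child]) child++;
            if (a[root] >= a[child]) return;
            int t = a[root]; a[root] = a[child]; a[child] = t;
            root = child;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 2, 4, 6, 8};   // two sorted runs
        heapsort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```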

Though it is not possible entirely in O(n) time, I have a proposition to do it faster than O(n^2). I use only O(1) space, which is `temp` in my code. I am sure it should run better than O(n^2).
private static int[] mergeSortedArrays(int[] a1, int[] a2) {
    int i = 0, j = 0;
    // a1 is padded with Integer.MIN_VALUE sentinels to make room for a2
    while (a1[i] != Integer.MIN_VALUE) {
        if (a1[i] > a2[j]) {
            // swap the smaller head of a2 into a1 ...
            int temp = a1[i];
            a1[i] = a2[j];
            a2[j] = temp;
            // ... then one bubble pass re-inserts the swapped value
            // into its proper place, keeping a2 sorted
            for (int k = 1; k < a2.length; k++) {
                if (a2[k - 1] > a2[k]) {
                    temp = a2[k - 1];
                    a2[k - 1] = a2[k];
                    a2[k] = temp;
                }
            }
        }
        i++;
    }
    // copy the remaining elements of a2 over the sentinels
    while (j < a2.length) {
        a1[i++] = a2[j++];
    }
    return a1;
}
- I have assumed that neither array contains Integer.MIN_VALUE, since my code fails in that scenario. I have padded a1 with Integer.MIN_VALUE after its integers, so that we can append the a2 elements to a1 itself once a1 is finished processing, finally returning a1 with the merged elements of a2. – Hari Mar 05 '16 at 05:00
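To make the sentinel convention from that comment concrete, here is a hypothetical driver for the approach above; the method is reproduced verbatim so the snippet runs standalone, and the class name is made up:

```java
import java.util.Arrays;

public class SentinelMerge {
    // Same method as in the answer above: a1 carries Integer.MIN_VALUE
    // sentinels at its tail, reserving room for all of a2.
    static int[] mergeSortedArrays(int[] a1, int[] a2) {
        int i = 0, j = 0;
        while (a1[i] != Integer.MIN_VALUE) {
            if (a1[i] > a2[j]) {
                int temp = a1[i];
                a1[i] = a2[j];
                a2[j] = temp;
                // one bubble pass restores a2's sorted order
                for (int k = 1; k < a2.length; k++) {
                    if (a2[k - 1] > a2[k]) {
                        temp = a2[k - 1]; a2[k - 1] = a2[k]; a2[k] = temp;
                    }
                }
            }
            i++;
        }
        while (j < a2.length) a1[i++] = a2[j++];
        return a1;
    }

    public static void main(String[] args) {
        // a1 has 3 real elements followed by 3 sentinels for a2's elements
        int[] a1 = {1, 3, 5, Integer.MIN_VALUE, Integer.MIN_VALUE, Integer.MIN_VALUE};
        int[] a2 = {2, 4, 6};
        System.out.println(Arrays.toString(mergeSortedArrays(a1, a2)));
        // prints [1, 2, 3, 4, 5, 6]
    }
}
```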
Here is a solution that uses O(1) memory and performs n-1 swaps.
import java.util.Arrays;

/**
 * Created by deian on 2016-12-22.
 * We just need to track the two smallest numbers.
 */
public class Merge {

    public static void swap(int[] a, int i1, int i2) {
        int t = a[i1];
        a[i1] = a[i2];
        a[i2] = t;
    }

    public static void merge(int[] a) {
        // i1 and i2 always point to the smallest known numbers of each run;
        // it would work just as well with two m- and n-sized arrays
        int i1 = 0;
        int i2 = a.length / 2;

        System.out.printf("     %s, i(%d,%d) \n", Arrays.toString(a), i1, i2);
        for (int di = 0; di < a.length - 1; di++) {
            int ni;
            int oi1 = i1, oi2 = i2;
            if (a[i1] > a[i2]) {
                ni = i2; i2++;
                if (i2 >= a.length) { i2--; }
            } else {
                ni = i1; i1++;
                if (i1 >= i2) { i1 = di; }
            }
            if (di == i1) { i1 = ni; }
            swap(a, di, ni);
            System.out.printf("#%d:  %s, i(%d,%d)s(%d>%d)i(%d,%d) \n", di + 1, Arrays.toString(a), oi1, oi2, ni, di, i1, i2);
        }
        System.out.printf("     %s\n", Arrays.toString(a));
    }

    public static void main(String[] args) {
        // int[] a = new int[]{1, 3, 6, 8, -5, -2, 3, 8};
        // int[] a = new int[]{1, 3, 6, 8, -5, 2, 3, 8};
        // int[] a = new int[]{1, 5, 6, 8, -5, 2, 3, 4};
        // int[] a = new int[]{1, 5, 6, 8, -5, -2, -1, 4};
        // int[] a = new int[]{ 1, 2, 3, 4, 5, 6, 7, 8};
        // int[] a = new int[]{5, 6, 7, 8, 1, 2, 3, 4};
        int[] a = new int[]{1, 3, 5, 7, 2, 4, 6, 8};
        merge(a);
    }
}

I had an interview (with a very important company) a couple of hours ago and I was asked exactly that. Here is the answer in Java:
public static void main(String[] args) {
    int A[] = { 1, 3, 5, 6, 9 };
    int B[] = new int[12];
    B[0] = 3;
    B[1] = 6;
    B[2] = 8;
    B[3] = 10;
    B[4] = 11;
    B[5] = 13;
    B[6] = 15;
    mergeInB(A, B, 7);
    for (int n : B)
        System.out.print(n + " ");
}

/**
 * @param a
 * @param b - it will be modified
 * @param j - number of valid elements at the front of b
 */
public static void mergeInB(int[] a, int[] b, int j) {
    int i = a.length - 1, k;
    j--;
    // walk both arrays from the back, writing the larger
    // element into the free tail of b
    for (k = b.length - 1; k >= 0; k--) {
        if (i >= 0 && j >= 0) {
            if (a[i] > b[j]) {
                b[k] = a[i];
                i--;
            } else {
                b[k] = b[j];
                j--;
            }
        } else break;
    }
    // copy whatever remains of a or of b's prefix
    while (i >= 0 && k >= 0) {
        b[k] = a[i];
        k--;
        i--;
    }
    while (j >= 0 && k >= 0) {
        b[k] = b[j];
        j--;
        k--;
    }
}
