What is the running time of declaring an array of size n in Java? I suppose this would depend on whether the memory is zeroed out during garbage collection (in which case it could be O(1)) or on initialization (in which case it would have to be O(n)).
-
I would think that would be JVM dependent – Yet Another Geek Apr 12 '11 at 19:59
-
I guess the tongue-in-cheek, pedantic answer is that it's `O(1)`, because even if it's `O(n)`, `n` is bounded by `2^31` for Java arrays and is thus asymptotically lower than some large constant. – Mark Peters Apr 12 '11 at 20:33
-
@Mark, in this case, every computation is O(1) space, since the number of atoms on Earth is finite. :P – amit Apr 12 '11 at 20:41
-
@amit, well, put another way: since the computer has finite memory, any *terminating* program runs in `O(1)`. – aioobe Apr 12 '11 at 20:50
-
@amit: Darn, that answer could have saved me a lot of headaches in uni! Yeah obviously that was a ridiculous comment. – Mark Peters Apr 12 '11 at 21:04
-
Thank you for all the excellent information. I wish I could accept all your answers! – Mala Apr 13 '11 at 06:34
4 Answers
It's O(n). Consider this simple program:
public class ArrayTest {
    public static void main(String[] args) {
        int[] var = new int[5];
    }
}
The bytecode generated is:
Compiled from "ArrayTest.java"
public class ArrayTest extends java.lang.Object{
public ArrayTest();
Code:
0: aload_0
1: invokespecial #1; //Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: iconst_5
1: newarray int
3: astore_1
4: return
}
The instruction to look at is the newarray instruction (just search for newarray). From the VM Spec:
A new array whose components are of type atype and of length count is allocated from the garbage-collected heap. A reference arrayref to this new array object is pushed onto the operand stack. Each of the elements of the new array is initialized to the default initial value for the type of the array (§2.5.1).
Since each element is being initialized, it takes O(n) time.
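That per-element guarantee is easy to observe: every freshly allocated array reads back its element type's default value. A minimal check (class name is my own):

```java
public class DefaultInit {
    public static void main(String[] args) {
        int[] ints = new int[4];
        boolean[] flags = new boolean[4];
        String[] refs = new String[4];
        // Every element carries the element type's default value.
        System.out.println(ints[2] + " " + flags[2] + " " + refs[2]); // prints "0 false null"
    }
}
```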
EDIT
Looking at the link amit provided, it is possible to implement array initialization with a default value in constant time, so I guess it ultimately depends on the JVM. You could do some rough benchmarking to see if this is the case.
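For reference, that constant-time trick can be sketched in Java. This is a hypothetical illustration of the algorithm (the class and field names are my own), not how any real JVM behaves; in Java the backing arrays are zeroed on allocation anyway, so the constant-time benefit only exists in a language where allocation leaves memory uninitialized:

```java
/**
 * Constant-time "initialization" via two bookkeeping arrays and a counter.
 * A slot i counts as written only if when[i] points into the prefix of who[]
 * that has actually been filled, and who[when[i]] points back at i. Garbage
 * values in when[] fail that cross-check, so reads of untouched slots fall
 * back to the default value without ever zeroing the array.
 */
class LazyInitArray {
    private final int[] data;  // payload; conceptually uninitialized
    private final int[] when;  // when[i] = k if data[i] was the k-th slot written
    private final int[] who;   // who[k] = index of the k-th slot written
    private int writes;        // number of distinct slots written so far
    private final int defaultValue;

    LazyInitArray(int size, int defaultValue) {
        data = new int[size];
        when = new int[size];
        who = new int[size];
        this.defaultValue = defaultValue;
    }

    private boolean isWritten(int i) {
        int k = when[i];
        return k < writes && who[k] == i;   // cross-check defeats garbage
    }

    int get(int i) {
        return isWritten(i) ? data[i] : defaultValue;
    }

    void set(int i, int value) {
        if (!isWritten(i)) {
            when[i] = writes;
            who[writes] = i;
            writes++;
        }
        data[i] = value;
    }
}
```

The trade-off is three words of memory per element plus an extra indirection on every access, which is one reason a comment below notes it is a poor fit for general-purpose JVM arrays.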

-
amit posted a link to [@Jonathan's answer](http://stackoverflow.com/questions/5640850/java-whats-the-big-o-time-of-declaring-an-array-of-size-n/5640892#5640892) for how you could do lazy initialization within the JVM code. I don't see how it would break the spec to do so, as long as from the POV of the array interface (reads and writes) the array is initialized to the correct values. – Mark Peters Apr 12 '11 at 20:21
-
@Mark Peters, interesting. I guess it would depend on whether that particular JVM implements it using the lazy-initialization method or not. Pretty cool algorithm! – Vivin Paliath Apr 12 '11 at 20:25
-
Yeah it is. The only worry I'd have would be whether it's possible to do this and also allow native methods to interact with an array via JNI. – Mark Peters Apr 12 '11 at 20:28
-
True. If JNI provides direct access to the array, then this might be a problem because the locations will contain garbage data. – Vivin Paliath Apr 12 '11 at 20:30
A small, non-professional benchmark on JRE 1.6:
public static void main(String[] args) {
    long start = System.nanoTime();
    int[] x = new int[50];
    long smallArray = System.nanoTime();
    int[] m = new int[1000000];
    long bigArray = System.nanoTime();
    System.out.println("big:" + (bigArray - smallArray));
    System.out.println("small:" + (smallArray - start));
}
gave the following result:
big:6133612
small:6159
So I assume O(n). Of course, this is not enough to be sure, but it's a hint.
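A slightly more careful variant (still far from a rigorous microbenchmark, as the comments below point out) would let the JIT warm up and average several runs per size; the class name, size range, and repetition count here are my own arbitrary choices:

```java
public class ArrayAllocBench {
    // Time a single allocation of an int[] of the given size, in nanoseconds.
    static long timeAlloc(int size) {
        long start = System.nanoTime();
        int[] a = new int[size];
        long elapsed = System.nanoTime() - start;
        a[size / 2] = 1;  // touch the array so the allocation cannot be optimized away
        return elapsed;
    }

    public static void main(String[] args) {
        final int reps = 20;
        for (int size = 1_000; size <= 10_000_000; size *= 10) {
            // Warm-up pass so the JIT has a chance to compile timeAlloc.
            for (int i = 0; i < reps; i++) timeAlloc(size);
            long total = 0;
            for (int i = 0; i < reps; i++) total += timeAlloc(size);
            System.out.println(size + ": " + (total / reps) + " ns (avg of " + reps + " runs)");
        }
    }
}
```

Averaging over repetitions reduces the impact of one-off costs (GC pauses, page faults) that a single timed allocation would fold into its measurement.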
-
@glowcoder: for the -1: I agree, microbenchmarking is the devil. That's why I indicated nothing is conclusive and this is very non-professional; I just wanted to give a hint to continue the discussion. Thanks for the +10. – amit Apr 12 '11 at 20:46
-
The biggest array is 2000 times bigger, while it takes only 1000 times more time. So, are you sure it is O(n)? I really don't know anything about big-o, but to me it looks like: O(n/2) – Martijn Courteaux Apr 12 '11 at 21:36
-
@Martijn: O(n/2) = O(n). Also you can assume there is some overhead, common to both initializations, regardless of the size... and as I said, this 'benchmark' proves nothing; it just suggests that the size of the array has an effect on initialization time, and the O(n) is just an assumption. – amit Apr 12 '11 at 21:40
-
@amit, the code is executed in interpreted mode with zero optimizations whatsoever. Writing microbenchmarks in Java is hard – bestsss Apr 13 '11 at 11:47
-
@MartijnCourteaux to be fair, you can draw pretty much any order function through two points just as easily as a line ;) – Mala Jul 09 '12 at 23:32
-
The time may be caused only by finding and allocating more space. And did you test on every kind of JVM, and a given JVM under every condition? – Bruce Zu Aug 17 '16 at 00:54
Let's just test it.
class ArrayAlloc {
    static long alloc(int n) {
        long start = System.nanoTime();
        long[] var = new long[n];
        long total = System.nanoTime() - start;
        var[n / 2] = 8;
        return total;
    }

    public static void main(String[] args) {
        for (int i = 1; i < 100000000; i += 1000000) {
            System.out.println(i + "," + alloc(i));
        }
    }
}
And the results on my Linux laptop (i7-4600M @ 2.90GHz):
[plot: allocation time in ns vs. array size]
So it clearly looks like O(n), but it also looks like it switches to a more efficient method at around 5 million elements.

I am pretty sure that it is O(n), as the memory is initialized when the array is allocated. It should not be higher than O(n), and I can see no way to make it less than O(n), so that seems the only option.
To elaborate further: Java initializes arrays on allocation. There is no way to zero a region of memory without walking across it, and the size of the region dictates the number of instructions, so the lower bound is O(n). Also, it would make no sense to use a zeroing algorithm slower than linear when a linear one exists, so the upper bound is also O(n). Therefore, O(n) is the only answer that makes sense.
Just for fun, though, imagine a weird piece of hardware where the OS controls the power to individual regions of memory and can zero a region by flipping the power off and on again. That seems like it would be O(1). But a region can only be so large before the idea stops being useful (you wouldn't want to lose everything), so asking to zero a region would still be O(n), just with a large constant divisor.

-
Here is a way to do it better (in terms of big O): http://eli.thegreenplace.net/2008/08/23/initializing-an-array-in-constant-time/ - I have no idea how the JVM actually works – amit Apr 12 '11 at 20:05
-
The JVM may decide not to nil the memory if it can be proven it's first written, then read: Array.clone, System.arraycopy, Arrays.copyOf and so on. – bestsss Apr 12 '11 at 21:00
-
@amit That's pretty cool. I doubt the JVM does that, due to the large amount of overhead, but still cool. @bestsss Good point. I was assuming the general case of declaring `type[len] array;` – Jonathan Apr 12 '11 at 21:07
-
You could use a DMA controller for this purpose. It's still O(n), but the CPU can go do something else in the meantime. – Matt K Apr 12 '11 at 21:10
-
@amit, the JVM tries to remove even bounds checks on arrays; the proposed method is less efficient (outside microbenchmarks) for general use. – bestsss Apr 12 '11 at 23:31