
According to Wikipedia, accessing any single element in an array takes constant time as only one operation has to be performed to locate it.

To me, what happens behind the scenes probably looks something like this:

a) The search is done linearly (e.g. I want to access element 5. I begin the search at index 0, if it's not equal to 5, I go to index 1 etc.) This is O(n) -- where n is the length of the array

b) If the array is stored as a B-tree, this would give O(log n)

I see no other approach.

Can someone please explain why and how this is done in O(1)?

conectionist
  • You can calculate the offset of the element based on the index. That calculation takes constant time, something like `size_of_an_element * (index - 1)` – Apr 16 '14 at 08:17
  • Element 5 is at index 5. That's what "index" means when applied to an array. You don't have to scan indices 0 through 4. – librik Apr 16 '14 at 08:19

5 Answers


An array starts at a specific memory address `start`. Each element occupies the same number of bytes, `element_size`. The elements are laid out one after another in memory from the start address on, so you can calculate the memory address of element `i` as `start + i * element_size`. This computation is independent of the array size and is therefore O(1).
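As a sketch of this arithmetic (the names `start`, `index`, and `element_size` are illustrative, not tied to any particular language or machine):

```python
def element_address(start, index, element_size):
    """Address of element `index` in an array that begins at address
    `start`, where every element occupies `element_size` bytes."""
    return start + index * element_size

# One multiply and one add, no matter how long the array is: O(1).
print(element_address(1000, 0, 4))  # 1000
print(element_address(1000, 5, 4))  # 1020
```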

SpaceTrucker
  • But the computation of this memory address which involves a multiplication and an addition operation would itself scale with `n` right? Hence it's not really `O(1)`? – pikachuchameleon Jul 12 '19 at 11:21
  • @pikachuchameleon Strictly speaking you are correct. But for practical computer systems with fixed-width registers of size `k` (for example 32 or 64 bits), `n` is usually representable with `k` bits (`n < 2^k`). So in this case the multiplication and addition are done in constant time. – SpaceTrucker Jul 12 '19 at 14:30
  • very good answer, short and clear. thanks – გენო მუმლაძე Feb 22 '22 at 14:39

In theory, the elements of an array have the same known size and are located in a contiguous part of memory, so if the beginning of the array is located at memory address A, you can access any element by computing its address like this:

A + item_size * index

which is a constant-time operation.

almendar

Accessing a single element is NOT the same as finding an element whose value is x.

Accessing element i means getting the element at the i'th position of the array.

This is done in O(1) because, given the index, the beginning of the array, and the size of each element, working out where the element is located takes a constant number of arithmetic operations.

RAM offers constant time (or, to be more exact, bounded time) to access each address, and since finding the address is O(1) and retrieving the element at it is also O(1), the total is O(1).

Finding an element whose value is x is actually an Omega(n) problem, unless there is some more information about the array (that it is sorted, for example).
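The distinction can be sketched in Python (an illustrative sketch, not tied to any particular memory model):

```python
def access(arr, i):
    # O(1): the runtime jumps straight to position i via address arithmetic.
    return arr[i]

def find(arr, x):
    # Omega(n) in the worst case: every element may have to be inspected.
    for i, v in enumerate(arr):
        if v == x:
            return i
    return -1

data = [7, 3, 9, 1]
print(access(data, 2))  # 9
print(find(data, 1))    # 3
print(find(data, 42))   # -1
```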

amit
  • It's big-O, not big-Ω `;-)`. – rubenvb Apr 16 '14 at 08:30
  • @rubenvb No, it's `Omega(n)` - this is a problem that cannot be done better than `c*n` (for some constant `c`) , same as 'sorting is `Omega(nlogn)` problem. Any solution to find an element will be `Omega(n)`. We can think of (extremely inefficient) solutions that won't be `O(n)`. – amit Apr 16 '14 at 08:36

Usually an array is organized as a contiguous block of memory where each position can be accessed by means of an index calculation. This index calculation cannot be done in constant time for arrays of arbitrary size, but for reasons of addressable space, the numbers involved are bounded by the machine word size, so an assumption of constant time is justified.

Codor
  • `This index calculation cannot be done in constant time for arrays of arbitrary size` -> What do you mean by this? – rubenvb Apr 16 '14 at 08:29
  • I mean that in every actual computer (or, say, computing environment) there is an upper bound on the addressable space. In the 80s, 64 KB or 128 KB (with memory extensions and bank switching) was typical, which later (on older Windows machines) became 4 GB. Today, memory space addressable by 64 bits is usual. To put it all in a nutshell: actual addressable space is usually bounded by an architecture-dependent constant, so the time for index calculation is also bounded by a constant. Some computational models take this into account, others don't. – Codor Apr 16 '14 at 08:38

Say you have an array of ints. Each int is 32 bits in Java, so if you have, for example, 10 integers, you allocate 320 bits of memory. The computer then knows:

index 0 is at some memory address, for example 39200

the array ends at 39200 + the total size of the array = 39200 + 320 = 39520 (the last element starts at 39200 + 9*32 = 39488)

So if you want to access index 3, it is at 39200 + 32*3 = 39296.

(For simplicity, the addresses here are counted in bits; real machines address memory in bytes, so a 32-bit int actually contributes a 4-byte stride.)

It all depends on how much memory the objects you store in the array take. Just remember that an array occupies one contiguous block of memory.
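Putting the numbers from this example into code (addresses counted in bits, as in the text; the base address 39200 is hypothetical):

```python
BASE = 39200   # hypothetical address of index 0, from the example above
INT_SIZE = 32  # bits per Java int
LENGTH = 10    # number of elements in the example array

def address_of(index):
    # Same constant-time calculation regardless of the index requested.
    return BASE + index * INT_SIZE

print(address_of(3))            # 39296 = 39200 + 32*3
print(address_of(LENGTH - 1))   # 39488, where the last element starts
print(BASE + LENGTH * INT_SIZE) # 39520, one past the end of the array
```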

Miljan Rakita
  • The last element's index will be at 39200 + 9 * 32 + 1 = 39489. Generally, if you want to access the n'th element's value, the computer will read `size_of_type_of_array` bits starting from `array_offset_in_memory + (n - 1) * size_of_type_of_array + 1`. – nickolay Aug 17 '19 at 13:44