
If I were to define the following array using the zero-fill initialization syntax on the stack:

int arr[ 10 ] = { 0 };

... is the run time constant or linear?

My assumption is that it's a linear run time -- based only on the fact that, like calloc, the initialization must go over every byte to zero-fill it.

If you could also provide a why, and not just "it's order xxx", that would be tremendous!

Jacob Pollack
  • "is the run time linear or constant?" Yes. It's linear in the length of the array, which is a constant. – Casey Aug 09 '13 at 23:14

2 Answers


The runtime is linear in the array size.

To see why, here's a sample implementation of memset, which sets every byte of a region to an arbitrary value. At the assembly-language level, this is no different from what goes on when your array is zero-filled.

void *memset(void *dst, int val, size_t count) {
    unsigned char *start = dst;  /* byte pointer: void * doesn't support arithmetic */
    for (size_t i = 0; i < count; i++)
        *start++ = (unsigned char)val;  /* one store per byte: O(count) */
    return dst;
}

Of course, compilers will often use intrinsics to set multiple array elements at a time. Depending on the size of the array and things like alignment and padding, this might make the runtime over array length more like a staircase, with the step size based on the vector length. Over small differences in array size, this would effectively make the runtime constant, but the general pattern is still linear.

1''
  • `void *memset(void *dst, int val, size_t count)`... Also, `while (count--)` is broken if `count` is 0. –  Aug 09 '13 at 23:20
  • @H2CO3 Yes, it's a non-standard prototype, but it was the first example I found on a Google search, and the general idea is the same. I'll edit. – 1'' Aug 09 '13 at 23:21
  • Yeah, I see, thanks. (You need to convert to `char *` inside the function, though, because `void *` doesn't appreciate pointer arithmetic.) –  Aug 09 '13 at 23:22
  • @H2CO3, is the OP of this response correct with his analysis of it being linear time with respect to the length? I am waiting for up votes before I choose an answer to ensure anyone else with this question looks at a correct answer. I completely agree with this answer, it does back up my assumption in greater detail. – Jacob Pollack Aug 09 '13 at 23:28
  • 2
    @JacobPollack Yes, it's linear *with respect to the size of the array.* If you have a constant-size array, then that is *constant time* (read: `O(1)`). –  Aug 09 '13 at 23:29
  • @H2CO3, can you clarify what you mean by constant-size array? How would an initialization be non-constant? – Jacob Pollack Aug 09 '13 at 23:31
  • 1
    @JacobPollack `int arr[10]` - 10 is a constant, the size of the array will always be `10 * sizeof(int)`, no matter what. –  Aug 09 '13 at 23:32
  • @H2CO3, From what I am gathering from this response and the follow-ups, it's constant run time (read: O(1)) because the number of bytes allocated to the array is known (hence 10 in my example). Is this a correct interpretation? – Jacob Pollack Aug 09 '13 at 23:40
  • @JacobPollack Yes, pretty much it is. –  Aug 09 '13 at 23:42

This is actually a tip-of-the-iceberg question. What you are really asking is: what is the order (Big O) of initializing an array? Essentially, the generated code loops through each element of the array and sets it to zero. You could write a for loop that does the same thing.

That loop is O(n): the time spent in it grows in proportion to the number of elements being initialized.

If the hardware supported an instruction that set all bytes from location X to location Y to zero, and that instruction always took M cycles regardless of the number of bytes being zeroed, then the initialization would be of constant order, O(1).

In general, O(1) is referred to as constant time and O(n) as linear time.

JackCColeman