
Taken from a coding interview:

How would you implement an infinite binary counter with increment in O(1) time complexity?

I thought to calculate the first and second position of the rightmost 0, but I'm not sure how to implement this.

"Infinite counter" means you can increment an infinite number of times (greater than MAX_INT).

Elad Benda

4 Answers

7

For a binary counter...

If you want to keep the counter in a "normal" bit pattern, you fundamentally can't - at least not in worst-case O(1), only amortized O(1).

If it's an infinite counter, it can have arbitrarily many bits. That means you can have a number of N bits, all of which are 1. Incrementing that counter means setting all those bits to 0, which can reasonably be assumed to be an O(N) operation.

The only reason we can consider an increment to be O(1) in "normal" computing is that we usually deal with fixed-size types, where we can say (for example) "At most 32 bits will need to change - that's a constant, so it's conceivably an O(1) operation."
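To make the worst case concrete, here's a small sketch (in Python, purely for illustration) counting how many bit positions change on an increment; a value whose low N bits are all 1 forces N + 1 bits to change:

```python
def bits_flipped(n):
    """How many bit positions differ between n and n + 1."""
    return bin(n ^ (n + 1)).count("1")

# A number whose N low bits are all 1 flips N + 1 bits when incremented:
for n_bits in (1, 8, 32):
    all_ones = (1 << n_bits) - 1
    print(n_bits, bits_flipped(all_ones))  # prints 2, 9, 33 respectively
```

With fixed-width types that flip count is capped by the word size, which is exactly the "it's a constant" argument above; for an unbounded counter it isn't capped by anything.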

For just a counter...

On the other hand, if you just want to be able to increment in O(1) time, you have infinite memory, and you don't care how long it takes to recover the value, you can do it, just by effectively using a linked list whose length is the counter's value.

For example, in C#:

public class DodgySolution
{
    public static DodgySolution Zero = new DodgySolution(null);

    private DodgySolution tail;

    private DodgySolution(DodgySolution tail)
    {
        this.tail = tail;
    }

    // This bit is O(1)
    public DodgySolution Increment()
    {
        return new DodgySolution(this);
    }

    // This bit isn't...
    public BigInteger ToBigInteger()
    {
        return tail == null ? BigInteger.Zero
                            : BigInteger.One + tail.ToBigInteger();
    }
}

Even this assumes that a reference assignment is O(1) though - which could become tricky with an infinite number of objects...
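The same trick can be sketched in Python (class and method names here are mine, not from the answer): each increment is a single O(1) allocation, while reading the value back walks the whole chain:

```python
class UnaryCounter:
    """Counter as a linked chain of nodes; the value is the chain's length."""
    def __init__(self, tail=None):
        self.tail = tail

    def increment(self):
        # O(1): allocate one more node pointing at the old head.
        return UnaryCounter(self)

    def to_int(self):
        # Not O(1): walks every node in the chain (iterative, to avoid
        # recursion limits that the C# recursive version would also hit).
        value, node = 0, self.tail
        while node is not None:
            value += 1
            node = node.tail
        return value

ZERO = UnaryCounter()
```

After `c = ZERO` and five calls of `c = c.increment()`, `c.to_int()` returns 5 - but that read costs as many steps as the value itself, which is the trade-off the answer describes.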

Jon Skeet
  • Wouldn't it be more like O(log N)? After all, the larger the number, the less work you have to do _in proportion_ to the "value" of the number. – voithos Feb 21 '13 at 21:05
  • you can keep the position of the rightmost `1` and rightmost `0` and do bit manipulation instead of traversing the bits – Elad Benda Feb 21 '13 at 21:05
  • @voithos: It depends whether your N is the *value* of the number or its *size* in bits. – Jon Skeet Feb 21 '13 at 21:06
  • @EladBenda: How does "do bit manipulation" allow you to flip an arbitrary number of bits in O(1)? It doesn't matter what you keep track of - you still need to be able to change an arbitrary number of bits, I suspect. – Jon Skeet Feb 21 '13 at 21:06
  • @EladBenda: Bit manipulation is only O(1) when the bit width is less than or equal to your instruction set size. – voithos Feb 21 '13 at 21:08
  • @EladBenda: See my edit though. It all depends on what you need to be O(1)... – Jon Skeet Feb 21 '13 at 21:11
  • @JonSkeet so you basically stole my comment, nice. – Sten Petrov Feb 21 '13 at 21:15
  • Average case increments are O(1) on counters that use typical representations like base2 that use O(log(n)) memory. You don't need a linked list that grows as O(n). – CodesInChaos Feb 21 '13 at 21:24
  • @StenPetrov: I hadn't seen your comment until just now. I saw an *earlier* version of it which didn't mention the linked list, but I assume you then edited it. (Either that, or I just saw the summary in the notification windows - that's another possibility.) Honestly, I didn't see that part until you pointed it out just now. – Jon Skeet Feb 21 '13 at 21:24
  • @CodesInChaos: You don't if you only care about *average* case. But this is *always* O(1), if we assume that the allocation part is always O(1). It's not like this is a realistic situation to start with :) – Jon Skeet Feb 21 '13 at 21:26
  • @JonSkeet I'm just teasing. There aren't that many ways of making "infinite" counter increment in O(1) on a Neumann machine – Sten Petrov Feb 21 '13 at 21:27
  • @StenPetrov: Goodo - I've had similar comments which *weren't* in jest :) Still, I'll see if I can think of any other entertaining approaches... variety's always good :) – Jon Skeet Feb 21 '13 at 21:28
  • see my answer below. Would appreciate comments – Elad Benda Feb 21 '13 at 22:04
  • @JonSkeet please view my answer below – Elad Benda Feb 21 '13 at 22:40
  • That's a unary counter, not a binary one. – starblue Feb 23 '13 at 12:19
  • @starblue: True - I had neglected the "binary" part. Will edit to indicate this. For the "binary" part, I stand by my first answer :) – Jon Skeet Feb 23 '13 at 12:21
3
  • Use some kind of array storage with a doubling strategy; this means allocations are amortized O(1). A linked list should work as well.
  • Use trivial schoolbook addition. Carries get exponentially rarer for higher bits. The average cost of carries is 1 + 0.5 + 0.25 + ... = 2, which is O(1).

So a straightforward implementation has amortized O(1) performance. The only issue is that you need mutable storage.

When looking at individual increment operations of a number n, then the average time is O(1) but the worst case is O(log(n)). Memory usage is O(log(n)).

var counter = new List<bool> { false };

void Inc()
{
    int i = 0;
    // Clear trailing 1 bits (propagate the carry).
    while (i < counter.Count && counter[i])
    {
        counter[i] = false;
        i++;
    }
    if (i == counter.Count)
        counter.Add(true);   // the counter grows by one bit
    else
        counter[i] = true;
}
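Translating that increment into Python (just to measure it) and counting every bit write shows the amortized bound empirically: per the geometric series above, n increments cost fewer than 2n writes in total:

```python
def inc(bits, stats):
    """Schoolbook binary increment on a little-endian list of bools."""
    i = 0
    while i < len(bits) and bits[i]:   # clear trailing 1s (the carry)
        bits[i] = False
        stats["writes"] += 1
        i += 1
    if i == len(bits):
        bits.append(True)              # counter grows by one bit
    else:
        bits[i] = True
    stats["writes"] += 1

bits, stats = [False], {"writes": 0}
for _ in range(1000):
    inc(bits, stats)

value = sum(1 << i for i, b in enumerate(bits) if b)
# value is 1000, and stats["writes"] stays below 2 * 1000
```

Individual increments still vary: going from 2^k - 1 to 2^k writes k + 1 bits, which is the O(log(n)) worst case mentioned above.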
CodesInChaos
  • You can convert amortized into worst-case complexity by doing operations lazily. Here you could keep a stack of carries to do, if you do one or two for each increment that should suffice. – starblue Feb 23 '13 at 12:57
2

If the question is only asking for an O(1) increment without any other limitations, your counter can be implemented as a linked list of numbers, where the sum of the items is the value of your counter.

Incrementing is then equivalent to adding 1 to the last item, or appending a new item equal to 1 if the last item has already reached the maximum value (Max - 1).

Since you'll always check at most 2 items in your list, incrementing will be O(1).

Just don't try doing other arithmetic with your shiny new counter :D
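A Python sketch of this design (`max_item` below is a stand-in for whatever fixed-size maximum the platform gives you): the counter is a list of numbers, its value is their sum, and incrementing touches only the last item:

```python
def increment(chunks, max_item=2 ** 63):
    """O(1): touch only the last item, or append a fresh one."""
    if chunks and chunks[-1] < max_item - 1:
        chunks[-1] += 1
    else:
        chunks.append(1)   # last item is full (or the list is empty)

def value(chunks):
    """Reading the value back is NOT O(1): sum every item."""
    return sum(chunks)

counter = []
for _ in range(10):
    increment(counter, max_item=4)  # tiny limit so chunks actually fill up
# counter is now [3, 3, 3, 1] and value(counter) == 10
```

As with the other linked-list answer, the O(1) cost is pushed from the increment into the read.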

Sten Petrov
-1

My try:

we keep a run-length encoding of the bits: each node records a digit and how many times it repeats consecutively.

meaning 111000111 (stored with one leading 0) is <1,0> <3,1> <3,0> <3,1>, listing runs from the most significant end

I can represent this with the following DS:

List of Node { digit : bool, counter : long }

1) If the least-significant run is of 1s, it turns into a run of 0s and the very next 0 becomes 1.

We now check whether adjacent runs of 1s can be aggregated.

2) If the least-significant run is of 0s, we turn its rightmost 0 into a 1, and see if we can aggregate runs of 1s.

example A:

111000111 (with its leading 0) is <1,0> <3,1> <3,0> <3,1>

reading, most significant first: one 0 digit, three 1 digits, three 0 digits, three 1 digits

increment()

<1,0> <3,1> <2,0> <1,1> <3,0>

i.e. 111000111 + 1 = 111001000

example B:

<1,0> <3,1> <1,0> <3,1> (i.e. 01110111)

increment()

<1,0> <3,1> <1,1> <3,0> (i.e. 01111000)

aggregation:

<1,0> <4,1> <3,0>

There will always be a constant number of changes (only the runs around the rightmost 0 digit are touched),

and turning a run of 1s into 0s is just toggling the boolean member, which is constant.
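The scheme can be reconstructed in Python (my own reading of the answer, so treat it as a sketch; for convenience the runs are stored least-significant first, i.e. reversed relative to the <count,digit> notation above). Each increment touches at most the two runs nearest the low end plus one insertion, so its cost doesn't grow with the counter's value:

```python
def inc(runs):
    """Increment a run-length-encoded counter.

    runs is a list of [count, bit] pairs, least-significant run first;
    an empty list represents zero.
    """
    if not runs or runs[0][1] == 0:
        # Low bit is 0: consume one 0 and emit a 1 in its place.
        if runs:
            runs[0][0] -= 1
            if runs[0][0] == 0:
                runs.pop(0)
        if runs and runs[0][1] == 1:
            runs[0][0] += 1            # aggregate with the adjacent 1-run
        else:
            runs.insert(0, [1, 1])
    else:
        # Trailing run of 1s: flip the whole run to 0s (one toggle),
        # then carry a single 1 into the next run of 0s.
        flipped = runs.pop(0)[0]
        if runs:
            runs[0][0] -= 1
            if runs[0][0] == 0:
                runs.pop(0)
        if runs and runs[0][1] == 1:
            runs[0][0] += 1            # aggregate the carried 1
        else:
            runs.insert(0, [1, 1])
        runs.insert(0, [flipped, 0])

def to_int(runs):
    """Decode the run-length encoding back to an integer (not O(1))."""
    value, shift = 0, 0
    for count, bit in runs:
        if bit:
            value += ((1 << count) - 1) << shift
        shift += count
    return value

runs = []
for expected in range(1, 201):
    inc(runs)
    assert to_int(runs) == expected
```

Note the remaining caveat from the comments: the run *counts* themselves grow without bound, so "O(1)" here assumes arithmetic on a single count is constant-time.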

Elad Benda
  • How big is the "bulk"? If that gets bigger over time, it's *not* a constant time algorithm. (I'm not sure I really understand what this does, to be honest. It would be good if you could show the bit patterns for 0-10.) – Jon Skeet Feb 21 '13 at 22:53
  • @JonSkeet If you're `not sure you understand what this does` how can you predict `If that gets bigger over time, it's not a constant time algorithm`? I added an example to my answer. Can you please explain why big bulk will be a problem? as it merely represented by – Elad Benda Feb 22 '13 at 09:20
  • The previous version of the answer had a loop in a way which didn't look like it would end up as O(1). I'll consider this further - if you could write it up as *actual code* it might be easier to think about where it would end up being long (or simply prove that it won't). My *suspicion* is that the aggregation part requires you to look over a potentially non-constant number of chunks. – Jon Skeet Feb 22 '13 at 09:25
  • @JonSkeet aggregation in each step involves 3 nodes at most. The rightmost 0 will turn into 1 and the digits to its left won't change. We will have to check for aggregation only the node where 0 became 1 and its prev and next nodes. No more than that. By induction all the others are already aggregated. – Elad Benda Feb 22 '13 at 10:27
  • Okay, I suggest you try to implement it. It sounds at least interesting enough to look at in more detail. – Jon Skeet Feb 22 '13 at 10:33
  • @EladBenda - How would this algorithm handle 1001++? It seems to me it would go from `<1,0><1,1><2,0><1,1>` to `<1,0><1,1><2,1><1,0>`, then aggregate that into `<1,0><3,1><1,0>`, which is 1110, which is wrong. You need to *split* the `<2,0>` into `<1,0><1,1>` in order for this to behave correctly and produce `<1,0><1,1><1,0><1,1>`. If I'm misapplying it, please add clarification to explain this example, including the binary-to-node conversion – Bobson Feb 22 '13 at 18:46
  • @Bobson `if the first bulk is of 1's. it turns to bulk of 0`s and turn the very next 0 to 1.` meaning: it would go from <1,0><1,1><2,0><1,1> to <1,0><1,1><1,0><1,1><1,0> which is 1010 – Elad Benda Feb 22 '13 at 19:47
  • @EladBenda - I think I see... You're performing three operations in that case. A decrement of (n-1), a swap of (n), and an insertion of `<1,1>` between them (n = last bulk). What about 10011? `<1,0><1,1><2,0><2,1>` -> `<1,0><1,1><1,0><1,1><2,0>`. Seems to work. So it's up to three operations either way. 0: `Increment(n)` or `Append(n+1)`; 1: `Decrement(n-1)`, `Flip(n)`, and `InsertBefore(n)`. – Bobson Feb 22 '13 at 20:06
  • @EladBenda - If you edit those clarifications into the answer, I'll change to an upvote. I'd do it now, but the vote is locked. – Bobson Feb 26 '13 at 17:23