How can I turn this code into a generator function? Or can I do it another way that avoids reading all the data into memory? The problem right now is that my memory fills up: the process runs for a long time and then gets killed.

Code:

data = [3,4,3,1,2]

def convert(data):
    for index in range(len(data)):
        if data[index] == 0:
            data[index] = 6
            data.append(8)
        elif data[index] == 1:
            data[index] = 0
        elif data[index] == 2:
            data[index] = 1
        elif data[index] == 3:
            data[index] = 2
        elif data[index] == 4:
            data[index] = 3
        elif data[index] == 5:
            data[index] = 4
        elif data[index] == 6:
            data[index] = 5
        elif data[index] == 7:
            data[index] = 6
        elif data[index] == 8:
            data[index] = 7

    return data

for i in range(256):
    output = convert(data)
    print(len(output))

Output:

266396864
290566743
316430103
346477329
376199930
412595447
447983143
490587171
534155549
582826967
637044072
692630033
759072776
824183073
903182618
982138692
1073414138
1171199621
1275457000
1396116848
1516813106
Killed
  • In the last loop you repeat the whole procedure 256 times. Is it intended? – Alexey S. Larionov Dec 10 '21 at 13:48
  • 2
    This is a problem you need to solve with math, not brute force. You need to compute the length the list *would* have, not actually build a giant list and call `len` on it. – user2357112 Dec 10 '21 at 13:53
  • 1
    @ScottHunter: The list grows exponentially. Running out of memory is expected. – user2357112 Dec 10 '21 at 13:53
  • @ScottHunter That's not helpful, and it does make sense that memory consumption increases _a lot_. The list quickly extends up to the millions of items. – Bram Vanroy Dec 10 '21 at 13:55
  • Do you need the actual list being returned by `convert`, or just its length? – Scott Hunter Dec 10 '21 at 15:06
  • It might be possible, if you do need the actual list, to make a generator to deliver the elements of the value returned by `convert` *for a given iteration*, avoiding the need to actually store the whole list. – Scott Hunter Dec 10 '21 at 15:56
  • To clarify: This is the day 6 challenge, part 2, in Advent of Code. The above code worked for 80 iterations (days). The question is how many items are in the list after the last (256th) iteration. – andy Dec 10 '21 at 17:57
  • https://adventofcode.com/ – andy Dec 10 '21 at 18:07

3 Answers

To answer the question: to turn a function into a generator function, all you have to do is yield something. You might do it like this:

def convert(data):
    for index in range(len(data)):
        ...

        yield data

Then, you can iterate over the output like this:

iter_converted_datas = convert(data)

for _, converted in zip(range(256), iter_converted_datas):
    print(len(converted))

I also would suggest some improvements to this code. The first thing that jumps out at me is to get rid of all those elif statements.

One helpful thing for this might be to supply a dictionary argument to your generator function that tells it how to convert the data values (the first one is a special case since it also appends).

Here is what that dict might look like:

replacement_dict = {
    0: 6,
    1: 0,
    2: 1,
    3: 2,
    4: 3,
    5: 4,
    6: 5,
    7: 6,
    8: 7,
}

By the way: replacing a series of elif statements with a dictionary is a pretty typical thing to do in python. It isn't always appropriate, but it often works well.
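For instance (reusing the replacement_dict above), the entire elif chain collapses into a single lookup:

```python
replacement_dict = {0: 6, 1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 7}

value = 3
value = replacement_dict[value]  # one lookup replaces the whole elif chain
print(value)  # → 2
```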

Now you can write your generator like this:

def convert(data, replacement_dict):
    for index in range(len(data)):
        if data[index] == 0:
            data.append(8)
        data[index] = replacement_dict[data[index]]
        yield data

And use it like this:

iter_converted_datas = convert(data, replacement_dict)

for converted in iter_converted_datas:
    print(len(converted))

But we haven't yet addressed the underlying memory problem.

For that, we need to step back a second: the reason your memory is filling up is that you have created a list that grows very large very fast. And if you were to keep going beyond 256 iterations, the list would keep growing without end.

If you want to compute the Xth output for some member of the list without storing the entire list into memory, you have to change things around quite a bit.

My suggestion on how you might get started: create a function to get the Xth iteration for any starting input value.

Here is a generator that just produces outputs based on the replacement dict. Depending on the contents of the replacement dict, this could be infinite, or it might have an end (in which case it would raise a KeyError). In your case, it is infinite.

def process_replacements(value, replacement_dict):
    while True:
        yield (value := replacement_dict[value])        

Next we can write our function to process the Xth iteration for a starting value:

def process_xth(value, xth, replacement_dict):
    # emit the xth value from the original value
    for _, value in zip(range(xth), process_replacements(value, replacement_dict)):
        pass
    return value

Now you can process the Xth iteration for any value in your starting data list:

index = 0
xth = 256
process_xth(data[index], xth, replacement_dict)
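Putting those two functions together, a quick sanity check might look like this (a self-contained sketch reusing the replacement_dict from earlier):

```python
replacement_dict = {0: 6, 1: 0, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 7}

def process_replacements(value, replacement_dict):
    # endlessly map a value through the replacement table
    while True:
        yield (value := replacement_dict[value])

def process_xth(value, xth, replacement_dict):
    # emit the xth value derived from the original value
    for _, value in zip(range(xth), process_replacements(value, replacement_dict)):
        pass
    return value

print(process_xth(3, 1, replacement_dict))  # → 2
print(process_xth(8, 9, replacement_dict))  # 8→7→6→5→4→3→2→1→0→6, so → 6
```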

However, we have not appended an 8 to the data list any time we encounter a 0 value. We could do this, but as you have discovered, eventually the list of 8s would get too big. Instead, what we need to do is keep COUNT of how many 8s we have added to the end.

So I suggest adding a zero_tracker function to increment the count:

def zero_tracker():
    global eights_count
    eights_count += 1

Now you can call that function in the generator every time a zero is encountered, resetting the global eights_count to zero at the start of the iteration:

def process_replacements(value, replacement_dict):
    global eights_count
    eights_count = 0
    while True:
        if value == 0:
            zero_tracker()
        yield (value := replacement_dict[value])        

Now, for any Xth iteration you perform at some point in the list, you can know how many 8s were appended at the end, and when they were added.

But unfortunately simply counting the 8s isn't enough to get the final sequence; you also have to keep track of WHEN (ie, which iteration) they were added to the sequence, so you can know how deeply to iterate them. You could store this in memory pretty efficiently by keeping track of each iteration in a dictionary; that dictionary would look like this:

eights_dict = {
    # iteration:    count of 8s
}

And of course you can also calculate what each of these 8s will become at any arbitrary depth:

depth = 1
process_xth(8, depth, replacement_dict)

Once you know how many 8s there are added for every iteration given some finite number of Xth iterations, you can construct the final sequence by just yielding the correct value the right number of times over and over again, in a generator, without storing anything. I leave it to you to figure out how to construct your eights_dict and do this final part. :)

Rick
  • How does this address the list becoming too big to fit in memory? – Scott Hunter Dec 10 '21 at 15:07
  • @ScottHunter it doesn't. I am pretty sure that that wasn't the actual question.... I read between the lines. :) OP starts with all the "data" in memory to begin with, so. The actual complaint is the processing time. – Rick Dec 10 '21 at 15:52
  • "The problem right now is that my memory gets full." OP starts with a list of *5* numbers (which is, in fact, all the data). – Scott Hunter Dec 10 '21 at 15:53
  • Pretty sure the problem is that the program gets killed before it gets to the 256th iteration. – Scott Hunter Dec 10 '21 at 15:58
  • @ScottHunter well the deeper problem here is it is an infinite loop. It will never "complete". That's why the memory gets full. I'll point that out in the answer. – Rick Dec 10 '21 at 15:59
  • How is `for i in range(256):` an infinite loop?!? – Scott Hunter Dec 10 '21 at 16:00
  • @ScottHunter well I meant the logic is infinite, but I see your point. – Rick Dec 10 '21 at 16:04
  • @ScottHunter thanks for poking at me about this; you were absolutely correct. I had ignored the crux of the question. It is fixed now. :) – Rick Dec 10 '21 at 17:14

Here are a few things you can do to optimize it:

Instead of range(len(data)) you can use enumerate(data). This gives you access to both the element AND its index. Example:

EDIT: According to this post, range is faster than enumerate. If you care about speed, you could ignore this change.

for index, element in enumerate(data):
    if element == 0:
        data[index] = 6

Secondly, most of the if statements have a predictable pattern. So you can rewrite them like this:

def convert(data):
    for idx, elem in enumerate(data):
        if elem == 0:
            data[idx] = 6
            data.append(8)
        elif elem <= 8:
            data[idx] = elem - 1
Since lists are mutable, you don't need to return data; the function modifies it in place.
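To illustrate the in-place point with a small hypothetical helper (decrement_in_place is just an example name):

```python
def decrement_in_place(data):
    # mutates the caller's list; nothing needs to be returned
    for idx, elem in enumerate(data):
        data[idx] = elem - 1

nums = [3, 4, 3]
decrement_in_place(nums)
print(nums)  # → [2, 3, 2]
```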

TheNightHawk

I see that you ask about generator functions, but that won't solve your memory issues. You run out of memory because, well, you keep everything in memory...

The memory complexity of your solution is O((8/7)^n), where n is the number of calls to convert. This is because every time you call convert(), the data structure grows by about 1/7 of its elements (on average), since every number in the structure has (roughly) a 1/7 probability of being zero.

So the memory complexity is O((8/7)^n), hence exponential. But can we do better?

Yes, we can (assuming that the conversion function remains this "nice and predictable"). We can keep in memory just the number of zeros that were present in the structure each time we called convert(). That way, we get linear memory complexity O(n). Does that come at a cost?

Yes. Element access no longer has constant complexity O(1); it has linear complexity O(n), where n is the number of calls to convert() (at least that's what I came up with). But it resolves the out-of-memory issue.

I also assumed that there would be a need to iterate over the computed list. If you are only interested in the length, it is sufficient to keep counts of how many times each value occurs and work with those. That way you would use just a few integers of memory.
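As a sketch of that length-only idea (a hypothetical counter-based variant, separate from the Data class below): since order doesn't matter for the length, we can track how many entries hold each value and apply the conversion to the counts instead of the elements.

```python
from collections import Counter

def convert_counts(counts):
    # counts[v] = how many entries of the (virtual) list hold value v
    new_counts = Counter()
    for value, n in counts.items():
        if value == 0:
            new_counts[6] += n  # each 0 becomes a 6 ...
            new_counts[8] += n  # ... and appends an 8
        else:
            new_counts[value - 1] += n
    return new_counts

counts = Counter([3, 4, 3, 1, 2])
for _ in range(256):
    counts = convert_counts(counts)
print(sum(counts.values()))  # length the list would have: 26984457539
```

This runs in constant memory (at most nine counters), matching the final length shown in the output below.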

Here is the code:

from copy import deepcopy  # to keep original list untouched ;)

class Data:
    def __init__(self, seed):
        self.seed = deepcopy(seed)
        self.iteration = 0
        self.zero_counts = list()
        self.len = len(seed)

    def __len__(self):
        return self.len

    def __iter__(self):
        return DataIterator(self)

    def __repr__(self):
        """not necessary for a solution, but helps with debugging"""
        return "[" + (", ".join(f"{n}" for n in self)) + "]"

    def __getitem__(self, index: int):
        if index >= self.len:
            raise IndexError

        if index < len(self.seed):
            ret = self.seed[index] - self.iteration
        else:
            inner_it_idx = index - len(self.seed)
            for i, cnt in enumerate(self.zero_counts):
                if inner_it_idx < cnt:
                    ret = 9 + i - self.iteration
                    break
                else:
                    inner_it_idx -= cnt

        ret = ret if ret > 6 else ret % 7
        return ret

    def convert(self):
        zero_count = sum((self[i] == 0) for i, _ in enumerate(self.seed))

        for i, count in enumerate(self.zero_counts):
            i = 9 + i - self.iteration
            i = i if i > 6 else i % 7
            if i == 0:
                zero_count += count

        self.zero_counts.append(zero_count)
        self.len += self.zero_counts[self.iteration]
        self.iteration += 1


class DataIterator:
    """Iterator class for the Data class"""
    def __init__(self, seed_data):
        self.seed_data = seed_data
        self.index = 0
    
    def __next__(self):
        if self.index >= self.seed_data.len:
            raise StopIteration

        ret = self.seed_data[self.index]
        self.index += 1
        return ret

Here is code that tests logical equivalence against the original convert and prints the required output:

original_data = [3,4,3,1,2]
data = deepcopy(original_data)
d = Data(data)

for _ in range(30):
    output = convert(data)
    d.convert()
    print("---------------------------------------")
    print(len(output))
    assert len(output) == len(d)
    for i, e in enumerate(output):
        assert e == d[i]

data = deepcopy(original_data)
d = Data(data)
for _ in range(256):
    d.convert()
    print(len(d))

Results from after the point where your program was killed:

1516813106
1662255394 <<< Killed here
1806321765
1976596756
2153338313
2348871138
2567316469
2792270106
3058372242
3323134871
3638852150
3959660078
4325467894
4720654782
5141141244
5625688711
6115404977
6697224392
7282794949
7964320044
8680314860
9466609138
10346343493
11256546221
12322913103
13398199926
14661544436
15963109809
17430929182
19026658353
20723155359
22669256596
24654746147
26984457539
Vojtěch Chvojka