
I built a simple generator that yields a tuple (inputs, targets), with only a single item in each of the inputs and targets lists. Basically, it crawls the data set one sample at a time.
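
Roughly, the generator looks like this (a simplified sketch; X and y are placeholders for my actual data source):

  import numpy as np

  def my_generator():
      # Simplified sketch: X and y are placeholders for my actual data
      # source; each yield produces a single-sample "batch".
      while True:
          for sample, label in zip(X, y):
              yield np.array([sample]), np.array([label])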

I pass this generator into:

  model.fit_generator(my_generator(),
                      nb_epoch=10,
                      samples_per_epoch=1,
                      max_q_size=1  # defaults to 10
                      )

I get that:

  • nb_epoch is the number of times the training batch will be run
  • samples_per_epoch is the number of samples trained with per epoch

But what is max_q_size for and why would it default to 10? I thought the purpose of using a generator was to batch data sets into reasonable chunks, so why the additional queue?

Seanny123
Ray

2 Answers

This simply defines the maximum size of the internal training queue, which is used to "precache" your samples from the generator. It is used when building that queue:

import queue
import threading
import time


def generator_queue(generator, max_q_size=10,
                    wait_time=0.05, nb_worker=1):
    '''Builds a threading queue out of a data generator.
    Used in `fit_generator`, `evaluate_generator`, `predict_generator`.
    '''
    q = queue.Queue()
    _stop = threading.Event()

    def data_generator_task():
        while not _stop.is_set():
            try:
                if q.qsize() < max_q_size:
                    try:
                        generator_output = next(generator)
                    except ValueError:
                        continue
                    q.put(generator_output)
                else:
                    time.sleep(wait_time)
            except Exception:
                _stop.set()
                raise

    generator_threads = [threading.Thread(target=data_generator_task)
                         for _ in range(nb_worker)]

    for thread in generator_threads:
        thread.daemon = True
        thread.start()

    return q, _stop

In other words, you have a thread filling the queue up to a given maximum capacity directly from your generator, while (for example) the training routine consumes its elements (and sometimes waits for them to become available):

while samples_seen < samples_per_epoch:
    generator_output = None
    while not _stop.is_set():
        if not data_gen_queue.empty():
            generator_output = data_gen_queue.get()
            break
        else:
            time.sleep(wait_time)

And why a default of 10? No particular reason, like most of the defaults - it simply makes sense, but you could use different values too.

A construction like this suggests that the authors were thinking about expensive data generators which might take time to execute. For example, consider downloading data over a network in the generator call - then it makes sense to precache some batches ahead of time and download the next ones in parallel, for the sake of efficiency and robustness to network errors, etc.
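
For illustration only (this is not the Keras implementation), here is a stripped-down version of the same producer/consumer idea, using a bounded Queue instead of the qsize() check above, with a hypothetical slow generator standing in for expensive data loading:

import queue
import threading
import time

def slow_generator():
    # Hypothetical expensive generator: producing one batch takes ~0.5 s
    while True:
        time.sleep(0.5)
        yield "batch"

def prefetch(generator, max_q_size=10):
    # Bounded queue: the producer blocks once max_q_size items are cached
    q = queue.Queue(maxsize=max_q_size)

    def producer():
        for item in generator:
            q.put(item)

    threading.Thread(target=producer, daemon=True).start()
    return q

q = prefetch(slow_generator(), max_q_size=2)
for step in range(3):
    batch = q.get()  # the "training loop" only waits when the cache is empty
    print(step, batch)

As long as consuming a batch takes longer than producing one, the queue stays full and training never stalls on data loading.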

lejlot
  • Ah, I see, so ideally you never halt training waiting for the generator to produce results--you have a thread filling the queue silently in the background while the model trains on the previously fetched samples. – Ray May 02 '16 at 20:07
  • Yes, that is the ideal scenario, which obviously depends on the size of the queue and the overall system design. – lejlot May 02 '16 at 20:57

You might want to pay attention when using max_q_size in combination with fit_generator. Each batch you build and yield in the generator function is treated as one single queue item, so max_q_size counts whole batches, not individual samples.

So a batch size of 1000 images and a max_q_size of 2000 will result in up to 2000 x 1000 = 2,000,000 images held in the queue, which is not healthy for your memory.

This is why a Keras training run can sometimes keep growing in memory until the process crashes.
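
A rough back-of-the-envelope check (with assumed numbers: 224x224 RGB images stored as float32, not taken from the question) shows how quickly this adds up:

# Assumed example sizes, purely for illustration
bytes_per_image = 224 * 224 * 3 * 4       # ~0.6 MB per float32 RGB image
batch_size = 1000                         # images per generator yield
max_q_size = 2000                         # queued generator outputs (i.e. batches)

queued_images = max_q_size * batch_size   # 2,000,000 images
queued_gib = queued_images * bytes_per_image / 1024**3
print(f"{queued_gib:.0f} GiB")            # ~1122 GiB, i.e. over a terabyte of RAM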

Pipper Tetsing