476

What is the difference between epoch and iteration when training a multi-layer perceptron?

mohammad

14 Answers

663

In the neural network terminology:

  • one epoch = one forward pass and one backward pass of all the training examples
  • batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
  • number of iterations = number of passes, each pass using [batch size] number of examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).

For example: if you have 1000 training examples, and your batch size is 500, then it will take 2 iterations to complete 1 epoch.
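In code, the bookkeeping looks roughly like this (a minimal sketch using the numbers above; the data, the number of epochs, and the training step are hypothetical stand-ins, not any specific library's API):

# Minimal sketch: counting epochs vs. iterations.
training_examples = list(range(1000))   # stand-in for 1000 training examples
batch_size = 500
iterations_per_epoch = len(training_examples) // batch_size   # 1000 / 500 = 2

num_epochs = 3                          # arbitrary, just to show the counting
total_iterations = 0
for epoch in range(num_epochs):         # one epoch = one pass over all examples
    for start in range(0, len(training_examples), batch_size):
        batch = training_examples[start:start + batch_size]
        # one forward pass + one backward pass + weight update on `batch` would go here
        total_iterations += 1           # one iteration = one batch processed
print(total_iterations)                 # 2 iterations/epoch * 3 epochs = 6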

FYI: Tradeoff batch size vs. number of iterations to train a neural network


The term "batch" is ambiguous: some people use it to designate the entire training set, and some people use it to refer to the number of training examples in one forward/backward pass (as I did in this answer). To avoid that ambiguity and make clear that batch corresponds to the number of training examples in one forward/backward pass, one can use the term mini-batch.

Franck Dernoncourt
  • 47
    I'm confused. Why would you train for more than one epoch - on all the data more than once? Wouldn't that lead to overfitting? – Soubriquet Oct 15 '16 at 13:35
  • 42
    @Soubriquet Neural networks are typically trained using an iterative optimization method (most of the time, gradient descent), which often needs to perform several passes on the training set to obtain good results. – Franck Dernoncourt Oct 15 '16 at 15:54
  • 3
    Hmm...so is this the reason for using early stopping and a validation set when training? – Soubriquet Oct 15 '16 at 16:03
  • 8
    But if there are a lot of training samples, say $1$ million, would just one epoch be enough? What do people typically do if the training set is very huge? Just divide the training set into batches and just perform one epoch? – pikachuchameleon Jan 09 '17 at 16:45
  • 10
    @pikachuchameleon This depends on the complexity of the task: one epoch can indeed be enough in some cases. – Franck Dernoncourt Jan 09 '17 at 17:07
  • 1
    @pikachuchameleon, I think you'll understand better if you read the wikipedia on gradient descent. As Soubriquet mentioned it's an iterative optimization method. So after only one epoch, all of the weights of a neural network model will have been updated once. At this point they will still be very close to their initialized weights, which are very unlikely to be the optimal weights. Each epoch, gradient descent takes one "step" closer to a local optimum. But to ensure convergence, these "steps" are usually set to a very small distance (taken in the gradient direction). – Max Power Feb 06 '17 at 18:56
  • 14
    @MaxPower - typically, the step is taken after each *iteration*, as Franck Dernoncourt's answer implied; that's what we do with the information from the backwards pass. In a mini-batch gradient descent with *m* iterations per epoch, we update the parameters *m* times per epoch. – dan mackinlay Feb 17 '17 at 03:14
  • 3
    @FranckDernoncourt if I am not wrong, we need to calculate the derivatives for each input in a mini batch so doesn't that mean one forward and backward pass for each input in the mini-batch instead of the one forward and one backward pass for the whole batch which you mentioned? – jbojcic May 26 '17 at 09:06
  • 1
    @jbojcic That's a good point. The term "batch" is ambiguous: some people use it to designate the entire training set, and some people use it to refer to the number of training examples in one forward/backward pass (as I did in this answer). To avoid that ambiguity and make clear that batch corresponds to the number of training examples in one forward/backward pass, one can use the term "mini-batch". – Franck Dernoncourt May 27 '17 at 19:03
  • 2
    @FranckDernoncourt not sure I understand. To be more concrete I'll try to explain on the example you provided above. So there are 1000 training examples and batch size is 500 so there will be 2 updates of the weights and biases in each epoch. But when calculating derivatives for one batch (500 training examples), for each one out of 500 training example we have to calculate derivatives (one backward pass) and activations (one forward pass). So isn't that 500 forward and 500 backward passes per batch? – jbojcic May 28 '17 at 10:16
  • 6
    @jbojcic this is well explained here: https://www.coursera.org/learn/machine-learning/lecture/9zJUs/mini-batch-gradient-descent – Ziofil Sep 20 '17 at 02:26
176

Epoch and iteration describe different things.


Epoch

An epoch describes the number of times the algorithm sees the entire data set. So, each time the algorithm has seen all samples in the dataset, an epoch has been completed.

Iteration

An iteration describes the number of times a batch of data has passed through the algorithm. In the case of neural networks, that means one forward pass and one backward pass. So, every time you pass a batch of data through the NN, you have completed an iteration.


Example

An example might make it clearer.

Say you have a dataset of 10 examples (or samples). You have a batch size of 2, and you've specified you want the algorithm to run for 3 epochs.

Therefore, in each epoch, you have 5 batches (10/2 = 5). Each batch gets passed through the algorithm, therefore you have 5 iterations per epoch. Since you've specified 3 epochs, you have a total of 15 iterations (5*3 = 15) for training.
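The same counting in a few lines of code (a sketch, assuming the dataset size divides evenly by the batch size):

dataset_size = 10
batch_size = 2
num_epochs = 3

batches_per_epoch = dataset_size // batch_size       # 10 / 2 = 5 iterations per epoch
total_iterations = batches_per_epoch * num_epochs    # 5 * 3 = 15 iterations for training
print(batches_per_epoch, total_iterations)           # 5 15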

Khon Lieu
  • 18
    Can you please explain if the weights are updated after every epoch or after every iteration? – Inherited Geek Jul 08 '17 at 11:11
  • 11
    @InheritedGeek the weights are updated after each batch not epoch or iteration. – thisisbhavin Feb 03 '18 at 14:31
  • 2
    @bhavindhedhi 1 batch = 1 iteration, isn't it? – Bee Feb 25 '18 at 18:34
  • 3
    @Bee No, take for example 10000 training samples and 1000 samples per batch then it will take 10 iterations to complete 1 epoch. – thisisbhavin Feb 28 '18 at 07:03
  • 2
    In addition to the previous comment, if your batch size is the same as the total number of training samples then 1 epoch = 1 iteration. – thisisbhavin Feb 28 '18 at 07:54
  • 9
    @bhavindhedhi I think what Bee was asking is that in your example of 10000 total samples with 1000 per batch, you effectively have 10 total batches, which is equal to 10 iterations. I think that makes sense, but not sure if that's a proper way of interpreting it. – Michael Du Apr 01 '18 at 03:52
  • @Khon What if I give a batch size that does not evenly divide the number of examples? Suppose I give a batch size of 3 for 4 epochs in your example. Then what? – Ankit Seth Jun 08 '18 at 06:29
  • 1
    @AnkitSeth then you'll end up with an extra batch that's smaller than the others. If you know the size of your dataset you'll want to specify it so that each batch is as equal as possible. – Khon Lieu Jun 08 '18 at 18:20
27

Many neural network training algorithms involve making multiple presentations of the entire data set to the neural network. Often, a single presentation of the entire data set is referred to as an "epoch". In contrast, some algorithms present data to the neural network a single case at a time.

"Iteration" is a much more general term, but since you asked about it together with "epoch", I assume that your source is referring to the presentation of a single case to a neural network.

Predictor
25

To understand the difference between these you must understand the Gradient Descent Algorithm and its Variants.

Before I start with the actual answer, I would like to build some background.

A batch is the complete dataset. Its size is the total number of training examples in the available dataset.

The mini-batch size is the number of examples the learning algorithm processes in a single pass (forward and backward).

A mini-batch is a small part of the dataset of the given mini-batch size.

The number of iterations is the number of batches of data the algorithm has seen (or simply the number of passes the algorithm has made over the dataset).

The number of epochs is the number of times a learning algorithm sees the complete dataset. Now, this may not be equal to the number of iterations, as the dataset can also be processed in mini-batches; in essence, a single pass may process only a part of the dataset. In such cases, the number of iterations is not equal to the number of epochs.

In the case of batch gradient descent, the whole batch is processed on each training pass. Therefore, the gradient descent optimizer converges more smoothly than mini-batch gradient descent, but it takes more time. Batch gradient descent is guaranteed to find an optimum if one exists.

Stochastic gradient descent is a special case of mini-batch gradient descent in which the mini-batch size is 1.
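As a rough sketch, the three variants differ only in how many examples feed each parameter update (`compute_gradients` and `apply_update` are hypothetical placeholders for the forward/backward pass and the weight update, not a particular library's API):

def run_epoch(dataset, batch_size, compute_gradients, apply_update):
    # Walk through the dataset once (one epoch), one batch per iteration.
    for start in range(0, len(dataset), batch_size):
        batch = dataset[start:start + batch_size]
        grads = compute_gradients(batch)   # one forward + one backward pass
        apply_update(grads)                # one parameter update = one iteration

# Batch gradient descent:      batch_size == len(dataset)  -> 1 iteration per epoch
# Mini-batch gradient descent: 1 < batch_size < len(dataset)
# Stochastic gradient descent: batch_size == 1              -> len(dataset) iterations per epoch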

[Figure: batch gradient descent vs. mini-batch gradient descent – comparison of batch, stochastic and mini-batch gradient descent convergence]

nikhilbalwani
  • Small correction: a batch is not the complete dataset. It represents a chunk of samples used for doing a single forward and backward pass, e.g. a batch of size 32 means that 32 samples from the dataset are used for calculating the error and applying backprop. Say we have 320 samples in the dataset; in this case we have 10 batches, each with a size of 32. – Asil Apr 25 '22 at 09:01
21

I guess in the context of neural network terminology:

  • Epoch: When your network has gone over the entire training set once (i.e., it has seen each training instance once), it has completed one epoch.

In order to define iteration (a.k.a steps), you first need to know about batch size:

  • Batch Size: You probably don't want to process all training instances in a single forward pass, as that is inefficient and requires a huge amount of memory. So what is commonly done is splitting the training instances into subsets (i.e., batches), performing one pass over the selected subset (i.e., batch), and then optimizing the network through backpropagation. The number of training instances within a subset (i.e., batch) is called the batch_size.

  • Iteration: (a.k.a. training steps) You know that your network has to go over all training instances to complete one epoch. But when you split your training instances into batches, you can only process one batch (a subset of the training instances) in one forward pass, so what about the other batches? This is where the term iteration comes into play:

  • Definition: The number of forward passes (i.e., the number of batches that you have created) that your network has to do in order to complete one epoch (i.e., going over all training instances) is the number of iterations.

For example, when you have 10,000 training instances and you want to do batching with a batch size of 10, you have to do 10,000 / 10 = 1,000 iterations to complete 1 epoch.

Hope this answers your question!

inverted_index
  • So, when I train a model with all the data in epoch=1, why do we use the data in more loops? What will change during these epochs? – Mahdi Amrollahi Nov 14 '20 at 06:43
  • @MahdiAmrollahi Generally speaking, neural methods need more than one epoch to find the optimal training point. In practice, your algorithm will need to meet each data point multiple times to properly learn it. That's why we have the concept of "epoch" here, and when epoch > 1 (let's say 2), it means that your algorithm has met each of the training data points twice. – inverted_index Jan 23 '21 at 00:24
  • Can you tell me what the difference is between steps and iterations? You describe the concept as iterations, but I have read about steps per epoch. – Hamza Jun 27 '21 at 10:12
  • @Hamza Every time that you pass a **batch** of data (i.e., a subset of the entire data), you complete one iteration/[training] step. Iteration and [training] step are identical concepts in this terminology. – inverted_index Jun 28 '21 at 14:28
15

You have training data, which you shuffle and from which you pick mini-batches. When you adjust your weights and biases using one mini-batch, you have completed one iteration.

Once you run out of your mini-batches, you have completed an epoch. Then you shuffle your training data again, pick your mini-batches again, and iterate through all of them again. That would be your second epoch.
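A sketch of that shuffle-then-iterate loop (the data and the update are stand-ins; NumPy is used only to reshuffle the indices each epoch):

import numpy as np

data = np.arange(100)          # stand-in for 100 training examples
batch_size = 10
num_epochs = 2

for epoch in range(num_epochs):
    order = np.random.permutation(len(data))       # reshuffle at the start of each epoch
    for start in range(0, len(data), batch_size):
        mini_batch = data[order[start:start + batch_size]]
        # adjusting weights and biases with `mini_batch` would be one iteration
    # all mini-batches consumed -> one epoch completed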

Milad P.
10

Typically, you'll split your training set into small batches for the network to learn from, and training proceeds step by step through these batches, applying gradient descent (backpropagated down through the layers) at each step. All these small steps can be called iterations.

An epoch corresponds to the entire training set going through the entire network once. It can be useful to limit the number of epochs, e.g. to fight overfitting.
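One common way to limit the number of epochs is early stopping on a validation set (a minimal sketch; `train_one_epoch` and `validation_loss` are hypothetical placeholders, not any particular framework's API):

def fit(model, max_epochs=100, patience=5):
    # Stop when the validation loss has not improved for `patience` epochs in a row.
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)            # runs all iterations of one epoch
        loss = validation_loss(model)     # evaluate on held-out data
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                     # stop before the model starts overfitting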

Nikana Reklawyks
10

To my understanding, when you need to train a NN, you need a large dataset that involves many data items. While the NN is being trained, data items go into the NN one by one; that is called an iteration. When the whole dataset has gone through, it is called an epoch.

36Kr
7

I believe iteration is equivalent to a single batch forward+backprop in batch SGD. Epoch is going through the entire dataset once (as someone else mentioned).

Andrei Pokrovsky
6

An epoch contains a number of iterations; that's essentially what an epoch is. Let's define an epoch as one full round of iterations over the data set in order to train the neural network.

Ilya Saunkin
5
  1. An epoch is 1 complete cycle in which the neural network has seen all the data.

  2. Say one has 100,000 images to train the model; however, memory space might not be sufficient to process all the images at once, so we split the training into smaller chunks of data called batches, e.g. a batch size of 100.

  3. We need to cover all the images using multiple batches, so we will need 1,000 iterations to cover all 100,000 images (batch size of 100 * 1,000 iterations = 100,000 images).

  4. Once the neural network has looked at the entire data set, that is 1 epoch (point 1). One might need multiple epochs to train the model (let us say 10 epochs).

rishi jain
3

An epoch is one round of iterating over the training samples, for example with the gradient descent algorithm in a neural network. A good reference is: http://neuralnetworksanddeeplearning.com/chap1.html

Note that the page has code (Python 2) for the stochastic gradient descent algorithm, which uses epochs:

def SGD(self, training_data, epochs, mini_batch_size, eta,
        test_data=None):
    """Train the neural network using mini-batch stochastic
    gradient descent.  The "training_data" is a list of tuples
    "(x, y)" representing the training inputs and the desired
    outputs.  The other non-optional parameters are
    self-explanatory.  If "test_data" is provided then the
    network will be evaluated against the test data after each
    epoch, and partial progress printed out.  This is useful for
    tracking progress, but slows things down substantially."""
    if test_data: n_test = len(test_data)
    n = len(training_data)
    for j in xrange(epochs):
        random.shuffle(training_data)
        mini_batches = [
            training_data[k:k+mini_batch_size]
            for k in xrange(0, n, mini_batch_size)]
        for mini_batch in mini_batches:
            self.update_mini_batch(mini_batch, eta)
        if test_data:
            print "Epoch {0}: {1} / {2}".format(
                j, self.evaluate(test_data), n_test)
        else:
            print "Epoch {0} complete".format(j)

Look at the code. For each epoch, we shuffle the training data and split it into mini-batches for the gradient descent algorithm. Why training over multiple epochs is effective is also explained on the page. Please take a look.

ABCD
2

According to Google's Machine Learning Glossary, an epoch is defined as

"A full training pass over the entire dataset such that each example has been seen once. Thus, an epoch represents N/batch_size training iterations, where N is the total number of examples."

If you are training a model for 10 epochs with a batch size of 6, given 12 samples in total, that means:

  1. the model will be able to see the whole dataset in 2 iterations (12 / 6 = 2), i.e. a single epoch;

  2. overall, the model will run 2 x 10 = 20 iterations (iterations per epoch x number of epochs);

  3. the loss is re-evaluated and the model parameters are updated after each iteration (see the sketch after this list for the case where the number of samples is not an exact multiple of the batch size).
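If N is not an exact multiple of the batch size, the last batch is simply smaller and the iteration count rounds up (a small sketch, assuming no samples are dropped):

import math

N = 12
batch_size = 6
num_epochs = 10

iterations_per_epoch = math.ceil(N / batch_size)        # 12 / 6 = 2
total_iterations = iterations_per_epoch * num_epochs    # 2 * 10 = 20

# With N = 13 instead, math.ceil(13 / 6) = 3: two full batches of 6 and one final batch of 1.
print(iterations_per_epoch, total_iterations)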

Divi
1

epoch

A full training pass over the entire dataset such that each example has been seen once. Thus, an epoch represents N/batch size training iterations, where N is the total number of examples.

iteration

A single update of a model's weights during training. An iteration consists of computing the gradients of the parameters with respect to the loss on a single batch of data.

As a bonus:

batch

The set of examples used in one iteration (that is, one gradient update) of model training.

See also batch size.

source: https://developers.google.com/machine-learning/glossary/

Mathieu Gemard