I'm following this guide.

It shows how to download datasets from the new TensorFlow Datasets using the tfds.load() method:

import tensorflow as tf
import tensorflow_datasets as tfds

SPLIT_WEIGHTS = (8, 1, 1)
splits = tfds.Split.TRAIN.subsplit(weighted=SPLIT_WEIGHTS)

(raw_train, raw_validation, raw_test), metadata = tfds.load(
    'cats_vs_dogs', split=list(splits),
    with_info=True, as_supervised=True)

The next step shows how to apply a function to each item in the dataset using the map method:

def format_example(image, label):
    image = tf.cast(image, tf.float32)
    image = image / 255.0
    # Resize the image if required; IMG_SIZE is defined earlier in the guide
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return image, label

train = raw_train.map(format_example)
validation = raw_validation.map(format_example)
test = raw_test.map(format_example)

Then, to access the elements, we can use:

for features in train.take(1):
    image, label = features["image"], features["label"]

OR

for example in tfds.as_numpy(train):
    numpy_images, numpy_labels = example["image"], example["label"]

(note that dict-style access like this applies when the dataset is loaded without as_supervised=True; with it, each element is an (image, label) tuple)

However, the guide doesn't mention anything about data augmentation. I want to use real-time data augmentation similar to that of Keras's ImageDataGenerator class. I tried using:

if np.random.rand() > 0.5:
    image = tf.image.flip_left_right(image)

and other similar augmentation functions in format_example(), but how can I verify that it's performing real-time augmentation and not just replacing the original image in the dataset?

I could convert the complete dataset to a NumPy array by passing batch_size=-1 to tfds.load() and then using tfds.as_numpy(), but that would load all the images into memory, which is unnecessary. I should be able to use train = train.prefetch(tf.data.experimental.AUTOTUNE) to load just enough data for the next training loop.
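For reference, here's roughly what I'm aiming for (a sketch; format_and_augment is a hypothetical helper, and IMG_SIZE is the target size defined earlier in the guide):

import tensorflow as tf

def format_and_augment(image, label):
    # Same preprocessing as format_example()
    image = tf.cast(image, tf.float32) / 255.0
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    # Hoped-for behaviour: a fresh random flip on every pass over the data
    image = tf.image.random_flip_left_right(image)
    return image, label

train = (raw_train
         .map(format_and_augment)
         .batch(32)
         .prefetch(tf.data.experimental.AUTOTUNE))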

  • You may want to see [this answer](https://stackoverflow.com/a/55754700/10886420) as well, it presents data after augmentation so you can be __even more sure__ it's working (and the example is more convincing anyway). – Szymon Maszke Apr 18 '19 at 23:05

1 Answer


You are approaching the problem from the wrong direction.

First, download the data using tfds.load, for example cifar10 (for simplicity we will use the default TRAIN and TEST splits):

import tensorflow as tf
import tensorflow_datasets as tfds

dataloader = tfds.load("cifar10", as_supervised=True)
train, test = dataloader["train"], dataloader["test"]

(you can use custom tfds.Split objects to create validation datasets or other splits; see the documentation)
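For example, a weighted 80/10/10 subsplit like the one in the question (a sketch; note that newer tensorflow_datasets versions replace subsplit with string slicing such as "train[:80%]"):

splits = tfds.Split.TRAIN.subsplit(weighted=(8, 1, 1))
(train, validation, test), info = tfds.load(
    "cifar10", split=list(splits), as_supervised=True, with_info=True
)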

train and test are tf.data.Dataset objects, so you can use map, apply, batch and similar functions with each of them.

Below is an example where I will (using tf.image mostly):

  • convert each image to tf.float32 in the 0-1 range (don't use the divide-by-255 snippet from the official docs; tf.image.convert_image_dtype scales correctly based on the input image format)
  • cache() the results, as those can be re-used after each repeat
  • randomly flip each image left to right
  • randomly change the contrast of each image
  • shuffle the data and batch it
  • IMPORTANT: repeat all the steps when the dataset is exhausted. This means that after one epoch all of the above transformations are applied again (except for the ones which were cached).

Here is the code doing the above (you can change the lambdas to functors or functions):

train = (
    train
    # Convert to float32 in [0, 1]; scaling depends on the input dtype
    .map(lambda image, label: (tf.image.convert_image_dtype(image, tf.float32), label))
    # Cache the converted images so the conversion runs only once
    .cache()
    # Random horizontal flip, re-applied on every pass over the data
    .map(lambda image, label: (tf.image.random_flip_left_right(image), label))
    # Random contrast adjustment with a factor drawn from [0.0, 1.0)
    .map(lambda image, label: (tf.image.random_contrast(image, lower=0.0, upper=1.0), label))
    .shuffle(100)
    .batch(64)
    # Repeat indefinitely; the transformations after cache() are re-applied each epoch
    .repeat()
)

Such a tf.data.Dataset can be passed directly to Keras's fit, evaluate and predict methods.
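For instance (a minimal sketch; the model here is just a placeholder, and since repeat() makes the dataset infinite, fit needs steps_per_epoch):

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer labels from as_supervised=True
    metrics=["accuracy"],
)
# CIFAR-10 has 50000 training images; with batch size 64 that's ~781 steps per epoch
model.fit(train, epochs=5, steps_per_epoch=50000 // 64)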

Verifying it actually works like that

I see you are highly suspicious of my explanation, so let's go through an example:

1. Get a small subset of the data

Here is one way to take a single element; it's admittedly unreadable and unintuitive, but you should be fine with it if you do anything with TensorFlow:

# Horrible API is horrible
element = tfds.load(
    # Take one percent of test and take 1 element from it
    "cifar10",
    as_supervised=True,
    split=tfds.Split.TEST.subsplit(tfds.percent[:1]),
).take(1)

2. Repeat the data and check whether it is the same:

Using TensorFlow 2.0 one can actually do it without stupid workarounds (almost):

element = element.repeat(2)
# You can iterate through tf.data.Dataset now, finally...
images = [example[0] for example in element]  # each example is an (image, label) tuple
print(f"Are the same: {tf.reduce_all(tf.equal(images[0], images[1]))}")

And it unsurprisingly returns:

Are the same: True

3. Check whether the data differs after each repeat with random augmentation

The snippet below repeats a single element 5 times and checks which copies are equal and which are different.

element = (
    tfds.load(
        # Take one percent of test and take 1 element
        "cifar10",
        as_supervised=True,
        split=tfds.Split.TEST.subsplit(tfds.percent[:1]),
    )
    .take(1)
    .map(lambda image, label: (tf.image.random_flip_left_right(image), label))
    .repeat(5)
)

images = [example[0] for example in element]

for i in range(len(images)):
    for j in range(i, len(images)):
        print(
            f"{i} same as {j}: {tf.reduce_all(tf.equal(images[i], images[j]))}"
        )

Output (in my case; each run will be different):

0 same as 0: True
0 same as 1: False
0 same as 2: True
0 same as 3: False
0 same as 4: False
1 same as 1: True
1 same as 2: False
1 same as 3: True
1 same as 4: True
2 same as 2: True
2 same as 3: False
2 same as 4: False
3 same as 3: True
3 same as 4: True
4 same as 4: True

You could convert each of those images to NumPy as well and see the images for yourself using skimage.io.imshow, matplotlib.pyplot.imshow or other alternatives.
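For example, with matplotlib (a sketch plotting the 5 copies gathered in the snippet above):

import matplotlib.pyplot as plt

# images holds the 5 (possibly flipped) copies of the single test element
fig, axes = plt.subplots(1, len(images), figsize=(15, 3))
for ax, image in zip(axes, images):
    ax.imshow(image.numpy())  # uint8 CIFAR-10 images display directly
    ax.axis("off")
plt.show()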

Another example of visualization of real-time data augmentation

This answer provides a more comprehensive and readable view on data augmentation using TensorBoard and MNIST; you might want to check that one out (yeah, shameless plug, but useful I guess).

  • From the documentation of the map function [here](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map): This transformation applies `map_func` to each element of this dataset, and returns a new dataset containing the transformed elements, in the same order as they appeared in the input. – himanshurawlani Apr 13 '19 at 11:03
  • Indeed it does. Check __IMPORTANT:__ part I have added just now. Basically, each augmentation is applied to each part of data (single element in this case, could be batch if `batch()` was used before it, it should be faster that way) on the fly and it's returned with or without augmentation (if random). When `tf.data.Dataset` is exhausted and `repeat` is used (in order to train for multiple epochs/indefinitely) all the operations are repeated except for the ones we have cached during first pass. Does it clear the confusion? – Szymon Maszke Apr 13 '19 at 14:25
  • Okay so, how can I verify that all the operations are repeated when I use `repeat` ? – himanshurawlani Apr 14 '19 at 17:59
  • I see you don't have much faith in `tensorflow`, I can understand that. I have added an example which compares image before and after `random_flip_left_right`. You can make your own more extensive tests in this manner if you wish. – Szymon Maszke Apr 14 '19 at 20:07
  • Thanks for the example! Things are much more clear after the verification step. – himanshurawlani Apr 15 '19 at 11:31