I am trying to implement a simple GAN in Google Colaboratory. After using transforms to normalize the images, I want to display, once every batch iteration, the fake image produced by the generator next to a real image from the dataset, side by side in the same output area, like a video.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose(
    [
        # Convert a PIL Image or numpy.ndarray to tensor. This transform does not support torchscript.
        # Converts a PIL Image or numpy.ndarray (H x W x C) in the range
        # [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0].
        transforms.ToTensor(),
        # Normalize a tensor image with mean and standard deviation.
        transforms.Normalize((0.5,), (0.5,)),
    ]
)
dataset = datasets.MNIST(root="dataset/", transform=transform, download=True)
loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
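For concreteness, the pipeline above maps a pixel value p in [0, 255] to p/255 (ToTensor), and then to (p/255 − 0.5)/0.5 (Normalize), so the tensors end up in [−1, 1], not [0, 255]. The same arithmetic mirrored in plain NumPy rather than torchvision, just to show the value ranges:

```python
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.uint8)  # sample grayscale values

x = pixels / 255.0   # what ToTensor does to the value range: [0.0, 1.0]
y = (x - 0.5) / 0.5  # what Normalize((0.5,), (0.5,)) does: [-1.0, 1.0]

print(x)  # 0 -> 0.0, 255 -> 1.0
print(y)  # 0 -> -1.0, 255 -> +1.0
```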
After applying the transforms, the dataset is no longer in the range [0, 255]. How do I denormalize the images and use cv2_imshow
to show the series of real and fake images frame by frame in the same place?
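My understanding is that undoing Normalize((0.5,), (0.5,)) is just the same affine map applied in reverse (x * std + mean), then rescaling to [0, 255] for cv2_imshow. A minimal sketch of what I mean (denormalize is a name I made up; mean and std match the transform above):

```python
import numpy as np

def denormalize(img, mean=0.5, std=0.5):
    """Invert transforms.Normalize: x_norm = (x - mean) / std  =>  x = x_norm * std + mean."""
    img = img * std + mean                 # back to roughly [0.0, 1.0]
    img = np.clip(img, 0.0, 1.0)           # guard against generator outputs slightly outside [-1, 1]
    return (img * 255.0).astype(np.uint8)  # [0, 255] uint8, the range cv2_imshow expects

# e.g. a normalized value of -1.0 maps back to pixel 0, and +1.0 maps back to 255
```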
The image above shows the output I get; there are two problems here.
- The normalization rendered the image indistinguishable; it is just all black.
- The images are not displayed frame by frame in the same place like a video; instead, each one is printed on a new line.
What approach do I take to solve these issues?
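For the "in the same place like a video" part, the pattern I am considering is to clear the cell output before drawing each frame. A sketch, assuming a Colab/IPython notebook (side_by_side is a helper I made up; the display lines are commented out since they only run inside a notebook):

```python
import numpy as np

def side_by_side(real, fake):
    # Both images are (H, W) uint8 arrays in [0, 255]; concatenate horizontally.
    return np.hstack([real, fake])

# Inside the training loop, once per batch (Colab only):
# from google.colab.patches import cv2_imshow
# from IPython.display import clear_output
# clear_output(wait=True)                       # erase the previous frame in place
# cv2_imshow(side_by_side(real_img, fake_img))  # draw the new frame where the old one was
```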