I'm using the MRI Preprocessed Dataset from Kaggle for my CNN project. The dataset contains .jpg images of 128 x 128 pixels, yet several references load it with an image_size of 224 x 224 pixels. How does TensorFlow implement this resizing? Does it detect the color of a given pixel and split that pixel into several new pixels independently?
from tensorflow.keras.utils import image_dataset_from_directory

batch_size = 32
img_height = 224
img_width = 224

train_data = image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_data = image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
I have used this code, and the program runs even though the input images are 128 x 128 pixels. The results also change visibly when img_height and img_width are switched between 128 x 128 and 224 x 224, so I think the image resizing happens in this block of code.
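From what I have read, image_dataset_from_directory has an interpolation parameter that defaults to "bilinear", so I assume each output pixel is computed by blending the four nearest source pixels rather than by splitting one pixel into copies. Here is a minimal NumPy sketch of my understanding of bilinear upsampling (my own toy implementation, not TensorFlow's actual code), upscaling a 2 x 2 grayscale image to 4 x 4:

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D grayscale image with bilinear interpolation.

    Each output pixel is mapped back to fractional coordinates in the
    source image and computed as a weighted blend of the four
    surrounding source pixels.
    """
    h, w = img.shape
    out = np.empty((new_h, new_w), dtype=np.float64)
    for i in range(new_h):
        for j in range(new_w):
            # Map output coords to source coords (half-pixel centers).
            y = np.clip((i + 0.5) * h / new_h - 0.5, 0, h - 1)
            x = np.clip((j + 0.5) * w / new_w - 0.5, 0, w - 1)
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            wy, wx = y - y0, x - x0  # fractional blend weights
            out[i, j] = ((1 - wy) * (1 - wx) * img[y0, x0]
                         + (1 - wy) * wx * img[y0, x1]
                         + wy * (1 - wx) * img[y1, x0]
                         + wy * wx * img[y1, x1])
    return out

src = np.array([[0.0, 100.0],
                [100.0, 200.0]])
up = bilinear_resize(src, 4, 4)
print(up)
```

The interior output pixels land between source pixels (e.g. the value 50.0 appears where the four source values 0, 100, 100, 200 blend with equal-ish weights), which is why resized images look smooth instead of blocky.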