To train a TensorFlow model, I'm loading a custom dataset from a Google Cloud Platform bucket as follows:
import tensorflow_cloud as tfc

GCP_BUCKET = "stereo-train"

# Submit the training script to GCP on a single machine with one T4 GPU.
tfc.run(
    requirements_txt="requirements.txt",
    chief_config=tfc.MachineConfig(
        cpu_cores=8,
        memory=30,
        accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4,
        accelerator_count=1,
    ),
    docker_image_bucket_name=GCP_BUCKET,
)
import os

import numpy as np
import tensorflow as tf

kitti = "gs://stereo-train/data_scene_flow"
kitti_train = kitti + "/training/dat/data/"

img_height = 375
img_width = 1242
feature_size = 32
batch_size = 6

# List the image files in the training directory, sorted by name.
filenames = np.sort(np.asarray(os.listdir(kitti_train))).tolist()

# Build a list of image tensors by reading and decoding each file.
ds = list(map(lambda x: tf.io.decode_image(tf.io.read_file(kitti_train + x)), filenames))
But in the Google Cloud Platform console, I get the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'gs://stereo-train/data_scene_flow/training/dat/data/'
The stereo-train bucket does exist, and it contains exactly this directory hierarchy.
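My guess is that os.listdir only looks at the local filesystem, so it can't resolve a gs:// URL at all. Would switching to TensorFlow's tf.io.gfile module, which is GCS-aware, be the right direction? A rough sketch of what I have in mind (untested, and assuming the path itself is correct):

import tensorflow as tf

kitti_train = "gs://stereo-train/data_scene_flow/training/dat/data/"

# tf.io.gfile understands gs:// URLs, unlike the os module.
filenames = sorted(tf.io.gfile.listdir(kitti_train))

# tf.io.read_file can also read gs:// paths directly.
ds = [tf.io.decode_image(tf.io.read_file(kitti_train + f)) for f in filenames]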